The popularity of NCAA football continues to rise rapidly. As revenues increase, the difference between a BCS bowl berth and a non-BCS bowl berth can be millions of dollars. Thus, the process by which schools are selected to play in a BCS bowl game is very important. In this paper, we analyze one of the components of the BCS ranking system: the Coaches’ Poll.
Data from the final regular season Coaches’ Poll from 2005 through 2010 were analyzed in order to explore whether coaches were biased in their voting in three different areas: voting for their own team, voting for teams in their conference and voting for teams from Non-Automatically Qualifying (N-AQ) conferences.
Through analyzing a Coach’s Difference Score (CDS), we found that coaches had a positive bias toward their own teams; that is, they voted their own teams higher than their peers did. We also discovered that coaches tend to vote schools from their own conference higher than do coaches from outside that conference. Finally, we concluded that coaches from the six Automatically Qualifying (AQ) conferences were biased against schools from the smaller N-AQ conferences.
After discussing potential reasons why all these biases occur, several questions for future researchers to explore are put forth. Then, we make several suggestions to improve the voting process in order to make it as objective as possible.
Key Words: College Football Coaches’ Poll, Voting Bias, BCS Implications
Every year in college football, a debate occurs about which team should be ranked higher than another, and 2010 was no different. With three teams finishing their regular seasons undefeated, it was up to the Bowl Championship Series (BCS) rankings to determine which two teams would play for the national championship. While Auburn defeated Oregon in Glendale, Arizona, on January 10, 2011, and was crowned champion, fans over a thousand miles away in Fort Worth, Texas were left to wonder, “Could TCU have beaten Auburn?” Thus, the scrutiny of the BCS continues.
The BCS system was started in 1998 as a way to bring the top-two ranked teams face to face in a bowl game to determine a national champion (3). Prior to the BCS, the bowls tried to match number one versus number two, but with guaranteed conference tie-ins, such as that of the Pac 10 and the Big 10 to the Rose Bowl, it was not always possible. When the Rose Bowl relented, the BCS was born. According to the official BCS website, “The BCS is managed by the commissioners of the 11 NCAA Football Bowl Subdivision (“FBS”) (formerly Division I-A) conferences, the director of athletics at the University of Notre Dame, and representatives of the bowl organizations. The conferences are the Atlantic Coast, Big East, Big Ten, Big 12, Conference USA, Mid-American, Mountain West, Sun Belt, Pacific-10, Southeastern and Western Athletic” (3).
As of 2005, the BCS standings are determined by averaging three different rankings: the Harris Poll, computer rankings and the Coaches’ Poll. The Harris Poll is run by a marketing research firm, Harris Interactive, and is “comprised of 114 former college football players, coaches, administrators and current and former members of the media…randomly selected from among more than 300 nominations” (10) from the FBS. The final computer ranking used is an average of the rankings from six different firms/individuals that mathematically calculate a team’s ranking based on wins, strength of schedule, etc. (3). The Coaches’ Poll is run by USA Today and the American Football Coaches Association (AFCA) and is made up of approximately 60 coaches; 50% of the coaches in each conference are randomly selected to vote (18).
This research explores one component of the BCS: the Coaches’ Poll. In particular, we investigate to what extent coaches have been biased in their voting. Bias, as defined herein, is considered to be present when a coach ranks a team significantly differently than the other voting coaches in the poll do. Why is this important? With teams often being separated by a few tenths of a point in the BCS standings, ensuring the integrity of the rankings is critical. The BCS standings can determine a team’s bowl game and/or a coach’s bonus. For example, Iowa coach Kirk Ferentz received a $225,000 bonus for finishing in the Top 10 BCS rankings in 2009, and another $175,000 bonus for playing in the BCS Orange Bowl that season (8). In 2010, the BCS bowl payout was 17 million dollars (6) with the non-BCS bowl payout being much less (e.g., the 2010 Capital One Bowl had the highest non-BCS bowl payout of 4.25 million dollars (6)). So, biased decisions may affect not only the coaches who make these decisions, but other coaches and universities as well.
Prior to 2005, the coaches’ votes were not made public. Then, in response to added pressure for transparency, all FBS coaches voted to make the final regular season Coaches’ Poll public, agreeing to have the ballots published in USA Today (7). However, the decision was not unanimous. According to Texas coach Mack Brown, who was initially not in favor of making the votes public, “It can put coaches in a difficult situation” (7). How did the first year of public voting go? According to Sports Illustrated writer, Stewart Mandel, it was “the equivalent of a high school student-council election” with “Oregon coach Mike Bellotti, his team about to be squeezed out of the BCS by Notre Dame, placing the Ducks fourth and the Irish ninth,” and “Arkansas coach Houston Nutt ranking SEC rival Auburn third and Big East champion West Virginia … nowhere.” (14). Even Coach Steve Spurrier of the University of South Carolina has questioned the validity of the Coaches’ Poll remarking, “I guess we vote ‘cause college football is still without a playoff system. I really believe most coaches do not know a whole lot about the other teams” (9).
With increasing scrutiny of the coaches’ voting patterns, the AFCA hired The Gallup Organization in early 2009 to analyze the coaches’ voting and make recommendations. “The perception is that there’s a huge bias, and we’ve never really found that,” claimed former Baylor coach and current AFCA Director Grant Teaff (2). One of Gallup’s key recommendations was to make the coaches’ final regular season votes private. However, after seeing the response to a USA Today poll of over 4,000 readers that found 79% of fans felt the coaches’ final regular season votes should remain public as “it is important they are accountable,” (20) the AFCA put the decision to a vote of all FBS head coaches, and the results indicated that the final regular season votes should remain transparent. Consequently, the AFCA changed their mind and kept the final regular season votes public in 2010 (1).
Even with the continued visibility of the voting, one thing remained consistent in 2010: scrutiny. For example, in the final vote of 2010, one coach returned his ballot with TCU ranked number one, which is against the AFCA rules (the AFCA instructs every coach to list the winner of the BCS National Championship Game as the top ranked team) and two other coaches in the poll failed to turn in their ballots at all (18). While the final votes of each coach are not made public, these types of mishaps still fuel the debate: Should coaches have a part in the BCS rankings?
Previous researchers have discovered that individuals can be biased towards others in society (11, 12), and that people can also be biased when voting (16, 17). Specifically, researchers have examined voter bias in college football polls. For instance, Coleman et al. (5) concluded that voters in the 2007 Associated Press college football poll were biased in a number of different ways, including voter bias toward teams in their home state. In another study, Campbell et al. (4) discovered that “the more often a team is televised, relative to the total number of own- and opponent-televised games, the greater the change in the number of AP votes that team receives,” (p. 426) when they analyzed the AP votes from the 2003 and 2004 college football seasons. A study by Paul et al. (15) also examined AP voting bias, but included coaches’ voting as well. Their research looked at both of these polls from the 2003 season, and they determined that the spread or betting line on a game is “shown to have a positive and highly significant effect on votes in both polls. A team that covers the point spread will receive an increase in votes in both polls. A team that wins, but does not cover the point spread, will lose votes” (15, p. 412). In 2010, Witte and Mirabile (21) extended the literature by examining several seasons of Coaches’ Poll data, and they concluded that voters tended to “over-assess teams who play in certain Bowl Championship Series (BCS) conferences relative to non-BCS conferences” (p. 443).
While research on the voting bias in the college football polls exists, few researchers have investigated the bias in the Coaches’ Poll to any great depth. Hence the purpose of this research is to determine if college football coaches are biased when they vote and, if they are, what kind of biases they hold. Specifically, we look at three areas of potential bias put forth in the following null and alternate hypotheses:
H1o: Own-School Bias – Coaches do NOT rank their own teams significantly different than other coaches voting.
H1a: Own-School Bias – Coaches do rank their own teams significantly different than the other coaches voting.
H2o: Own-Conference Bias – Coaches do NOT rank teams within their conference significantly different than coaches from outside the conference vote those same teams.
H2a: Own-Conference Bias – Coaches do rank teams within their conference significantly different than coaches from outside the conference vote those same teams.
H3o: N-AQ Bias – Coaches from schools in the AQ conferences do NOT rank N-AQ teams significantly different than the N-AQ coaches voting.
H3a: N-AQ Bias – Coaches from schools in the AQ conferences rank N-AQ teams significantly different than the N-AQ coaches do.
Combining the three hypotheses, a model for Coaches’ Voting Bias is shown in Figure 1.
The first hypothesis investigates whether coaches can be objective when ranking their own teams. Do coaches rank their own teams about the same as other coaches rank that team, or, do coaches tend to over-estimate their own team’s ranking? The second hypothesis explores whether coaches rank teams in their own conference impartially. Many times a team’s quality of wins and losses can impact the perception of how good they are, and if a coach makes teams in their conference look superior to those from other conferences, perceptions of the strength of their own team may increase. Our final hypothesis examines what is commonly called “big school bias.” Namely, do coaches from the six traditional power conferences that have automatic qualification tie-ins with the BCS (AQ teams) tend to underestimate the strength of teams from the smaller conferences, the winners of which do not automatically qualify for a BCS bowl (N-AQ teams)?
The sample for this research was the final regular season coaches’ ballots for the 2005 through the 2010 college football seasons published in the USA Today Coaches’ Poll. In each of these years, a coach who is selected to vote ranks his top 25 teams by awarding 25 points to his top ranked team, 24 points to his second ranked team and decreasing in a similar manner until the 25th ranked team is awarded a single point. Appendix A lists the various coaches and the years during which each has been a member of the USA Today Coaches’ Poll. Table 1 aggregates the data by conference.
Because the number of coaches who vote each year varies slightly, a simple linear transformation of the total point system was employed herein by calculating a voter’s “difference score”: the points the voter awarded a team minus the average number of points that team received per voter. For instance, in 2008, the #20 Northwestern Wildcats received a total of 334 points and 61 coaches voted in that poll. So, Northwestern’s average points per coach is calculated as 334/61 = 5.475. In that poll, Coach Bret Bielema of Wisconsin gave Northwestern 8 points. Thus, his difference score would be 8 – 5.475 = 2.525. On the other hand, Coach Art Briles of Houston gave Northwestern only 4 points that year, resulting in a difference score of 4 – 5.475 = -1.475.
In general, a positive difference score suggests that a coach ranked a team higher than that team’s average score, while a negative difference score indicates that a coach ranked a team lower than its average score. A small difference score represents a case where a coach has ranked a team very close to the average ranking of his peers. In contrast, a large difference score would suggest a coach disagreed with his peers about where a team should be ranked. The 25 difference scores for each individual coach sum to zero each year, as every time a coach votes a team higher than his peers do, he must vote another team (or a combination of teams) lower. Likewise, when all the coaches’ difference scores for a single team are summed, the total is zero (i.e., for every coach who votes a team higher than its final average, there must be a coach, or combination of coaches, who votes that team lower). Thus, the key unit of analysis in this study is a term we have labeled the Coach’s Difference Score or CDS.
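The difference-score computation is simple enough to express directly. The following Python sketch reproduces the 2008 Northwestern example described above; the function name `difference_score` is ours, introduced only for illustration:

```python
# Coach's Difference Score (CDS): points a coach awarded a team minus
# that team's average points per voting coach.

def difference_score(points_awarded, team_total_points, num_voters):
    """Points the voter awarded minus the team's average points per coach."""
    return points_awarded - team_total_points / num_voters

# 2008 example from the text: #20 Northwestern, 334 total points, 61 voters.
avg = 334 / 61                            # ~5.475 points per coach
bielema = difference_score(8, 334, 61)    # Wisconsin's Bret Bielema gave 8
briles = difference_score(4, 334, 61)     # Houston's Art Briles gave 4

print(round(avg, 3), round(bielema, 3), round(briles, 3))  # 5.475 2.525 -1.475
```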
One thing to note about the Coaches’ Poll is just how few points can separate teams. Roughly seventeen percent of the point differences between two adjacent positions in the poll were fifteen points or fewer. In fact, on fifteen occasions, an average of 2.5 per year, fewer than six total points separated two teams, including in 2008, when a single point separated #1 Oklahoma (1,482 points) and #2 Florida (1,481 points).
In order to test Hypothesis 1, which explores whether there are any discernible patterns in how coaches rank their own schools, we employed a simple t-test. If there are no significant biases (as a collection), coaches who vote on their own teams will have a mean CDS of zero (i.e., for every coach who ranks his team higher than his peers, a corresponding coach would rank his team lower than his peers). However, if coaches tend to err consistently to one side or the other of their peers, then the mean difference score for those coaches will differ from zero. We tested each of the six years individually and collectively. The results are summarized below in Table 2.
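As a sketch, the one-sample t-test just described can be run with SciPy. The CDS values below are hypothetical placeholders for illustration only, not the study’s data:

```python
# One-sample t-test for H1: is the mean own-team CDS different from zero?
# The CDS values here are hypothetical, for illustration only.
from scipy import stats

own_team_cds = [3.1, 1.8, 2.5, 0.9, 2.2, 3.4, 1.1, 2.8]  # hypothetical

# Under the null hypothesis of no own-team bias, the population mean is 0.
t_stat, p_value = stats.ttest_1samp(own_team_cds, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A significant positive t-statistic, as the paper reports for every year, indicates coaches systematically rank their own teams above the consensus.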
As illustrated in Table 2, in all six years the mean CDS, which ranged from a low of 1.61 (in 2008) to a high of 3.12 (in 2007), was significantly different from zero at the p < .01 level. This result leads us to reject the null hypothesis that there is no bias in the way coaches vote their own school. The result indicates that coaches do tend to rank their own teams significantly higher than do other coaches. Over the entire sample, coaches, on average, ranked their own team 2.32 positions higher than did their peers. We explored this result further by performing an ANOVA test across the six periods to see if any one year’s bias was significantly higher or lower than the others’. The ANOVA result was not significant (F = 1.083, p = .373), leading us to conclude that there is no statistical evidence to suggest that the bias changes from one year to the next. While this result might seem trivial on the surface, it tells us that no matter how much the composition of the voting group changes (e.g., only eight of the sixty-two coaches who voted in 2005 were still voting in 2010), the coaches vote in a fairly homogeneous way when it comes to ranking their own teams.
In order to test Hypothesis 2 for within-conference bias, we assessed the CDS for each voting coach with regard to his respective conference members. To control for own-school bias, we did not include a coach’s own team in the analysis. We tested for this own-conference bias for individual seasons, as well as collectively for the entire span of years examined. We employed the same set-up and methodology (i.e., a t-test) that we used to test H1. Table 3 reveals the results of these t-tests.
All of the t-test results were statistically significant (p < .001) leading us to reject the null hypothesis. This result suggests that coaches do rank their own conference members higher than do coaches outside the conference. While the CDS overall mean of 1.19 might seem small, keep in mind that some conferences have as many as seven voting members, and others as few as three, which could lead to an average favorable bias of nearly 5 points (1.19 * (7 – 3)) for the teams in a conference with seven voting members.
To explore this bias further, we conducted two additional ANOVA tests: the first to discover whether the mean CDS had changed across the six years, and the second to determine whether any one conference’s coaches have a higher CDS than those of other conferences. With an F-statistic of 4.286, the first ANOVA test was significant at the p < .001 level. A post-hoc Tukey test (p < .05) indicated that own-conference bias was significantly higher in 2007 (with a mean of 1.95) than in each of the other years. This result might be explained by noting that, in 2007, only two teams from the traditional AQ conferences, Ohio State and Kansas, had one loss or fewer, while a total of ten teams had two losses, potentially making it very difficult to sort out which schools rounded out the Top 10. As a result, coaches ranked the schools with which they were familiar (their own-conference schools) higher than other schools. That year was the only one within our sample range with such a grouping of teams with similar records. Sixteen different teams received Top 10 votes that year, the largest number in any year of our sample.
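The year-over-year comparison can be sketched as a one-way ANOVA in SciPy. The per-year CDS samples below are hypothetical placeholders, not the study’s data; in practice a post-hoc Tukey test would then compare individual pairs of years:

```python
# One-way ANOVA: does the mean own-conference CDS differ across years?
# All CDS samples below are hypothetical, for illustration only.
from scipy import stats

cds_by_year = {
    2005: [1.2, 0.8, 1.5, 1.1],
    2006: [1.0, 1.4, 0.9, 1.3],
    2007: [2.1, 1.8, 2.3, 1.6],  # the year the text flags as elevated
}

# f_oneway takes one sample per group and tests equality of group means.
f_stat, p_value = stats.f_oneway(*cds_by_year.values())
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```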
We then turned our attention to analyzing own-conference bias broken down by conference membership. Table 4 gives the descriptive statistics for each conference. The ANOVA analysis leads us to reject the null hypothesis of equal bias across conferences (F = 4.286, p < .001). Post-hoc Tukey comparisons reveal that the own-conference bias of voters from the WAC was significantly greater than the bias of coaches from the ACC, Big 10, Big 12, C-USA, MAC, Pac 10 and SEC (p < .01). The primary beneficiaries of this effect were Hawaii in 2007, with the four WAC coaches voting Hawaii an average of 5.27 positions higher than non-WAC coaches, and Nevada in 2010, with the four WAC coaches voting Nevada an average of 5.4 positions higher than the non-WAC coaches.
Our final hypothesis, H3, investigates whether bias exists in the way coaches from AQ conferences, whose champions automatically qualify for a BCS spot, vote versus the way coaches from conferences that do not have automatic tie-ins, referred to as N-AQ conferences, vote. The six AQ conferences include: ACC, Big 10, Big 12, Big East, SEC and Pac 10. The remaining five conferences are categorized as N-AQ: C-USA, MAC, MWC, Sun-Belt and WAC. Ultimately, we are testing what many journalists call “big school bias”–whether or not coaches from the six AQ conferences are biased against the smaller N-AQ schools. To test for this bias, we assessed how coaches from the AQ schools ranked the N-AQ schools, compared to how coaches from the N-AQ schools ranked the other N-AQ teams. If there is no bias present, the means of the CDS of the two groups of coaches would be equal to each other. In order to control for the previous bias that we have demonstrated, we removed how an N-AQ coach votes on his own team and teams within his conference. For example, for Gary Patterson, the head coach of Texas Christian University (TCU), we analyzed his voting record for all the schools from C-USA, MAC, Sun-Belt and WAC, but did not include his voting record on schools from the MWC, the conference that Patterson’s TCU team played in during this time period, or his voting record for TCU. We performed this test on each of the six years, individually, as well as collectively. The results are presented below in Table 5.
In five of the six years, there was a statistically significant bias (p < .05). The largest amount of bias occurred in 2007, when AQ coaches ranked N-AQ teams an average of 1.92 spots below the positions assigned them by N-AQ coaches. The only year without significant bias was 2009. An investigation of this year showed a couple of possible explanations. First, in two of the three previous years, N-AQ teams had significant BCS bowl wins. For example, after the 2006 regular season, Boise State defeated then-#8 Oklahoma in the Fiesta Bowl, and after the 2008 regular season, Utah beat then-#4 Alabama in the Sugar Bowl. Second, during the first few weeks of the 2009 season, when teams generally play out-of-conference games, several teams from N-AQ conferences had wins over good teams from AQ conferences: TCU beat a Clemson team that would win nine games and go on to win their division in the ACC, Boise State beat the #16-ranked Oregon Ducks, a team that won the Pac 10 and went on to play in the Rose Bowl, and BYU beat then-#3 Oklahoma. These high-profile wins may have played a significant role in reducing the bias against N-AQ teams.
When the six-year period is looked at collectively, the AQ coaches ranked the N-AQ teams 0.80 places lower than the N-AQ coaches ranked those same teams (p < .001). While this result might, at first, seem like a small margin, recall that as Table 1 shows, in an average year, there are 10.5 more AQ coaches than N-AQ coaches voting—and the resulting bias can thus have a significant effect on the overall point totals and rankings.
This study demonstrated that coaches who are selected to vote in the USA Today Coaches’ Poll are subject to at least three different kinds of bias. First, coaches are biased toward their own teams. On average, coaches rank their own school 2.32 positions higher than do their peers. Indeed, the effect is so prevalent that 92.1% (or 82 out of 89) of coaches whose school finished in the top 25 ranked their own school higher than the average of the other coaches. In two years, 2007 and 2010, every single coach ranked his team higher than its final position. Twenty-eight of the 89 coaches (31.5%) ranked their school at least three positions higher than the average of their peers, and 11 of 89 (12.4%) voted their team at least five positions higher. One coach even voted his team 9.71 positions higher than the average of the other coaches’ rankings. In contrast, the maximum amount a coach ranked his team lower than did his peers was 1.18 positions. This bias seems to be a natural phenomenon. Social psychologists have extensively studied the concept of illusory superiority (11, 12), which describes how individuals view themselves as above average in comparison to their peers.
The second form of coaches’ bias found was bias toward their own conference. Over the six-year period from 2005 to 2010, coaches voted their conference members 1.19 positions higher than their average ranking. Representative examples of this type of bias are worth discussing. For example, in 2009, Mississippi received 87.5% of their total points from SEC conference coaches, who made up less than 12% of the voters. Similarly, in 2008, the Iowa Hawkeyes received 62% of their votes from Big Ten coaches, who made up less than 10% of the voting population. Further evidence of this effect is apparent when you compare two teams that finished very close in the rankings. For example, in 2009, Oregon and Ohio State were #7 and #8, respectively, in the poll, and they were separated by only 19 points. All five Pac 10 coaches voted Oregon ahead of Ohio State, while four of the five Big 10 coaches voted Ohio State ahead of Oregon (interestingly, Jim Tressel, the coach of the Buckeyes, was the only Big 10 coach to put Oregon ahead of Ohio State). A similar phenomenon happened in 2010 when Oklahoma and Arkansas were tied for the #8 ranking. Six of the seven Big 12 coaches voted Oklahoma higher, while five of the six SEC coaches voted Arkansas higher.
When comparing the bias across conferences, the WAC was found to be the most biased, voting its own teams an average of 2.97 places higher. Perhaps this is due to the WAC coaches trying to overcompensate for the perceived bias that other voting coaches have against this N-AQ conference. In 2007, Hawaii went undefeated, yet finished #10 in the overall rankings behind seven teams from AQ conferences that had two losses, in large part due to the voting of AQ coaches. A similar result occurred in 2006 when Boise State went undefeated and finished #9 in the rankings behind three AQ teams with two losses.
The third form of bias discovered in this research was that of coaches toward N-AQ conference teams. Looking more closely at the numbers shows that while this bias does exist, it seems to be diminishing over time. The effect was at its highest in 2007 when AQ coaches ranked N-AQ teams 1.92 positions lower than did the N-AQ coaches. As previously mentioned, there was no significant difference in the CDS with regard to N-AQ teams in 2009, and this might be due to some significant wins N-AQ schools have had over AQ schools in recent years. Moreover, it will be interesting to see if the N-AQ bias is further reduced in the 2011 season after TCU’s defeat of Big 10 Champion Wisconsin in the 2011 Rose Bowl, which led the Horned Frogs to a #2 ranking in the final standings—the highest for any N-AQ team during the period we surveyed. The Rose Bowl win brought the N-AQ teams to a very impressive five wins to two losses in their BCS Bowl appearances. Their 71.4% winning percentage is higher than that of any of the AQ conferences.
There are a number of limitations to this study. For one, the data were limited to the final regular season USA Today Coaches’ Poll, as that is the only data made public. If more data are provided in the coming years, future researchers will be able to investigate whether coaches’ bias varies throughout the season. Greater availability of data would also allow researchers to use more sophisticated techniques, such as time-series analysis or logistic regression. Secondly, the sample sizes for some of our subgroups were rather small. For example, because only one MAC team made the top 25 rankings, only five MAC coaches’ votes were used to assess the MAC’s own-conference bias. As more data are collected over time, and possibly more MAC teams make the top 25 poll, future researchers can replicate this study on a larger sample.
There are many fruitful areas remaining for future researchers to continue exploring bias in the Coaches’ Poll. Researchers can analyze where bias is the strongest: are coaches most biased when ranking teams in the top third of the standings, the middle third, or the bottom third? Previous research has shown that TV exposure impacts how media members vote (4); future researchers can determine whether it has any effect on the way coaches vote. The 2011 and 2012 seasons will see a shift in conference membership. Future researchers can attempt to discover what effect this has on bias. One particular study could examine Utah and TCU, two N-AQ teams who are moving to AQ conferences, the Pac 10 and the Big East, respectively. Will AQ coaches now see these teams as AQ teams, or will they continue to see them, and thus penalize them, as N-AQ teams? Finally, one last promising area for exploration involves gathering the coaches’ opinions on the subject. Do coaches think that they themselves are biased? Do they think their colleagues are biased? And if coaches do think other coaches are biased, do they try to compensate for it?
One thing is certain. The current BCS system has flaws, which leads to frequent fan and media criticism. While every system, including a playoff, has advantages and disadvantages, the BCS should continually evaluate itself in an effort to make improvements. If it does not, the scrutiny will only increase over time. For example, Wetzel, Peter and Passan’s 2010 book, Death to the BCS, has garnered much attention in the media. The authors refer to the BCS as an “ocean of corruption: sophisticated scams, mind-numbing waste, and naked political deals” (19). In fact, after reading this book, Dallas Mavericks’ Owner, Mark Cuban, formed his own company in late 2010, in an effort to create a play-off system that would challenge the BCS in the future (13).
In our opinion, BCS officials should consider making several changes. For one, they should use an email-based ballot to make it easier for coaches to vote, instead of the antiquated phone-in ballot system currently used. Moreover, they should not require all ballots to be turned in so soon after the weekend games. Coaches simply do not have enough time to thoroughly analyze all of the teams within 24 hours of finishing their games. The BCS could consider moving the voting deadline to later in the week. Secondly, coaches should not be allowed to vote for their own team; if this rule were implemented, own-school bias would be eliminated. These last two recommendations are not new; both were made by Gallup when they were hired by the AFCA to examine the Coaches’ Poll in 2009 (20). While the AFCA decided not to implement them, we feel that given the dollars involved in the BCS rankings, these would be easy improvements to the system. Lastly, why not let every FBS coach vote? Normally, a sample or subset of the population is used due to the expense of a census. But, in this case, the population is not very large, with only 120 FBS coaches, nor is the process very complex or time-consuming. By allowing all coaches to vote, it may help reduce the amount of own-school benefit that about half of the teams are currently receiving. Moreover, as most conferences are roughly the same size, this measure would also help reduce the disparity in the number of voters from each conference, thus minimizing the effect of own-conference bias.
Overall, our research has highlighted some important issues with the Coaches’ Poll. Bias in voting has occurred in the political arena in many different forms (16), and researchers have discovered that the amount of information voters possess can impact voting preferences (17). Perhaps the AFCA could apply this insight by sharing our research results with the voting coaches. If the coaches were to see how much bias occurs and the different forms it takes in the voting, they might be encouraged to vote more objectively.
Under the current system, we found three different forms of bias present in the USA Today Coaches’ Poll: bias toward own-team, bias toward own-conference and bias toward teams in N-AQ conferences. These are significant findings as the Coaches’ Poll is an important part of the BCS standings that accounts for one-third of the BCS formula; a formula that, in turn, can mean the difference between a team going to a bowl with a payout of $17 million versus a fraction of that amount.
This research has several applications for those in sport. For one, BCS and college football administrators now have a better understanding of the biases that coaches employ (intentionally or unintentionally) when voting. Hopefully, some changes, as suggested in our Conclusion section, can be made to improve the process. In addition, sport management researchers and students can continue to analyze the numbers in the future to investigate other forms and levels of bias, now that this study has provided a framework, namely the CDS, as a basis for voting comparisons.
A Model of Coaches’ Bias in Voting
Voter Composition by Conference
T-Test Results of Own-Team Bias
Year | Mean Difference | Std. Error | df | t-stat | Significance
T-Test Results of Own-Conference Bias
Year | Mean Difference | Std. Error | df | t-stat | Significance
Descriptive Statistics for Own-Conference Bias
Conference | N | Mean Difference | Std. Error
T-Test Results of AQ vs. N-AQ Bias
Year | AQ Mean | N-AQ Mean | Difference | t-stat | Significance
Coach Composition of Coaches’ Poll
Team Rankings in the Coaches’ Polls Analyzed
Assistant Professor, College of Business
University of Dallas
Associate Professor, College of Business
University of Dallas
Scott Wysong, Ph.D.
Associate Professor, College of Business
University of Dallas
1845 E. Northgate Dr.
Irving, TX 75062