

How Reliable Are College Ranking Systems?

April 10, 2018

If you’re involved in the college applications process, you know how powerful college ranking systems are. Universities pursue high rankings to cultivate prestige and desirability; students seek admission to highly ranked schools as a sound investment for their future. While we’ve previously gone over the methodology behind some of the more popular college ranking systems in the United States, we didn’t delve into how powerful rankings can be in shaping people’s perceptions of institutional prestige.

A 2010 Bastedo and Bowman study, for example, found that future peer assessments of a college’s reputation were significantly influenced by early college rankings, even after controlling for changes in organizational quality and performance. In other words, schools have a lot to gain (and lose) from college rankings, which help determine their reputation. In the world of higher education, where prestige often acts as academic currency, this incentivizes schools to game the college ranking system, manipulating or misreporting their data to make sure they are ranked higher than their peer institutions. So before you put too much trust in college ranking systems, take a look at some of the strategies schools use that may have already influenced your perceptions.

College Rankings: Acceptance Rate Reliability

Many college ranking systems use measures of student selectivity to generate their results. Student selectivity is often determined by a school’s acceptance rate, along with the SAT scores and class ranks of its enrolled students. Schools like Harvard and Yale, with acceptance rates of 5.2 percent and 6.9 percent, respectively, are considered highly selective and sit at the top of most college rankings. The problem with acceptance rates, however, is that they are fairly easy to manipulate. By recruiting more applicants, a school can easily lower its acceptance rate, moving up a few spots in the rankings without making any improvement to the quality of education it offers.
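
To see how mechanical this lever is, consider a quick back-of-the-envelope sketch. The numbers below are entirely hypothetical; the point is simply that the acceptance rate depends only on the ratio of admits to applicants.

```python
# Hypothetical numbers: the acceptance rate is just admits / applicants,
# so growing the applicant pool lowers the rate even when the school
# admits exactly the same class.

def acceptance_rate(admits: int, applicants: int) -> float:
    """Acceptance rate as a percentage of applicants."""
    return 100 * admits / applicants

admits = 2_000  # the incoming class stays the same size

before = acceptance_rate(admits, applicants=20_000)
after = acceptance_rate(admits, applicants=40_000)  # a recruiting push doubles applications

print(f"Before: {before:.1f}%")  # Before: 10.0%
print(f"After:  {after:.1f}%")   # After:  5.0%
```

The students, the class, and the education are identical in both scenarios; only the denominator changed.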

Colleges and universities have caught on to this loophole and integrated it into their recruiting practices. According to a 2014 Hanover Research report, colleges are spending more on recruitment than ever: “One of the most notable trends in higher education branding and marketing is that institutions are dedicating far more attention to these functions than in previous years.” This trend is indicative of a broader corporatization of higher education. In fact, “many universities have hired marketing professionals from the corporate world,” devoting significant resources to creating and strengthening their brands.

It’s no accident that colleges are acting increasingly like corporations; ranking systems actively encourage this corporate mentality. Frank Bruni of the New York Times explains the phenomenon well when he writes that by pitting colleges against one another, ranking systems “[foster] the idea that schools are brands in competition with one another.” When schools are ranked according to student selectivity, they are incentivized to recruit as many applicants as possible, even those who are unlikely to be admitted.

College Rankings: Test Score Reliability

The SAT and ACT scores of enrolled students are another common measure of student selectivity. For this reason, colleges have employed creative strategies to inflate their students’ reported test scores. Northeastern University, for example, made news for its dramatic rise in the national rankings, climbing from #162 to #49 over the course of 17 years. The school’s success is attributed to its strategic focus on U.S. News and World Report’s undergraduate rankings, and one of its tactics was to stop requiring SAT scores from students attending international high schools. Because international applicants tend to score lower on the SAT, this approach allowed Northeastern to encourage applications from abroad, lowering its acceptance rate without jeopardizing its reported test scores. Northeastern is just one example of a school manipulating its data without being overtly dishonest about it.
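
A toy illustration of the arithmetic behind this tactic: if a lower-scoring group’s scores are never collected in the first place, the published average reflects only the higher-scoring submitters, even as the extra applications shrink the acceptance rate. The numbers below are invented for illustration.

```python
# Invented numbers: when one group of applicants is test-optional,
# the published average covers only the students who submitted scores.

submitted = [1400, 1350, 1450, 1380]  # applicants required to submit scores
unsubmitted = [1200, 1250, 1180]      # test-optional applicants (scores never reported)

reported_avg = sum(submitted) / len(submitted)
actual_avg = sum(submitted + unsubmitted) / len(submitted + unsubmitted)

print(f"Reported average SAT: {reported_avg:.0f}")       # 1395
print(f"Average across all students: {actual_avg:.0f}")  # 1316
```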


Other schools, however, have blatantly lied about their admissions statistics. Claremont McKenna College, Bucknell University, Emory University, and George Washington University have all made headlines for admitting that they submitted incorrect data to U.S. News, arguably the most influential college ranking system in the United States. These scandals raise an important concern about college rankings: many rely heavily on data submitted by the colleges and universities themselves, which are far from disinterested third parties. Although ranking systems often verify or supplement this data from other sources (U.S. News obtains missing data from the Council for Aid to Education and the U.S. Department of Education’s National Center for Education Statistics, for example), much of it still comes from schools that have significant stakes in the outcomes.

To make matters worse, rankings are highly ambitious projects. U.S. News, for example, ranked 1,388 colleges and universities in 2018. The sheer volume of data collected each year for these rankings makes it unlikely that any of it is fact-checked to anyone’s satisfaction.

How Can You Determine the Overall Reliability of College Rankings?

It’s impossible to keep up with every higher education scandal. And despite these controversies, you might decide that college ranking systems are still a useful starting point in your college search. So how can students and parents gauge reliability? One quick and easy way is to examine exactly what a particular ranking system measures, and whether those measurements matter to you. Student selectivity, for example, reveals almost nothing about what a school provides for its students; rather, it reveals what students do for the school. Measures of input such as acceptance rates, test scores, and class ranks tell you little about the quality of education a college offers and mostly reinforce existing institutional reputations.

Focus instead on measures of output (e.g., graduation rates, postgraduate salaries, graduates’ ability to repay student debt) or school resources (e.g., faculty quality and the student-to-faculty ratio), which tend to be better indicators of a school’s value. Finally, be wary of ranking systems that lean heavily on a school’s reputation. For U.S. News, undergraduate academic reputation accounts for 22.5 percent of a school’s total score, yet reputation is an extremely subjective and roundabout way of measuring a school’s value. What people think about a school matters less than actual statistics about the school itself.
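
To make that weighting concrete, here is a minimal sketch of how a composite ranking score might be assembled. Only the 22.5 percent reputation weight is taken from U.S. News; the other categories, weights, and school scores below are simplified placeholders, not the actual methodology.

```python
# Simplified sketch of a weighted composite score. Only the 22.5%
# reputation weight comes from U.S. News; everything else here is
# a placeholder for illustration.

WEIGHTS = {
    "reputation": 0.225,  # peer assessment surveys
    "outcomes": 0.350,    # stand-in for graduation/retention measures
    "resources": 0.425,   # stand-in for all remaining categories
}

def composite_score(scores: dict) -> float:
    """Weighted sum of category scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)

# Two hypothetical schools, identical except for reputation:
school_a = {"reputation": 90, "outcomes": 75, "resources": 70}
school_b = {"reputation": 60, "outcomes": 75, "resources": 70}

print(composite_score(school_a))  # 76.25
print(composite_score(school_b))  # 69.5 -- reputation alone creates the gap
```

Two schools with identical outcomes and resources end up more than six points apart purely on survey-based reputation, which is exactly the kind of subjectivity to watch for. If you choose to be a consumer of college rankings, make sure you’re a smart and informed one by being aware of the various ways in which schools can game college ranking systems.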