When U.S. News & World Report first published its “Best Colleges” issue in 1983, it had no idea what impact the rankings would have on college applicants (“What college rankings really measure – hint: It’s not quality or value,” theconversation.com). Since then, other magazines and newspapers have weighed in with rankings of their own. But I maintain that far greater attention needs to be paid to the criteria these rankings rely on before reaching any conclusion about what they really mean.
Perhaps the most egregious example is the use of SAT and ACT scores to rank schools. These two standardized tests do not support valid inferences about institutional quality. The first to question their predictive value was Bates College, which in 2004 released the results of its 20-year study finding virtually no differences in the four-year academic performance and on-time graduation rates of 7,000 submitters and non-submitters. Today, some 1,000 schools have instituted test-optional policies.
In light of the skyrocketing cost of tuition, a more defensible way of ranking colleges is to determine what the money buys in terms of student learning, which, after all, is ostensibly why students go to college in the first place. I think this approach also avoids penalizing liberal arts colleges, which by design emphasize the liberal arts rather than more immediately marketable subjects. According to the 2011 book “Academically Adrift,” however, college students don’t learn much because they don’t study much.
So why do rankings continue to get the coverage they do? I think it’s because Americans are obsessed with differentiation. For example, they pore over the standings of athletic teams. Everybody wants to be No. 1 in order to have bragging rights. College presidents and their boards of trustees are no different in this regard, particularly when a high ranking results in greater alumni giving. Follow the money trail and it will almost always lead to the answer.