Rankings like those in U.S. News have become essential tools for grad school applicants. And for good reason: they’re some of the handiest ways to evaluate which programs are the best fits for all kinds of prospective students.
But do published rankings affect the schools themselves? Recent research suggests that they do, according to this article in The National Law Journal.
While rankings are valuable means of differentiating between schools, the study argues that they also put pressure on graduate programs to raise (or maintain) their positions on national lists. One area that can be negatively affected is class diversity:
“Administrators consistently reported that they have allocated more money toward merit-based scholarships in order to attract students with high LSAT scores, a factor that accounts for half of a school’s selectivity score. That leaves less money for need-based scholarships, which in turn can hurt student body diversity because applicants from lower income groups tend to have lower LSAT scores, the researchers found.”
It’s easy to see how this could lead to a systemic problem. Schools can raise their rankings by increasing selectivity, but selectivity itself may not be a quality that leads to any educational benefits for students. When schools are pressured to allocate resources in certain areas, choices may be made at the expense of institutional improvements.
Shifting priorities is one way schools are affected, but the study also reported more illicit efforts to manipulate rankings:
“The researchers found that some schools have employed ethically questionable tactics, such as categorizing students as part-time or probationary so their LSAT scores would not count […] Some schools cut first-year class sizes, then aggressively recruited transfer students, the study found. Other schools hired graduates on a temporary basis so they would be considered employed for the U.S. News survey.”
These tactics are pretty extreme, but they highlight an inherent tension in publishing rankings that are trusted to such a high degree. Is there a point where the rankings become so influential that they begin to undermine their own purpose of objectively measuring the quality of schools?
It seems like some of these problems can be addressed by rankings’ publishers (U.S. News, for instance, considers many factors aside from selectivity and post-graduate employment rates). And in a certain way putting pressure on schools could be a good thing. If rankings only encouraged schools to improve course offerings and student resources, it’s doubtful that anyone would cry foul.
The question is, can a rankings system be designed so well that it only rewards positive changes in schools? The findings of this study suggest that there is definitely room for improvement.