The Economist has released its annual international ranking of full-time MBA programs, with data collected during spring 2016. Two surveys were used: the first, completed by schools with eligible programs, covered quantitative measures and accounted for 80% of the ranking; the second, a qualitative survey completed by current MBA students and each school's most recent graduating class, accounted for the remaining 20%.
The factors included:
• Percentage of graduates receiving a job offer within three months of graduation
• Student assessment of their program’s career services
• Quality of faculty
• Diversity of students
• Overall educational experience
• Post-MBA salary
• Percentage increase between pre- and post-MBA salary
• Breadth of alumni network
A total of 100 schools were ranked by The Economist. Seven of the top 10 (ranked 1-7) were schools in the United States. Here are the top 10 ranked schools:
I find The Economist ranking useful insofar as it highlights the flaws in rankings better than most. As indications of educational quality – their supposed purpose – rankings are really only useful if your criteria for choosing programs exactly match those of the publication presenting them. Since published rankings almost never match an individual applicant's criteria, or at least a thoughtful applicant's criteria, their value is questionable. Their popularity and their ability to attract eyeballs, online and off, are undeniable.
Regarding the specific failings of The Economist ranking, John Byrne of Poets&Quants wrote an in-depth critique. I won't repeat it here, but here are a few of the more significant points, some from me and some from P&Q.
1. The criteria are a potpourri of 21 metrics, including the student satisfaction survey (20%). Again, unless your own criteria match The Economist's methodology and its weighting of those factors, the ranking doesn't do you a lot of good. And of course surveys are intrinsically subjective and prone to gaming.
2. The Economist doesn't let you see the differences among the schools or the breakdown of the individual criteria. The lack of transparency contributes to the lack of usefulness. You can't select your own criteria, and you can't see whether two schools are smidgens or galaxies apart.
3. The volatility of the index calls into question its validity and indicates a certain arbitrariness in the criteria. Schools change slowly, yet there were several roller-coaster-like swings in the rankings: ESADE dropped 33 places, Florida Hough climbed 21, and SMU Cox climbed 16. Stanford is up 8 to #5, which means that last year it was #13. Huh?
4. Some of the placements simply don't make sense. For example, the University of Queensland in Australia is #10, ahead of Columbia (#11), Wharton (#12), INSEAD (#13), Yale (#15), MIT Sloan (#17), and London Business School (#25). While applicants who would make that choice may exist, in my 20+ years as an MBA admissions consultant I have never had a client prefer Queensland over the programs it allegedly bests.
To The Economist’s credit, its methodology includes the following warning:
Results of rankings can be volatile, so they should be treated with caution. The various media rankings of MBA programmes all employ a different methodology. None are definitive, so our advice to prospective students is to understand the ethos behind each one before deciding whether what it is measuring is important for you.
I couldn’t agree more.
Are you using the rankings correctly? Click here to find out.
By Linda Abraham, president and founder of Accepted and co-author of the definitive book on MBA admissions, MBA Admission for Smarties: The No-Nonsense Guide to Acceptance at Top Business Schools.