Rankings of educational programs, hospitals, physicians, cars, and other goods and services have become increasingly popular during the past 20 years. There is a robust market for various rankings because of the difficulty students, patients, and other consumers have in getting sufficient information about the numerous attributes being offered, such as quality of other students and faculty, size of classes, earnings of graduates, or mortality rates of hospital patients. I believe that on the whole rankings convey useful information about quality, although there are obvious problems in getting reliable rankings.
Perhaps the most serious problem with rankings is that institutions "game the measure." If the ratio of admissions to acceptances were used, then, as Posner indicates, schools might tend to admit applicants who do not have good alternatives. If hospitals are ranked partly by the death rate among their patients, then hospitals have an incentive to shy away from admitting terminally ill patients or those with difficult-to-cure conditions. Yet schools and other organizations respond to their ranking position not only by gaming the measure, but also by improving what they provide. For example, some business schools and colleges ranked low on the amenities and other characteristics of the learning experience provided to students have responded by improving physical facilities and the guidance offered to students, reducing class size, and increasing networking opportunities. The issue in determining whether rankings have, on balance, positive or negative value to consumers is whether the good information provided exceeds the misleading information due in part to such "gaming."
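The gaming mechanism can be made concrete with a small sketch. The figures and the yield definition below are hypothetical, invented purely for illustration: a school can raise a yield-style ratio not by becoming better, but by admitting only applicants who lack good alternatives.

```python
# Hypothetical sketch of "gaming" a yield-based ranking; all figures invented.
# Yield = matriculants / admits. A school can raise its yield not by
# improving, but by rejecting strong applicants likely to enroll elsewhere.

def yield_rate(admits, matriculants):
    return matriculants / admits

# Honest admissions: admit the 100 strongest applicants; many of them
# hold better offers elsewhere, so only 40 enroll.
honest = yield_rate(admits=100, matriculants=40)

# Gamed admissions: admit 60 applicants chosen for having few
# alternatives; 45 of them enroll.
gamed = yield_rate(admits=60, matriculants=45)

print(f"honest yield: {honest:.0%}")  # 40%
print(f"gamed yield:  {gamed:.0%}")   # 75%
```

The gamed school looks more selective and desirable on this measure even though nothing about its quality has changed, which is exactly why consumers discount such rankings.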
The difficulty for consumers is not only that colleges, business schools, and hospitals provide credence products, but also that there is little or no repeat business: students do not attend the same college more than once, and few patients have multiple stays in the same hospital. Still, applicants to colleges (and their parents) and sick persons choosing hospitals tend to recognize that institutions have an incentive to game the measure. That reduces the amount of information they believe they get from rankings based on particular measures, but it does not generally make the information worthless.
Any conclusion that rankings make the information available to consumers worse does justice neither to the difficulty of making sensible decisions about educational programs and medical care in the absence of ranking information, nor to the competitive search for different criteria to use in rankings of schools and medical care. A more accurate conclusion would be that the great interest in rankings, and the rapid expansion in the number of magazines, newspapers, and non-profit groups that provide rankings of schools, hospitals, doctors, and other goods and services, strongly suggests that consumers believe they get useful information from rankings. How much information they get varies with their access to other sources.
Those for-profit and non-profit organizations that provide rankings compete by emphasizing different criteria. For example, the several newspapers and magazines that rank MBA programs assign different weights to evaluations by business recruiters, the earnings of graduates, the increase in graduates' earnings over what they earned before enrolling, the amenities provided, faculty research, attention to globalization issues, and so on. That rankers compete by using different criteria and weightings strongly suggests that significant numbers of applicants to schools consider how rankings are determined.
To be sure, there are ways to improve the basis of rankings that make them less vulnerable to gaming by the institutions being ranked (see the discussion of hospital rankings in Mark McClellan and Douglas Staiger, "Comparing the Quality of Health Care Providers", and other papers by these authors). For example, MBA programs can be compared not by the earnings of graduates, but by the increase in graduates' earnings compared to what they earned before entering an MBA program. This shift in the earnings measure would help control for the quality of students entering a program (the Financial Times' rankings of MBA programs are based partly on this measure of value added). To help determine what benefits students actually get from an MBA program or college, one should interview not only current students but also those who graduated 3, 5, or 10 years ago. After several years of work or further study, the effects of attempts to influence student evaluations through superficial amenities should have given way to a consideration of longer-term benefits.
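The value-added idea can be sketched with a toy computation. The program names and salary figures below are hypothetical, and this is a simplified stand-in for the Financial Times' actual methodology: the point is only that ranking by percentage increase over pre-enrollment earnings can order programs differently than ranking by graduates' raw earnings.

```python
# Illustrative sketch of a "value added" ranking; program names and
# salary figures are invented, not real data.

programs = {
    # name: (avg. pre-MBA salary, avg. post-MBA salary) in dollars
    "Program A": (90_000, 150_000),   # high raw earnings, modest increase
    "Program B": (50_000, 120_000),   # lower raw earnings, large increase
    "Program C": (70_000, 130_000),
}

# Naive ranking: sort by graduates' raw post-MBA earnings.
by_raw_earnings = sorted(programs, key=lambda p: programs[p][1], reverse=True)

def pct_increase(p):
    """Value added proxy: fractional gain over pre-enrollment earnings."""
    pre, post = programs[p]
    return (post - pre) / pre

# Value-added ranking: sort by percentage increase instead.
by_value_added = sorted(programs, key=pct_increase, reverse=True)

print(by_raw_earnings)   # ['Program A', 'Program C', 'Program B']
print(by_value_added)    # ['Program B', 'Program C', 'Program A']
```

Program A tops the raw-earnings list simply because it enrolls students who already earned the most, while Program B tops the value-added list; controlling for incoming student quality is precisely what the shift in measure accomplishes.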
The obvious interest in rankings by consumers suggests that they consider the rankings of schools and programs, or health care, or automobiles to be valuable. These rankings can be, and are being, improved; but that they have survived the test of the marketplace indicates that consumers find them useful enough to be willing to pay for them.