Summary: For better or for worse, rankings of institutions, such as universities, schools and hospitals, play an important role today in conveying information about relative performance. They inform policy decisions and budgets, and are often reported in the media. While overall rankings can vary markedly over relatively short time periods, it is not unusual to find that the ranks of a small number of “highly performing” institutions remain fixed, even when the data on which the rankings are based are extensively revised, and even when a large number of new institutions are added to the competition.
We endeavor to model this phenomenon. In particular, we interpret as a random variable the value of the attribute on which the ranking should ideally be based. More precisely, if p items are to be ranked, then the true, but unobserved, attributes are taken to be values of p independent and identically distributed variates. However, each attribute value is observed only with noise, and via a sample of size roughly equal to n, say. These noisy approximations to the true attributes are the quantities that are actually ranked. We show that, if the distribution of the true attributes is light-tailed (e.g., normal or exponential), then the number of institutions whose ranking is correct, even after recalculation using new data and even after many new institutions are added, is essentially fixed. Formally, p is taken to be of order n^C for any fixed C > 0, and the number of institutions whose ranking is reliable depends very little on p. On the other hand, cases where the number of reliable rankings increases significantly when new institutions are added are those for which the distribution of the true attributes is relatively heavy-tailed, for example, with tails that decay like x^(-α) for some α > 0. These properties and others are explored analytically, under general conditions. A numerical study links the results to outcomes for real-data problems.
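The model described above can be illustrated with a small Monte Carlo sketch (illustrative code only, not the paper's numerical study; the function name and parameter choices are hypothetical). We draw p true attributes i.i.d. from a light-tailed or a heavy-tailed distribution, observe each through noise of order n^(-1/2) as for a sample mean, rank the noisy values, and count how many of the top ranks agree with the true ranking.

```python
import numpy as np

rng = np.random.default_rng(0)

def correct_top_ranks(true_attrs, n, n_rep=200):
    """Average length of the initial run of top ranks that are assigned
    correctly when each attribute is observed with noise of sd n^(-1/2)."""
    p = len(true_attrs)
    true_order = np.argsort(-true_attrs)  # indices sorted by true attribute, descending
    runs = []
    for _ in range(n_rep):
        # noisy observation of each attribute, as from a sample mean of size n
        noisy = true_attrs + rng.normal(0.0, 1.0, p) / np.sqrt(n)
        obs_order = np.argsort(-noisy)
        # count how many leading ranks agree with the true ranking
        k = 0
        while k < p and obs_order[k] == true_order[k]:
            k += 1
        runs.append(k)
    return float(np.mean(runs))

n, p = 100, 1000
light = rng.normal(size=p)        # light-tailed true attributes (normal)
heavy = rng.pareto(0.5, size=p)   # heavy-tailed true attributes (tail index α = 0.5)

# Heavy-tailed attributes typically yield a much longer run of correctly
# ranked top institutions, since the leading order statistics are widely spaced.
print("light-tailed:", correct_top_ranks(light, n))
print("heavy-tailed:", correct_top_ranks(heavy, n))
```

Under the heavy-tailed distribution the gaps between the largest true attributes dwarf the observation noise, so the top of the ranking is stable; under the normal distribution those gaps shrink quickly and only a handful of ranks are reliable, in line with the dichotomy stated in the abstract.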