
Do Rankings Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools

Published online by Cambridge University Press:  01 January 2024


Abstract

In recent years, there has been a tremendous proliferation of quantitative evaluative social measures in the field of law as well as society generally. One of these measures, the U.S. News & World Report rankings of law schools, has become an almost obsessive concern of the law school community, generating a great deal of speculation about the effects of these rankings on legal education. However, there has been no attempt to systematically ascertain what, if any, effects these rankings have on the decisionmaking of students and schools in the admission process. This article documents some of these effects by conceptualizing rankings as a signal of law school quality, investigating (1) whether students and schools use this signal to make decisions about where to apply and whom to admit, and (2) whether the creation of this signal distorts the phenomenon—law school quality—that it purports to measure. Using data for U.S. law schools from 1996 to 2003, we find that schools' rankings have significant effects on both the decisions of prospective students and the decisions schools make in the admissions process. In addition, we present evidence that the rankings can become a self-fulfilling prophecy for some schools, as the effects of rank described above alter the profile of their student bodies, affecting their future rank. Cumulatively, these findings suggest that the rankings help create rather than simply reflect differences among law schools through the magnification of the small, and statistically random, distinctions produced by the measurement apparatus.

Type: Articles of General Interest
Copyright: © 2006 Law and Society Association.

Over the last 15 years, there has been a great increase in the number of rankings of educational institutions published by widely circulating magazines and newspapers both in the United States and internationally.Footnote 1 Part of a general trend toward increased accountability and transparency through the development of social measures,Footnote 2 this proliferation of rankings has generated much concern about their validity, how students use them, and how the behaviors of schools are changed in reaction to them. Nowhere is this concern more palpable than in the field of legal education. Perhaps because there is a single publication that dominates the field of law school rankings—U.S. News & World Report (hereafter, USN)—or because every accredited law school (as opposed to just the top 25 or top 50 schools in most other fields) is ranked on a single dimension, law schools and their governing organizationsFootnote 3 have made very public efforts to caution their constituencies about and discredit the rankings. But while this concern about the rankings has created a substantial amount of speculation and debate about the methodology, validity, and appropriateness of the rankings of educational institutions (e.g., Klein & Hamilton 1998; Berger 2001; Schmalbeck 2001), there have been few studies—and none that have focused on the field of legal educationFootnote 4—that have attempted to determine whether or not these rankings actually affect the decisionmaking of the prospective students who are the primary audience of these publications or the behavior of the schools that the rankings evaluate.

The question of the effect of these rankings, however, is an important one, and the implications of the answer stretch beyond the boundaries of education. Rankings, in the language of economists, act as signals—observable indicators, such as price (Milgrom & Roberts 1986), advertising (Ippolito 1990), or warranties (Boulding & Kirmani 1993), of the underlying quality and properties of that which is being represented (Nelson 1970; Spence 1974). Signals, according to this view, are especially valuable in markets such as legal education, where quality is hard to measure and information is difficult for outsiders to gather themselves. Thus, rankings are especially useful to prospective law students because they provide clear (although, as we will discuss below, not necessarily accurate) indications of the underlying quality of law schools, a function that both proponents of rankings (e.g., Korobkin 1998) and the rankers themselves cite as sufficient justification for the rankings.

But the relationship between the signal and the quality it is signaling is not always pure. For example, recent work by economic sociologists (e.g., Podolny 1993; Benjamin & Podolny 1999) has pointed out that signals often become decoupled from what they are supposed to represent and can affect the behavior of actors independent of the underlying quality that the signals are designed to indicate. This argument resonates with criticisms made by opponents of the rankings who, although they do not formulate their positions in the language of economics, contend that the rankings are inaccurate and methodologically flawed: that is, that the distribution of signals produced by the rankings does not correspond well to the actual distribution of quality among law schools but continues to influence the behavior of students and administrators.

Moreover, a close look at rankings draws attention to a separate potential problem concerning the use of these types of signals as proxies of actual quality. Namely, setting aside questions of accuracy, does the process of communicating these signals itself have independent effects on the objects it is supposed to be measuring? Signaling theory assumes that this communicative process is a neutral one; in the case at hand, this theory would posit that the signals produced by rankers simply reflect law school quality as they, the rankers, define it. But given the extremely precise distinctions made by the rankers, and in light of recent research documenting the powerful symbolic effects of quantification and commensuration (Porter 1995; Espeland 1998; Espeland & Stevens 1998), whether or not the communication of these signals has an independent effect on the institutions it is measuring—by, for example, amplifying small differences between schools—becomes a question of growing relevance as rankings of all sorts proliferate within and beyond the legal world.Footnote 5

In this article, we conduct a statistical analysis of law school rankings to address both the practical and the theoretical issues outlined above. First, using school-by-school applicant data and ranking data from USN, we analyze whether the rankings have affected the admissions process at law schools. Here, we focus on two questions: (1) is the behavior of prospective students affected by a school's USN rank? and (2) is the behavior of law schools altered in response to their rank? Next, we use the same data to examine whether the signals produced by the rankings affect the quality of a school's applicant pool independent of other factors. We do this by testing whether changes in a school's rank one year have reverberating effects on subsequent applicant pools; a “spiraling” scenario such as this would exemplify how these signals at least partially determine (and therefore distort) the law school quality that they purport to simply reflect or measure.

The Debate About Rankings

Rankings as Valuable Signals

While educational quality is a crucial factor in the decisionmaking process of prospective students, it is also a very difficult and costly factor for these students to gather direct information about on their own. Faced with this situation of high uncertainty, many prospective students will search for signals of educational quality. In the past, these signals most commonly included informal networks, pre-law advisors, and a few guidebooks that provided accounts—primarily qualitative in nature—of law schools.

Since 1990, however, the annual rankings of law schools published by USN have provided a new type of signal for legal education quality, a signal that is not only easily accessible (due to the wide circulation and relative inexpensiveness of the magazine) but is also presented in a format (precise relative valuations) that is compelling to outside audiences. Providing this new type of comparative information is, according to USN, the motivation behind the rankings. As the magazine asserted in an early rankings issue, “The sad truth is that it is easier to learn about the relative merits of compact-disk players than it is to compare and contrast America's professional schools. And some educators prefer to keep it that way” (19 March 1990, p. 50).

This argument—that the rankings provide a valuable and previously unavailable signal of educational quality to prospective students and other constituents—is also put forward by analysts who believe that rankings are beneficial to legal education. For example, Berger (2001) reasons that rankings provide useful, convenient, and plausibly accurate signals to law school applicants, signals that also force law schools to be accountable for the legal education that they provide in a way that they were not prior to the rankings. Korobkin (1998) advocates for the signaling function of rankings even more explicitly in his defense of the value of rankings. He argues that the accuracy of the rankings is relatively unimportant; instead, the true worth of the rankings is to provide market signals that serve to match the best students with the best employers, a process that Korobkin believes is of great benefit to legal education.

Although many law school administrators and faculty agree that there have been some positive consequences of the rankings—most often citing the information provided to students and the institutional transparency or accountability that these rankings create—those who believe that these benefits outweigh the negative effects are in the minority. As one of the few deans who supported the rankings explained,

Before the rankings, there wasn't a number that was running around; in the past a dean could pontificate about how great his program was but now it's harder to pull the wool over people's eyes. With these numbers, you can't just talk. The basic things that law schools do are still all there: we want to get the best students, the best faculty, and we want our students to be successful. Our job and our career goals haven't changed, but now we have metrics. I think it's just like Consumer Reports for cars. You can quarrel with individual things, you can quibble with the formula, but we have a wonderful product and it's good for people to know. Most deans think all of this is horrible—I'm a real outlier on this.Footnote 6

Rankings as Distorting Signals

As the end of the previous statement implies, most in the law school community are not as sanguine about the effects of the rankings as USN and its proponents. While few question that the USN rankings provide a new signal of law school quality, opponents of the rankings believe that this signal also creates negative effects that overshadow any informational benefits. For example, the rankings, according to many law school administrators, change both how resources are distributed and how work is done within law schools, because administrators feel pressure to make decisions based on what is best for the school's rank rather than what is best for the quality of the education the school provides. Specifically, interviewed deans have noted a dramatic increase in money spent on marketing and advertising, a much greater emphasis on LSAT scores in the admissions process, a transition from need-based to merit-based scholarships, and the transformation of the focus of career services from providing career counseling to ensuring that employment numbers are as high as possible (Espeland & Sauder 2004). This emphasis on "the numbers" rather than educational quality is perhaps most striking in the strategies that schools have adopted to game the rankings. Writing as the acting president of the AALS, Whitman (2002) lists the following strategies: encouraging underqualified applicants to apply in order to raise selectivity ratios, "skimming" top students from other schools to keep entering first-year cohorts small (again raising the selectivity ratio), admitting students with higher LSAT scores over students who are otherwise better qualified and a better fit, and temporarily hiring unemployed graduates to boost employment statistics.

These effects of rankings would be far less problematic to opponents of the rankings if an improvement in rank always corresponded to an improvement in educational quality—that is, if the signal produced by the rankings was tightly coupled to the phenomenon it was measuring. But these critics argue that there is a disconnect between rankings and quality, a disconnect that stems from at least two sources. First, opponents of the rankings argue that they are methodologically inaccurate signals of the phenomenon they purport to measure; that is, the USN rankings do not effectively measure law school quality. Second, independent of measurement concerns, critics contend that the process of measurement itself provides a distorted representation of law school quality by amplifying insignificant differences between schools and creating differences where none existed before. In other words, rather than simply reflecting the distribution of quality among law schools, the signal produced by the rankings distorts this distribution and in doing so reshapes the reputational terrain of law schools.

Methodological Inaccuracy

USN creates precise relative comparisons among law schools by employing a formula that combines four primary measurements of law school quality—peer and professional assessment (40 percent of total), selectivity (25 percent), placement success (20 percent), and faculty resources (15 percent)—to generate an overall reputational rank for every law school accredited by the American Bar Association (ABA) (see the Appendix for a complete description of these factors). Although the way in which USN has made distinctions between schools has fluctuated somewhat over time, from 1993 until 2003 the basic structure of these rankings consisted of an ordinal ranking of the top 50 law schools and a division of the remaining (approximately 130) schools into tiers (second tier, third tier, fourth tier), in which they were listed alphabetically. Using this formula, USN creates a point total for each school, which is then standardized to give the top school a final score of 100 and the remaining schools a score based on the percentage of their total points compared to the total points of the top school. So, for example, if Yale receives 22 total points and the University of Alabama receives 10.4, then Yale's score is standardized to 100 and the University of Alabama's score is standardized to 47.3 (100 × 10.4/22). Schools are then sorted by their standardized scores and given a numerical rank if they rank in the top 50, or they are equally divided into one of three tiers if they do not.
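To make this procedure concrete, a minimal sketch of the standardization and tier assignment described above is shown below. The raw point totals and school labels are invented for illustration (only Yale's 22 and Alabama's 10.4 come from the worked example in the text), and the rounding and tie-handling rules are assumptions rather than USN's actual procedure.

```python
import pandas as pd

# Hypothetical raw point totals from the weighted formula (illustrative values;
# the 22 and 10.4 come from the worked example in the text).
raw = pd.Series({"Yale": 22.0, "School B": 18.5, "School C": 14.2,
                 "Alabama": 10.4, "School E": 9.8, "School F": 8.6,
                 "School G": 7.9, "School H": 6.5})

# Standardize so the top school scores 100 and every other school is a
# percentage of that top score.
scores = (100 * raw / raw.max()).round(1)   # Alabama: 100 * 10.4 / 22 = 47.3

# Sort by standardized score; the top TOP_N schools get numerical ranks,
# the remainder are split into equal thirds (tiers 2-4).
ranked = scores.sort_values(ascending=False)
ranks = pd.Series(range(1, len(ranked) + 1), index=ranked.index)

TOP_N = 2                                       # stands in for USN's top 50
rest = ranked.iloc[TOP_N:]
tiers = pd.qcut(-rest, q=3, labels=[2, 3, 4])   # higher scores land in lower-numbered tiers

print(scores["Alabama"], ranks["Alabama"], tiers["Alabama"])   # 47.3, rank 4, tier 2 here
```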

One criticism of this methodology—that is, the accuracy of the signal produced by USN—is that it fails to take into account many of the attributes that constitute a quality law school. A prominent example of this opinion is a letter of protest written by the Law School Admission Council (LSAC) and signed by the majority of law school deans. Since 1997, this letter has been sent each year to all students who register to take the Law School Admission Test (LSAT). The LSAC letter questions the quality of information provided by USN, alleging that the rankings cannot take each student's "special needs and circumstances into account," and that they fail to measure many factors that students claim are most important in their choice of law school, including measures for the quality and accessibility of teachers, faculty scholarship, and racial and gender diversity within the faculty and student body.Footnote 7

Leaving aside what USN does not measure, other analysts have found that the measures that are used by USN to estimate law school quality are poor proxies for the actual quality of these schools. Klein and Hamilton (1998), for example, conclude that 90 percent of the overall differences in ranks among schools can be explained solely by the median LSAT score of their entering class; this finding suggests that despite their stated weights, the numerous other factors that make up the rankings have little effect on a school's overall rank. In a similar vein, Lempert (2002) characterizes the USN rankings as "pseudoscience" and, examining each component carefully, finds every factor used by USN to be deeply flawed.

Finally, critics argue that the signal of law school quality provided by the rankings is largely a product of the weights given to each factor included in the formula used by USN. There is no inherent justification, for example, for making reputation 40 percent of law school quality, faculty resources 15 percent, or volumes in the library 0.75 percent; these weights were invented by the rankers. Although Klein and Hamilton (1998) minimize the importance of these weights, their influence is convincingly demonstrated in a Web site developed by Jeffrey Stake that allows students to determine the weight of each factor according to their own preferences.Footnote 8 As Stake's "Ranking Game" shows, even small changes in the relative weights of these variables can make substantial differences in the rank ordering of schools. This suggests that methodological decisions play a crucial role in determining the signal created by USN.
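A toy version of Stake's exercise illustrates the point: holding component scores fixed, a modest reallocation of weight away from reputation reorders the schools. The component values, school names, and alternative weights below are invented for illustration and are not drawn from the USN data or from Stake's site.

```python
import pandas as pd

# Hypothetical, already-normalized component scores for three schools.
components = pd.DataFrame(
    {"reputation":  [0.95, 0.88, 0.80],
     "selectivity": [0.74, 0.75, 0.78],
     "placement":   [0.72, 0.76, 0.82],
     "resources":   [0.68, 0.72, 0.78]},
    index=["School A", "School B", "School C"])

def rank_order(weights):
    """Order schools by their weighted composite score, best first."""
    composite = components.mul(pd.Series(weights), axis=1).sum(axis=1)
    return list(composite.sort_values(ascending=False).index)

usn_like  = {"reputation": 0.40, "selectivity": 0.25, "placement": 0.20, "resources": 0.15}
perturbed = {"reputation": 0.30, "selectivity": 0.25, "placement": 0.25, "resources": 0.20}

print(rank_order(usn_like))    # ['School A', 'School B', 'School C']
print(rank_order(perturbed))   # ['School C', 'School A', 'School B']
```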

Signal Distortion

Putting aside the accuracy with which USN measures law school quality, another line of criticism of the rankings contends that the process of creating this signal is itself a source of distortion. Recent sociological and historical work on commensuration (Espeland 1998; Espeland & Stevens 1998) and quantitative authority (Porter 1995), for example, convincingly argues that quantification is a unique type of signal, one that often alters the phenomena that it is representing. Opponents of rankings make similar claims about the signals produced by USN, arguing that by quantifying law school quality very precisely and creating hard lines of distinction (the cut points between tiers, for instance) the rankings both amplify small differences between schools and create differences among schools that did not exist previously. That is, independent of what information is signaled by USN, how this information is signaled is a second important source of distortion.

Documenting this claim involves looking closely at how USN signals law school quality to its audiences, examining how the actual distribution of law school quality compares to the representation of this distribution put forth by USN. As we show below, even when USN's own definition of law school quality is used as a proxy for actual law school quality, there are significant differences between the actual distribution of law school quality and how this distribution is signaled by USN.

Using the information provided by USN in the 2000 through 2003 rankings, we were able to produce accurate estimates (R2 > 0.98) of the distribution of law school quality—again, as defined by USN—for all ranked law schools during this period.Footnote 9 Figure 1 displays the density plot of the standardized scores for the years 2000 through 2003, demonstrating that the underlying distribution of quality closely approximates a normal distribution.

Figure 1. Distribution of USN Quality Scores 2000–2003.

However, if we look at how this approximately normal distribution is represented by the USN rankings, we see a different picture. Figure 2 shows that USN creates a very different distribution of schools than the distribution produced by its own algorithm. Whereas Figure 1 shows that the standardized scores of law schools are very similar to one another near the center of the distribution, Figure 2 shows that this is precisely where USN makes the decisive breaks between schools in the top tier, the second tier, and the third tier. In other words, very small differences at the center of the distribution can lead schools to change their rank or tier, since there is very little that separates them from the other schools. In addition, USN's representation portrays schools within the second, third, and fourth tiers as being of equal quality and makes it appear as though there are large gaps between tiers even though the underlying distribution is continuous. Furthermore, the numerically ranked schools stretch out the high end of the distribution, creating large distinctions between schools. While this might be appropriate at the extreme of the distribution, where schools are separated by relatively large values on the underlying quality dimension, it is far less accurate toward the center of the distribution, where schools become very similar to one another—but this is exactly where USN draws its finest distinctions.

Figure 2. Comparison of Distributions.

One of the consequences of this shift in the distribution is that there is considerable fluctuation in the ranks of schools due to very small and statistically insignificant changes in their scores or in the scores of the schools near them in the rankings. This applies to all schools ranked in the top 50 because the distinctions USN makes between these schools are so fine. As shown in Table 1, the top 50 ranked schools are more likely to experience a change in the rankings from the previous year than they are to remain the same.

Table 1. Number of Schools That Have Changed Tier or Rank, 1994–2003

Overall, in each year roughly 68 percent of the top 50 schools experience a change in rank from the previous year. This is largely because USN makes very fine distinctions among these schools even though their quality measures form a continuous, approximately normal distribution: schools near the center of the distribution have very similar scores, so very small changes in those scores can have disproportionate effects on their rank.

Similarly, small differences can become even more magnified at the margins of tiers. Again, although law school quality as measured by USN takes the shape of a normal distribution, the presentation of this distribution by USN implies a meaningful and qualitative difference between second-tier and third-tier law schools. While these tier changes are less common—each year a little more than 80 percent of the schools remain in the same tier as they were in the previous year—during our analysis period only 29 of the 130 schools that were not ranked in the top 50 throughout this period did not experience a change in tier (18 of these 29 schools remained in the fourth tier during this time).
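The role of fine distinctions in producing this churn can be illustrated with a simple simulation: hold each school's underlying quality fixed, add a small amount of measurement noise in each of two "years," and count how many numerically ranked schools change rank and how many schools change tier. The number of schools, the spread of quality, and the noise level below are illustrative assumptions, not estimates from the USN data; the point is only that near-center scores are so tightly packed that tiny perturbations reorder them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, noise_sd = 180, 1.0                            # ~180 accredited schools; small noise

quality = rng.normal(loc=50, scale=10, size=n_schools)    # latent "true" quality

def rank_and_tier(scores):
    """Numerical ranks for the top 50; tiers 2-4 split the remainder into thirds."""
    order = np.argsort(-scores)                            # indices from best to worst
    rank = np.empty(n_schools, dtype=int)
    rank[order] = np.arange(1, n_schools + 1)
    tier = np.where(rank <= 50, 1, 2 + (rank - 51) * 3 // (n_schools - 50))
    return rank, tier

# Two consecutive "years": identical underlying quality, different measurement noise.
rank1, tier1 = rank_and_tier(quality + rng.normal(0, noise_sd, n_schools))
rank2, tier2 = rank_and_tier(quality + rng.normal(0, noise_sd, n_schools))

top50 = rank1 <= 50
print("share of top-50 schools changing rank:", np.mean(rank1[top50] != rank2[top50]))
print("share of all schools changing tier:   ", np.mean(tier1 != tier2))
```

With settings like these, a large majority of the top 50 change rank between the two draws while far fewer schools cross a tier boundary, which is the qualitative pattern reported in Table 1.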

Signal distortion of this type is very important to law schools because they believe that small differences in rank are meaningful to their outside constituencies: it matters whether a school is 9 or 12, 23 or 28, second-tier or third-tier because, they believe, these statistically insignificant differences have a significant influence on how outside constituencies perceive and behave toward law schools (Espeland & Sauder 2004; Sauder & Espeland 2005). Therefore, law schools are pressured to continually optimize their ranking by, for example, basing admissions decisions on LSAT scores, spending more money on merit-based scholarships in order to "buy" students whose LSAT scores will raise the school's median, or producing expensive glossy brochures to be sent to those who fill out the USN survey.

The belief that small differences matter to outside constituencies explains why law schools pay such close attention to the rankings even if they believe they are inaccurate. Interviews with law school administrators indicate that schools employ the aforementioned strategies to improve their rank because they perceive that even changes in rank caused by normal statistical fluctuation can have considerable effects on the actual quality of the school, since external audiences view these changes as real changes in quality. As one dean explained,

You have people who focus on whether or not the rankings are in fact valid, whether they really show anything, whether the methodology is good, and so on. And those debates can seem endless at times as everybody kind of decries the rankings. On the flip side you have the pragmatic reality of the rankings…. Whatever the validity of the methodology, it's difficult to pretend that the rankings don't matter. I mean prospective students use them; employers use them; university administrators use them. So whether we in legal academics think they're valid or not, whether they're reflective or not, the truth is that I don't think you can just ignore them.

The “pragmatic reality” of the rankings—the belief that outside constituencies make decisions based on where schools rank—is the primary driving force behind the influence of the rankings on legal education. In other words, law school rankings are consequential, regardless of their validity, because prospective students decide which schools to attend, employers decide whom to hire, and alumni decide how much to give to their alma maters based on where a school stands in the most current USN ranking.

But the pragmatic reality of the rankings is not limited to the one-time fluctuations a school might experience. Many administrators believe that the distorting effects caused by the precise evaluations made by the rankings can have long-term consequences as the next cohort of prospective students responds to the new rankings of the school. For example, the fear exists that a fall in the rankings will have a spiraling negative effect in which this drop in rank will lead to a negative response from these outside audiences (for example, a lower-quality applicant pool), which will in turn lead to an even worse rank.Footnote 10 As one administrator told us,

Ever since [a fall from the second to the third tier] we've been scrambling to try to lift our rankings because it's a vicious cycle. Students say, “Well, why should I come to you? You're in the third tier.” Employers say, “Why should I hire your people? You're in the third tier.” So we get less-good students. So it just circulates.

The existence of such a spiral, negative or positive, would indicate an important way in which small changes in rank can become magnified to have extensive and long-lasting consequences for law schools. In addition, because these consequences of rankings could often be the result of the vagaries of the methodology employed by USN rather than any real change in educational quality, the existence of a spiral would also exemplify how the signal produced by the rankings can create new forms of inequality rather than simply reflect preexisting inequalities among schools. In this situation, the dangers of a loose coupling between a signal and what it is representing (Podolny 1993) become clear: the signal not only might misrepresent the phenomenon it is measuring, but it could also reify this misrepresentation as future actors base their actions or decisions on this signal.

Evidence of the Effects of Rankings

As proponents of the rankings have noted (see Berger 2001), it is striking that despite the debate and concern surrounding the rankings, there has been little attempt to systematically test these wide-ranging claims about their effects. Schmalbeck's (2001) study of measures of reputational standing over time is the only empirically grounded study of the effects of law school rankings to date, and this study provides little support for those who claim that the rankings have significant consequences. Schmalbeck finds that the reputations of law schools are relatively durable and that rankings have done little to change perceptions of school quality among those who fill out the USN survey: a drop in rank during one year, for instance, does not have a negative effect on a school's reputation score in the following year's survey.

Schmalbeck's study, however, does not address the effects of rankings on external constituencies, the audiences about which law school administrators are most concerned. And while no empirical research has examined this issue in the field of legal education, analyses of the effects of rankings on other types of educational institutions lend justification to administrators' concerns. In their investigation of how prospective undergraduates use rankings, for example, McDonough et al. (1997) find that rankings intensify the "reputation game" played by colleges by focusing the attention of prospective students on the purported prestige of schools rather than on the fit between the school and the particular student's interests and needs. Likewise, Monks and Ehrenberg's (1999) study of elite colleges shows that movement in rank affects the number of applicants these colleges receive, their selectivity in admissions, their yield rate, and how they deploy scholarship money. Finally, Elsbach and Kramer (1996) find that even small changes in business school rankings evoke identity crises within these organizations.

These studies highlight the fact that the crucial question in the debate about law school rankings has yet to be addressed empirically: do the rankings affect the behavior of external audiences? Or, in the language of signaling theory, do the signals produced by USN influence the behavior of external audiences?

To answer this question, the present study examines the effects that a school's rank has on what many consider the most important of these constituencies: prospective students. If students perceive the rankings to be a useful signal of law school quality, then they should respond to a school's rank independently of that school's other characteristics. Students will be more likely to both apply to higher-ranked schools and matriculate at these schools because these schools possess signals of higher quality. Similarly, after controlling for other school characteristics, students with better LSAT scores will be more likely to apply to schools toward the top of the rankings, while students with lower LSAT scores will be more likely to apply to schools with a lower ranking.

Hypothesis 1: Controlling for school characteristics, the USN ranks will have an independent effect on students' decisions in the admissions process.

In light of the pragmatic reality of the rankings, we next examine whether schools alter their admissions activity in anticipation of the effects that they believe correspond to rankings. If students use the rankings as a signal of law school quality and base their decisions about where to apply and which offers to accept on them, then we expect that schools will also be influenced by the rankings in their own admissions decisions. That is, schools will accept a greater number of students as they decrease in the rankings because they expect fewer of their accepted applicants to actually matriculate. In addition, they might modify their tuition in response to the rankings—lowering tuition to make their school more appealing if they have a low rank or raising tuition to increase revenue if they are ranked high.

Hypothesis 2: Controlling for school characteristics, the USN ranks will have an independent effect on schools' decisions in the admissions process.

If the rankings do act as a signal and affect the decisions of students and schools in the admissions process, then there is also the possibility that changes in rank will actually affect the quality of schools. The factors that USN uses to create its rankings are precisely those factors that are affected by rank. Therefore, we would expect that the effects of the USN rankings on decisions will in turn affect future USN rankings.

Hypothesis 3: Student and school decisions in response to the USN ranks will have a significant effect on future USN ranks, net of previous ranks.

Data and Methods

To test the effects of the USN rankings on prospective students and law schools, we have collected data from two primary sources. Data on rankings were collected from the 1993 to 2003 editions of the U.S. News and World Report Guide to Graduate Schools. During this period, USN ranked the top 50 schools numerically and then grouped the remaining schools into three tiers within which their placement was determined alphabetically. In order to measure school characteristics that were not used in the construction of the USN scores, we collected data from the 1996 to 2003 editions of The Official Guide to ABA-Approved U.S. Law Schools, now jointly published by the LSAC and the ABA. The Official Guide to ABA-Approved U.S. Law Schools is designed to provide prospective students with both a qualitative description of all accredited law schools in the United States and a wide variety of quantitative characteristics (e.g., school size, minority composition, volumes in the library, and a chart of the previous year's applicants' chances of admission based on their LSAT scores and GPAs) of these same schools.Footnote 11 Prior to the 1996 edition, these data were published in different forms, preventing us from extending the analysis further into the past.

Dependent Variables

We tested the effects that rankings have on a number of outcomes that are consequential for law schools. We began by examining the effects of rank on three variables that reflect student decisions: how many students apply, the percentage of the applicant pool with top LSAT scores, and the percentage of admitted applicants who matriculate. We operationalized these variables as follows. First, we took the number of applications directly from the school-by-school data reported in The Official Guide to ABA-Approved U.S. Law Schools. Second, we measured the percentage of top students by calculating the portion of the applicant pool with LSAT scores that fell within particular ranges: 160 and above, 150 to 159, and 120 to 149. Because the edition of the rankings used by students applying to law school in any given year is published two years before the corresponding school-by-school data appear in The Official Guide to ABA-Approved U.S. Law Schools, we applied a two-year lag to both of these variables.

Finally, we calculated percent matriculated by dividing the number of students who chose to attend each school by the total number of students that school accepted. Because rankings have a more immediate impact on matriculation—the USN rankings are published in March, and most students make decisions about which school to attend in April—we implemented a one-year lag for this variable.

To determine whether schools alter their behavior in response to changes in their rank, we next analyzed two variables that speak to this issue. We measured the percentage of applicants accepted by dividing the number of applicants accepted by the school by the total applicant pool, and we measured tuition by the annual tuition costs of each school.
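A sketch of how these variables and lags might be constructed from a long-format panel (one row per school and year) is shown below. The file name and column names are hypothetical; this is an illustration of the lag structure described above, not the authors' actual code.

```python
import pandas as pd

# Hypothetical long-format panel: one row per school and year, with columns
# school, year, usn_rank (or tier), applications, accepted, matriculated.
panel = pd.read_csv("law_school_panel.csv").sort_values(["school", "year"])
g = panel.groupby("school")

# Student-decision outcomes (applications, applicant LSAT mix) are paired with
# the rankings published two years earlier, so the rank variable is lagged twice.
panel["rank_lag2"] = g["usn_rank"].shift(2)

# Matriculation responds within the same admissions cycle (rankings appear in
# March, decisions are made in April), so it gets a one-year lag.
panel["rank_lag1"] = g["usn_rank"].shift(1)

# Outcome variables as defined in the text.
panel["pct_matriculated"] = 100 * panel["matriculated"] / panel["accepted"]
panel["pct_accepted"] = 100 * panel["accepted"] / panel["applications"]
```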

USN Ranks

We calculated the effects of rankings in two ways. USN groups law schools into four broad categories, or tiers, based on the scores determined by a formula that combines measures of reputation, selectivity, employment success, and institutional resources. Schools that score in the top 50 of all law schoolsFootnote 12 are placed in the first tier and given a numerical rank based on their position within the tier. For example, Yale University Law School had the highest raw score in each of the years we consider here. This placed Yale in the top tier, or tier 1, and within that tier Yale was ranked number 1. The second, third, and fourth tiers are determined by their raw score where, once the top 50 schools are removed, the top 33.3 percent of the remaining schools are placed in the second tier, the middle 33.3 percent in the third tier, and the bottom 33.3 percent in the fourth tier.

We first measured the effects of tier placement. Because schools in tiers 2, 3, and 4 are not assigned numerical ranks, this is the only level of differentiation for most schools. To test for tier effects, we constructed dichotomous (0/1) dummy variables for tier 1, tier 2, and tier 3, using tier 4 (low) as the reference category; the tier coefficients and their significance levels are therefore interpreted as net effects relative to tier 4.

Second, we assessed the effects of the specific numerical ranks assigned to schools in the first tier, using the USN ranks directly. Since only those schools in the first tier have numerical ranks, we did not include schools in the other tiers in our analyses of numerical rank. We reverse-coded the numerical rank of schools in order to simplify interpretation: we gave Yale Law School, which has always been ranked the top school in the country, a rank of 50, while we gave the bottom school in the first tier a rank of 1. Thus higher-ranked schools had a higher number than lower-ranked schools.
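In code, the tier dummies and the reverse-coded rank might be built as follows, continuing the hypothetical panel sketch above (column names are assumptions, not the authors' actual variable names).

```python
import pandas as pd

panel = pd.read_csv("law_school_panel.csv")   # hypothetical file from the earlier sketch

# Dummy variables for tiers 1-3, with tier 4 as the omitted reference category.
tier_dummies = pd.get_dummies(panel["tier"], prefix="tier").drop(columns="tier_4")
panel = pd.concat([panel, tier_dummies], axis=1)

# Reverse-code numerical rank so that higher values mean better-ranked schools:
# Yale (ranked 1st) becomes 50, and the 50th-ranked school becomes 1.
panel["rank_reversed"] = 51 - panel["usn_rank"]   # missing for schools in tiers 2-4
```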

Controls

We also included a number of control variables. Most important, we employed a fixed-effects panel regression, which allowed us to include a separate dummy variable for each school. This allowed us to control for all of the unique time-invariant school characteristics (such as reputation, whether the school is public or private, and whether the school is independent or attached to a university) that our other variables did not account for. For example, if a school has a specialty program (for instance, tax law) that attracts a disproportionate number of applications from top students, our fixed-effects model controlled for this school-specific factor as long as it did not change during our analysis period. In other words, this model specification permitted us to control for a broad range of factors that we could not measure directly.

In addition to using a separate dummy variable for each school, we included several other controls. We incorporated dummy variables for year from 1996 to 2003, with 1996 as the reference category. We also controlled for the size of each school, measured by the number of law students at the school, and employed a log transformation to normalize this distribution. Including each of these variables did not affect the coefficients on our measures of USN rank. Finally, we tested for the effects of the age of the school and whether the school is public or private, but—because these two variables were highly collinear with size—we did not include them in the final analyses.

Model

In order to test hypotheses 1 and 2, we ran a pooled cross-section fixed-effects Prais-Winsten regression. This model has four distinct strengths. First, as we explain above, the fixed effects allowed us to control for time-invariant school characteristics. Second, pooling multiple years allowed us to examine schools over time. Third, because we had multiple observations of each school, our cases were not independent of one another; the fixed-effects model, however, allowed us to control for unmeasured school characteristics as well as the nonindependence of observations. Finally, the Prais-Winsten autoregressive function allowed us to control for serial correlation within schools over time (Greene 2000).
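A rough sketch of this kind of specification in Python is shown below. The school and year fixed effects enter as dummy variables through the formula interface; the Prais-Winsten correction for AR(1) errors is not reproduced here and would be layered on top with a dedicated routine (for example, statsmodels' GLSAR or Stata's prais). The column names, including log_size for log-transformed enrollment, are hypothetical and carried over from the earlier sketches.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = (pd.read_csv("law_school_panel.csv")
        .dropna(subset=["applications", "rank_lag2", "log_size"]))

# Fixed-effects specification: C(school) absorbs time-invariant school
# characteristics, C(year) absorbs common year shocks.
fe_model = smf.ols(
    "applications ~ rank_lag2 + log_size + C(year) + C(school)",
    data=df,
).fit()

print(fe_model.params["rank_lag2"])   # estimated effect of a one-rank change on applications
```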

In order to test hypothesis 3, we employed two different models. First, to determine what predicts which tier a school falls in, we used an ordered probit model, which is designed for ordinal dependent variables such as tier. Although this model did not allow us to use a fixed-effects term, we did control for previous tier, which captured much of the same variation. As shown in Figure 1, the underlying distribution of quality is approximately normal; this supports the use of an ordered probit, which assumes that the ordinal categories partition an underlying normal distribution. Second, to predict the rank of a school, we used a random-effects pooled cross-sectional regression with a Prais-Winsten correction for serial correlation. This is largely the same model as we used to test hypotheses 1 and 2, without the fixed-effects term. We removed the fixed-effects term because it is closely correlated with previous rank, which is of theoretical interest for this hypothesis.
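For the tier models, an ordered probit can be fit with statsmodels' OrderedModel, sketched below under the same hypothetical column names; treating previous tier as a single numeric regressor here is a simplification of the specification described in the text, not the authors' exact model.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("law_school_panel.csv").dropna(
    subset=["tier", "tier_lag1", "pct_lsat_160plus_lag1", "log_size"])

# Current tier as an ordered outcome (1 = top tier, 4 = bottom tier).
df["tier_cat"] = pd.Categorical(df["tier"], categories=[1, 2, 3, 4], ordered=True)

# Ordered probit of current tier on previous tier, prior applicant quality, and size.
exog = df[["tier_lag1", "pct_lsat_160plus_lag1", "log_size"]]
ordered_probit = OrderedModel(df["tier_cat"], exog, distr="probit").fit(method="bfgs")
print(ordered_probit.summary())
```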

Results

Number of Applications

The first two columns of Table 2 show the results of our tests of the effects that rank has on the total number of applications that schools receive. The first column presents our results for the effects of tier differences. Although our results show that there are not significant differences in the number of applications received by schools in the second, third, and fourth tiers, we find that schools in the first tier receive a much higher number of applications—on average these schools receive 177 more applications than schools in the fourth tier and 125 more than schools in the second tier. Our dummy variables for year, which are all significant at p<0.01, yield no surprises: there was a dramatic drop in the total number of applications from 1996 to 2000, followed by an overall increase in applications from 2000 to 2003 (though still below 1996 levels). We also find that larger schools receive a greater number of applications.

Table 2. Pooled Cross-Section Fixed-Effects Prais-Winsten Regression of the Effects of USN Ranks on Student Decisions

* p < 0.05, **p < 0.01.

Note: All models include dummy variables for each school. These were omitted for the sake of brevity. Standard errors are in parentheses.

a The fourth tier was used as the reference category.

b 1996 was the reference category for all models except for the effects of numerical ranks for total applications, where 1997 was the reference category.

The second column of Table 2 presents our results for the top 50 schools with numerical ranks. Here, we find that schools with higher ranks receive a significantly higher number of applications. Controlling for year, size, and our fixed-effects term for each school, we find that each one-rank increase leads to an increase of nearly 19 applications. Although the year effects show a pattern similar to that for the full sample, we do not find a significant effect for school size.

Applications by LSAT Score

Columns three through eight in Table 2 present our results for the percentage of a school's applicant pool with LSAT scores in the following ranges: 160 and above, 150 to 159, and 120 to 149. Our data show a slight decrease over time in the percentage of applicants with LSAT scores of 160 and above or between 150 and 159, and a concurrent rise in the percentage of applicants with LSATs between 120 and 149; this suggests that more students with lower LSAT scores applied to law school between 1996 and 2003. The effects of USN ranks on these different types of applicants are relatively straightforward. Schools in the top two tiers have a higher percentage of their applicant pool with LSAT scores above 160, compared to schools in the third and fourth tiers. Similarly, schools in the top two tiers have a significantly lower percentage of their applicant pool with LSATs between 120 and 149. The results for the percentage of applicants with LSAT scores between 150 and 159 are mixed: schools in the second tier have a significantly higher percentage of students with these LSAT scores.

Not surprisingly, our results indicate that students with very high LSAT scores tend to apply to top schools. Students with LSAT scores between 150 and 159 are more likely to apply to second-tier schools than to schools in other tiers, which indicates that they realize that these are the schools where they have the best chance of being accepted. Similarly, students with LSAT scores below 150 are much less likely to apply to schools in the top two tiers, where they have little chance of being accepted.

Our results for numerical rank are rather mixed. We find that for schools in the first tier, there is a significant (p<0.01) effect of rank on applicants with LSAT scores of 160 and above; a one-rank increase leads to a small increase in the share of these top applicants, on the order of 0.13 percentage points. These results suggest that top applicants do pay attention to minor differences in rank. However, we do not find that rank has significant effects on the percentage of students with LSATs between 150 and 159 or between 120 and 149. This suggests that students with LSATs below 160 apply to top-ranked schools at a relatively constant rate and do not strongly differentiate between schools in the top tier, where they have significantly lower chances of being accepted.

Matriculation

The final two columns in Table 2 present our results for the percentage of accepted students who matriculate. We find that a higher percentage of students matriculate at larger schools; we hypothesize that this is because larger schools are more likely to be public schools with lower tuition costs. With respect to the USN rank, we find that schools in the top three tiers have matriculation rates roughly 2 percentage points higher than fourth-tier schools, controlling for other factors. However, the effect is not significant for the top tier, even though it has the largest coefficient, indicating considerable variation in matriculation rates within the top tier. As the final column of Table 2 shows, this is largely because the effect of numerical rank is strong and statistically significant (p<0.01): each one-rank increase leads to a 0.18 percentage point increase in the share of accepted students who matriculate.

Table 3 presents our results for the responses of schools to the USN ranks. The first two columns show the results of our regressions of the percentage of applicants accepted by schools on USN rank. We find a general increase in the percentage of applicants accepted over time, closely following the decrease in applications shown in Table 2. In addition, we find that larger schools accept a higher percentage of their applicants than do smaller schools. The effects of USN ranks are not significant when schools are ranked by tier, which indicates that the different tiers have relatively stable acceptance rates. However, column 2 in Table 3 indicates that within the first tier, schools with higher ranks accept a smaller percentage of their applicants: an increase of one rank leads to a 0.2 percentage point reduction in the percentage of students accepted (p<0.01).

Table 3. Pooled Cross-Section Fixed-Effects Prais-Winsten Regressions of the Effects of USN Rank on School Decisions

* p < 0.05, **p < 0.01.

Note: All models include dummy variables for each school. These were omitted for the sake of brevity. Standard errors are in parentheses.

a The fourth tier was used as the reference category.

b 1996 was the reference category for all models except for the effects of numerical ranks for total applications, where 1997 was the reference category.

Our results for tuition are presented in columns 3 to 6 in Table 3. Overall, we find a general increase in both in-state and out-of-state tuition over this period. However, we find that size itself has no effect, indicating that size is not a driver of costs. The effects of the USN ranks on tuition are weak, with the only significant effect being that third-tier schools have significantly lower tuition (p<0.05 for both in-state and out-of-state) than do fourth-tier schools. While the effect is not large (schools in the third tier charge $200 less than fourth-tier schools), it is surprising.

Feedback Effects

Table 4 presents the results of our tests of hypothesis 3. The first two columns present our ordered probit regressions of schools' current tier on school characteristics and previous tier. Column 1 presents the results with controls only for size and previous tier. Not surprisingly, we find that previous tier is a very strong predictor of current tier; this confirms what we found in Table 1, that schools do not change tier frequently. Our coefficients for previous tier are all significant at p<0.01, and—because each coefficient sits squarely in the middle of each tier's range—they are robust predictors of current tier. Column 2 of Table 4 presents our results when we include the effects of student decisions from the previous year. Here we find that previous tier is still significant at p<0.01. The effects of decisions are rather minimal, with the only significant effect being the percentage of applicants with LSATs above 160.

Table 4. Ordered Probit and Random-Effects Regressions of the Effects of Student and School Decisions on USN Rank

* p < 0.05, **p < 0.01.

Note: For the regressions on tier, we used an ordered probit model, while for the regressions on rank, we used a Prais-Winsten random-effects regression.

a Tier 4 is the reference category.

Columns 3 and 4 of Table 4 present the results of our regression of current rank on previous rank, school decisions, and student decisions for the top 50 ranked schools. Column 3 presents our model without student decisions; previous rank is a very strong predictor of current rank (p<0.01), with a coefficient of nearly 1. When we include the effects of student and school decisions in our model (presented in column 4), we find that the effect of previous rank remains significant, though the coefficient is smaller than in the model in column 3. As in the models for tier, we find that the only consequential effect of decisions is the percentage of applicants with LSATs above 160: each one-point increase in the percentage of applicants with LSATs above 160 increases a school's rank by about half a position.

Discussion

Consistent with our hypotheses, we find that the USN ranks act as a signal to law school applicants. Independent of school characteristics, we find that these ranks affect how many students apply to a school, how many of those applicants have exceptionally high LSAT scores, the percentage of applicants who are accepted, and the percentage of accepted students who matriculate. In short, the USN rankings have a significant impact on the admissions process in law schools. Furthermore, these effects tend to be stronger for the top schools that are ranked numerically than for the majority of schools that are ranked by tier.

The grouping of schools into tiers tends to segment the market for law school admissions. The only tier effect on the total number of applications is that the numerically ranked schools (the top 50) tend to receive, on average, about 180 more applications than other schools. The status that accompanies a top-tier ranking is a boon for applications. However, when we examine the effects of numerical ranks within the top tier, we find that each rank increases the applicant pool by nearly 19 applications. While modest in itself, this indicates that a difference of 10 ranks within the first tier yields roughly the same number of additional applications as does the difference between schools in the fourth tier and those in the first tier.

We see a similar pattern in the percentage of applicants with LSAT scores above 160. Compared to schools in the third and fourth tiers, second-tier schools have applicant pools with 1.3 percentage points more of these high scorers, while top-tier schools have 2.6 percentage points more. Within the top tier, each rank increases the percentage by 0.13 percentage points, so a difference of 10 ranking positions equals the difference between second- and fourth-tier schools, while a 20-rank difference equals the difference between first- and fourth-tier schools. The percentage of students with LSAT scores in the 150 to 159 range is highest in second-tier schools, and there are actually negative effects of being in the top two tiers on the percentage of applicants with LSAT scores in the 120 to 149 range. We interpret these findings to mean that applicants use the USN tiers to match themselves to schools based on their own LSAT scores. That is, students with high LSAT scores are more likely to apply to the top-ranked schools, while students with lower scores avoid the top-ranked schools in favor of the lower-ranked schools. This strongly suggests that the USN ranks help define how this market is segmented.

USN ranks also affect the percentage of accepted students who matriculate at the law school. Matriculation rates at second- and third-tier schools are approximately 2 percentage points higher each year than at schools in the bottom tier. While the effect for first-tier schools is of similar magnitude, it fails to attain significance. However, within the first tier, each one-rank increase raises the matriculation rate by 0.2 percentage points, so a difference of 10 ranks changes the rate by roughly the same amount as the gap between the bottom tier and the other tiers. Finally, the USN ranks have minor effects on the ability of schools to be more selective. While there are no significant effects for the tiers, schools within the first tier can be more selective the higher their rank.
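These rank-versus-tier equivalences follow directly from the rounded coefficients reported above; a quick back-of-the-envelope check, using those rounded values:

```python
apps_per_rank = 19             # additional applications per rank (Table 2)
lsat160_share_per_rank = 0.13  # percentage points of high-LSAT applicants per rank
matric_rate_per_rank = 0.2     # percentage points of matriculation rate per rank

print(10 * apps_per_rank)                      # 190, vs. a first-tier gap of roughly 177-180
print(round(10 * lsat160_share_per_rank, 2))   # 1.3, the second- vs. fourth-tier gap
print(round(20 * lsat160_share_per_rank, 2))   # 2.6, the first- vs. fourth-tier gap
print(round(10 * matric_rate_per_rank, 2))     # 2.0, roughly the tier gap in matriculation rate
```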

Overall, the USN ranks have a consistent and independent impact on which schools students apply to, where they can hope to be accepted, and where they eventually matriculate. But, as shown above, this is not the full extent of the effects of the rankings. The responses of students to the signals produced by the USN rankings can affect the future rank of schools in a way that compounds the initial effects of rank. The strongest effect of USN rank in our model is its influence on the schools to which top students apply, a variable that we also find to be a strong predictor of future rank. In other words, the fluctuations in the ranks of schools—which are commonplace due to the precise distinctions made in the rankings and are rarely anything more than random statistical variation—have a clear influence on where students with high LSAT scores apply, which in turn affects a school's future ranking. Moreover, this process affects the underlying quality of schools, as schools that drop in the rankings are unable to recruit as talented students as they could before. This spiraling effect highlights the importance of LSAT scores in determining the rank of schools and provides support for Klein and Hamilton's (1998) emphasis on the importance of LSAT scores to the determination of USN rank.

Two examples of the effects of rankings on actual schools provide concrete demonstrations of the somewhat abstract implications of our predictive models. The University of Akron School of Law moved from the fourth tier in 1999 to the third tier in 2000. In the following year, Akron received 88 additional applications (1,150, up from 1,062), an increase of 8 percent compared to an average increase of 5 percent for those schools that remained in the fourth tier in 2000. In addition, it decreased its acceptance rate from 48 to 43 percent while maintaining roughly the same number of matriculants (199 in 2000, 201 in 2001), resulting in an increase in its matriculation rate from 40 to 44 percent.

Within the top tier, the University of Wisconsin suffered a drop from 23 in the 1996 rankings to 43 in 1997, and the subsequent year's applicant pool decreased by nearly 20 percent (from 1,915 to 1,536), compared to an average decrease of 9 percent for all schools in the top tier in both 1996 and 1997. This drop in the rankings also affected the school's ability to recruit top applicants. After the drop to 43, the percentage of its applicants with LSATs above 160 went from 32 to 29 percent; combined with the drop in total applications, this translated to the loss of 170 applicants with LSATs above 160 (from 614 to 444).
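As a sanity check, the reported Wisconsin figures are mutually consistent to within rounding:

```python
# Figures reported above for Wisconsin's applicant pool before and after the drop in rank.
apps_before, apps_after = 1915, 1536
high_lsat_before, high_lsat_after = 614, 444

print(1 - apps_after / apps_before)          # ~0.198: a decrease of nearly 20 percent
print(high_lsat_before / apps_before)        # ~0.32: share of applicants with LSATs above 160
print(high_lsat_after / apps_after)          # ~0.29: the post-drop share
print(high_lsat_before - high_lsat_after)    # 170 fewer high-LSAT applicants
```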

Conclusion

In this article, we have demonstrated that the USN rankings provide a signal of law school quality that influences the behavior of both outside audiences and law schools. While the effects of these rankings on admissions have been the subject of much speculation in the law school community, this is the first study to examine these effects empirically and in detail. We find that the effects are both statistically significant and substantively meaningful to the affected schools. Even if their magnitude is not immense (20 applicants here, a few matriculation points there), these are real changes to the quality of the student body, and law school administrators recognize them as important. As we discuss below, this recognition prompts many secondary effects of the rankings, such as increased marketing activity and heightened attention to LSAT scores and career services statistics, that magnify their influence.

When considering the impact that these signals have had on law schools, it is also important to keep in mind that admissions is just one of the many aspects of legal education that are affected by the USN rankings. Administrators note that the rankings also affect how other outside constituencies—most important, employers, alumni, and university trustees—perceive and behave toward the school. As one dean explained,

The law school faculties and the smart administrators all say, “This [the rankings] is a bunch of hooey, we don't care about this,” until they drop and the board of trustees says, “Hey, you're dropping; why should we give you more money?” And the board of visitors from the law school say, “Man, your school's really going to pot and you haven't changed a thing…. Big changes need to be made here.” And your monetary support—the alumni—say, “Well I'm not sure I want to support a school that's going in the wrong direction.” And your money starts to dry up, and you go “We have got to have the money; we can't afford to lose funding or else it will spiral downhill and we will be a worse law school.”

In addition, many administrators note that internal constituencies such as current students, faculty, and even members of the administration itself are affected by changes in rank; among the manifestations of these effects are morale changes, transfers, changes in the ability to attract new faculty, and an increase or decrease in job security for administrators. While these effects fall outside the scope of our current study, they suggest that the admissions process is just one of many nodes within the institution that are affected by the rankings, and a valuable line of future research would be to examine if and how the behavior of these constituencies is influenced by the USN rankings.

In this article, we have also demonstrated how the process of creating market signals can have unintended effects on the phenomenon that they are designed to simply measure or represent. While economic theories of signaling focus almost exclusively on what signals do (e.g., provide consumer information), sociologists have tended to point out the limitations of these signals, such as the disjuncture between what is signaled and the reality of what is being represented (Podolny 1993) and the influence of the signal independent of the phenomenon it is measuring (Benjamin & Podolny 1999; Uzzi & Lancaster 2004). Drawing on the insights of research on the effects of quantification (Porter 1995; Espeland 1998; Espeland & Stevens 1998), the present findings extend the sociological line of critique by demonstrating how the process of signaling itself—that is, how the signals are presented—can distort in consequential ways that which is being signaled, regardless of its methodological accuracy. We suggest two ways in which the signal created by USN has distorted law school quality.

First, by precisely quantifying the quality of each law school and then creating rigid and fine-grained distinctions between schools, USN misrepresents the actual distribution of law school quality even if its own measure of quality is accepted as accurate. Many of the exacting distinctions made by USN, especially those toward the center of the distribution of law schools, do not indicate actual differences in law school quality. As our results demonstrate, however, this false precision has significant consequences for law schools because small differences in rank appear meaningful to influential outside audiences.

The effect of these small differences supports the views of administrators who claim that small changes, often caused by random fluctuations in the statistical measures used by USN, can have important consequences for a school by influencing the quality of its student body. In this light, the redistribution of resources in which many schools engage to maintain or raise their ranking is a rational, if unfortunate, strategy. Because a decrease in ranking can do real damage to a school, administrators often feel obliged to prevent such a fall; one dean expressed this well:

It would be stupid in a competitive environment not to do the things that are better for the USN, if it could ultimately lead you to getting worse students overall. So the cost-benefits of making decisions cannot be done without considering what the external effect may be. I mean, I care about rankings because they hurt us if we don't get good rankings. I want to have a better ranking because it means that we'll have better students and they'll have more opportunity.

Second, the distortion created by these presentational choices can be compounded as future decisions are made according to these signals. Our analysis shows how the effects of the rankings on the admissions process can create a feedback loop that appreciably increases the magnitude of these consequences for institutions that experience changes in rank. This spiraling effect would unfold as follows: a school at the cusp between tiers experiences a statistically insignificant change in its numerical rank that moves it from one tier to another; the benefits or detriments the school experiences because of this change then push it closer to the mean for the new tier; finally, this movement toward the mean solidifies the school's position in that tier. In this way, the consequences of a change in rank extend beyond the following year, and these compounding effects can shape, over the long term, the quality of students the school can attract and thus the quality of the school itself. This is a case in which the rankings, by transforming insignificant variations into significant consequences, play a clear role in creating—rather than simply reflecting—law school quality. While it is true that the rankings, as their advocates contend, provide useful and accessible information to prospective students and other audiences, our findings suggest a more careful consideration of the unintended, and sometimes unnoticed, consequences that these evaluations produce.
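
To illustrate this dynamic, the toy simulation below is entirely our own and entirely hypothetical: the tier cutoff, the size of the applicant-pool effect, and the functional form are assumptions chosen only to display the mechanism, not quantities estimated from our data.

```python
# Hypothetical toy model of the feedback loop described above: crossing a
# tier boundary changes the applicant pool, which changes next year's score,
# which solidifies the school's position in the new tier. All numbers are
# illustrative assumptions, not estimates.

TIER_BOUNDARY = 50.0  # hypothetical score cutoff between two tiers

def next_score(score: float, pull: float = 1.0) -> float:
    """One year of the loop: schools above the cutoff attract stronger
    applicants and drift up; schools below it lose them and drift down."""
    applicant_effect = pull if score >= TIER_BOUNDARY else -pull
    return score + applicant_effect

def trajectory(start: float, years: int = 5) -> list[float]:
    scores = [start]
    for _ in range(years):
        scores.append(next_score(scores[-1]))
    return scores

# Two schools of essentially identical quality, separated only by a
# statistically meaningless fluctuation across the tier boundary:
print(trajectory(50.5))  # [50.5, 51.5, 52.5, 53.5, 54.5, 55.5] -> settles in the upper tier
print(trajectory(49.5))  # [49.5, 48.5, 47.5, 46.5, 45.5, 44.5] -> settles in the lower tier
```

Even in this stripped-down form, the two trajectories diverge permanently from an initial difference that carries no information about quality, which is the sense in which the rankings can create rather than merely reflect differences among schools.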

Appendix: Methodology of USN Law School Rankings (2003)

Footnotes

This article was originally presented at the 2003 annual meeting of the Law and Society Association and received the 2004 ASA Sociology of Law Section Award for best graduate student paper. We thank Steve Demuth, Wendy Espeland, Gary Gephart, Robert Nelson, James Oldroyd, and Arthur Stinchcombe for their insightful comments on previous drafts; we also thank the anonymous reviewers at LSR and Herbert M. Kritzer for their suggestions and guidance. The research for this article was supported by funding from the Law School Admissions Council (LSAC). The opinions and conclusions contained in this report are those of the authors and do not necessarily reflect the position or policy of the LSAC.

1 Rankers of note in the United States include Atlantic Monthly, Business Week, Financial Times, U.S. News & World Report, and Wall Street Journal. Internationally, prominent rankings of universities are published by magazines in, for example, Australia (Australian Good University Guide), Canada (Macleans), Germany (Der Spiegel, Stern, Focus), the United Kingdom (Times Higher Education Supplement), and Asia (Asiaweek); there are also several rankings of institutions worldwide (e.g., in Asiaweek and Times Higher Education Supplement). This list does not include the multitude of rankings published by academics or institutes across disciplines and throughout the world.

2 Caron and Gely (2004) write, for example, “A tsunami of accountability and transparency is sweeping across American law and society. One manifestation is the insatiable public demand for ever more and increasingly sophisticated rankings in all aspects of American life” (2004:1553).

3 The rankings have prompted official responses from, for example, the Law School Admissions Council (LSAC), the Association of American Law Schools (AALS), and the National Association for Law Placement (NALP).

4 See McDonough et al. (1997) and Monks and Ehrenberg (1999) for evidence that the rankings of undergraduate institutions affect the perceptions and decisions of prospective students.

5 Over the past two decades, for example, there has been an enormous increase in the role of rankings in legal practice. In the United States, law firms are ranked by subjective ratings of other lawyers (The Best Lawyers in America), size (The National Law Journal 250 and the Of Counsel 500), revenues (The AmLaw 100), the amount of pro bono work performed by their attorneys (The American Lawyer), and their commercial activity (The American Lawyer Corporate Scorecard). And this is not just an American phenomenon. Legal newspapers in Europe, such as Chambers & Partners and The Lawyer 100 in the United Kingdom and Décideurs Juridiques et Financiers in France, also rank law firms, and The Lawyer ranks international firms in terms of the amount of business they do in the EU in The Lawyer Euro100.

6 All quotations and statements about the opinions of law school administrators and faculty are taken from a related study for which 135 in-depth interviews were conducted (Espeland & Sauder 2004). Among those interviewed were deans, associate deans, deans of admissions, directors of career services, and faculty from more than 50 law schools across the United States.

7 In 2003, 178 of 186 of the deans of ABA-accredited law schools signed this letter. The letter, entitled “Deans Speak Out,” is published along with a list of signers on the LSAC's Web site at http://www.lsac.org/deans-speak-out-rankings.

9 While the data that we have do not allow us to compute the USN score exactly (insufficient data are provided by the magazine), we were able to obtain a good estimate of each school's standardized score by employing a model based on the information about each school that USN does provide. Specifically, for each year we ran a linear regression of the factors that constitute the point total on the standardized scores that USN provided for the top 50 schools. This yielded an R2 between 0.983 and 0.986 in each year, meaning that the model explained more than 98 percent of the variation in the scores of these top 50 schools. Because USN does not provide standardized scores for schools in the second through fourth tiers, we used the coefficients of this model to predict standardized scores for those schools. We then combined the actual scores for the top 50 schools in each year with the predicted scores for the other 130 schools.
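
A minimal sketch of this imputation step is given below; it is our own illustration rather than the code used for the analysis, and the DataFrame layout and column names ("usn_score" for the published standardized score, plus a list of factor columns) are hypothetical.

```python
# Sketch of the imputation described in this footnote (illustrative only):
# fit the ranking factors to the published standardized scores of the top 50
# schools, then predict scores for the schools USN does not score.
import pandas as pd
from sklearn.linear_model import LinearRegression

def impute_usn_scores(year_df: pd.DataFrame, factor_cols: list[str]) -> pd.Series:
    scored = year_df[year_df["usn_score"].notna()]    # top 50: published scores
    unscored = year_df[year_df["usn_score"].isna()]   # tiers 2-4: no published score
    model = LinearRegression().fit(scored[factor_cols], scored["usn_score"])
    # The footnote reports an R2 between 0.983 and 0.986 for this fit each year.
    predicted = pd.Series(model.predict(unscored[factor_cols]), index=unscored.index)
    return year_df["usn_score"].fillna(predicted)
```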

10 The opposite effect—a compounding rise in ranking and quality of students—is also sometimes mentioned, but with much less frequency. See Stabile (2000) for a report on how the University of Toledo College of Law attempted to take advantage of just such an effect.

11 Prior to the 2001 edition, qualitative descriptions were published by the ABA in a separate volume. Many view the merger of the ABA and LSAC volumes as an attempt to provide a better source of alternative information for prospective students in order to counter the influence of the USN rankings.

12 Due to ties, the number of schools in the top 50 is sometimes more than 50. The top 50 schools are determined by the top 50 scores, so schools that have identical scores receive the same rank.

References

Benjamin, Beth A., & Podolny, Joel M. (1999) “Status, Quality, and Social Order in the California Wine Industry,” 44 Administrative Science Q. 563–89.
Berger, Mitchell (2001) “Why the U.S. News and World Report Law School Rankings Are Both Useful and Important,” 51 J. of Legal Education 487–502.
Boulding, William, & Kirmani, Amna (1993) “A Consumer-Side Experimental Examination of Signaling Theory: Do Consumers Perceive Warranties as Signals of Quality?,” 20 J. of Consumer Research 111–23.
Caron, Paul L., & Gely, Rafael (2004) “What Law Schools Can Learn from Billy Beane and the Oakland Athletics,” 82 Texas Law Rev. 1483–554.
Elsbach, Kimberly D., & Kramer, Roderick M. (1996) “Members' Responses to Organizational Identity Threats: Encountering and Countering the Business Week Rankings,” 41 Administrative Science Q. 442–76.
Espeland, Wendy Nelson (1998) The Struggle for Water: Politics, Identity and Rationality in the American Southwest. Chicago: Univ. of Chicago Press.
Espeland, Wendy Nelson, & Sauder, Michael (2004) “Quantitative Authority and the Reflexivity of Rankings.” Paper presented at the Annual Meeting of the Law and Society Association, Chicago, IL (3–7 June).
Espeland, Wendy Nelson, & Stevens, Mitchell (1998) “Commensuration as a Social Process,” 24 Annual Rev. of Sociology 312–43.
Greene, William H. (2000) Econometric Analysis, 4th ed. Upper Saddle River, NJ: Prentice Hall.
Ippolito, Pauline M. (1990) “Bonding and Nonbonding Signals of Product Quality,” 63 J. of Business 41–60.
Klein, Stephen, & Hamilton, Laura (1998) “The Validity of the U.S. News and World Report Rankings of the ABA Law Schools,” Study commissioned by the Association of American Law Schools, http://www.aals.org/validity.html (accessed 4 November 2005).
Korobkin, Russell (1998) “In Praise of Law School Rankings: Solutions to Coordination and Collective Action Problems,” 77 Texas Law Rev. 403–28.
Law School Admissions Council (with the American Bar Association, Association of American Law Schools) (1996–2003) The Official Guide to ABA-Approved U.S. Law Schools. Eds. W. Margolis, et al.
Lempert, Richard (2002) “Pseudo Science as News: Ranking the Nation's Law Schools,” Paper presented at the Association of American Law Schools, New Orleans, LA (3–5 Jan.).
McDonough, Patricia, et al. (1997) “College Rankings: Who Uses Them and With What Impact,” Paper presented at the Annual Meetings of the American Educational Research Association, Chicago, IL, March.
Milgrom, Paul, & Roberts, John (1986) “Price and Advertising Signals of New Product Quality,” 94 J. of Political Economy 796–821.
Monks, James, & Ehrenberg, Ronald G. (1999) “The Impact of U.S. News and World Report College Rankings on Admissions Outcomes and Pricing Policies at Selective Private Institutions,” National Bureau of Economic Research Working Paper #7227.
Nelson, Phillip (1970) “Information and Consumer Behavior,” 78 J. of Political Economy 311–29.
Podolny, Joel M. (1993) “A Status-Based Model of Market Competition,” 98 American J. of Sociology 829–72.
Porter, Theodore M. (1995) Trust in Numbers. Princeton: Princeton Univ. Press.
Sauder, Michael, & Espeland, Wendy Nelson (2005) “Strength in Numbers? The Advantages of Multiple Rankings,” Paper presented at the Next Generation of Law School Rankings Symposium, Indiana University School of Law (15 April).
Schmalbeck, Richard (2001) “The Durability of Law School Reputation,” 48 J. of Legal Education 568–90.
Spence, Michael (1974) Market Signaling: Informational Transfer in Hiring and Related Processes. Cambridge: Harvard Univ. Press.
Stabile, Tom (2000) “How to Beat U.S. News,” 10 National Jurist 19.
Uzzi, Brian, & Lancaster, Ryon (2004) “Embeddedness and Price Formation in Corporate Law Markets,” 69 American Sociological Rev. 319–44.
Whitman, Dale (2002) “Doing the Right Thing,” The Newsletter of the Association of American Law Schools 14 (April).
Figure 1. Distribution of USN Quality Scores, 2000–2003.

Figure 2. Comparison of Distributions.

Table 1. Number of Schools That Have Changed Tier or Rank, 1994–2003.

Table 2. Pooled Cross-Section Fixed-Effects Prais-Winsten Regression of the Effects of USN Ranks on Student Decisions.

Table 3. Pooled Cross-Sectional Random-Effects Prais-Winsten Regressions of the Effects of USN Rank on School Decisions.

Table 4. Ordered Logit and Random-Effects Regression of the Effects of Student and School Decisions on USN Rank.