Nearly a half-century later, Christopher Jencks's 1969 quip that “like a veritable Bible, the ‘Coleman Report,’ is cited today on almost every side of every major educational controversy” continues to ring true.Footnote 1 Whether the issue is the efficiency of schools, the imperative of integration, or the capacity of public education to solve social problems, the debate—public and scholarly—occurs in the shadow of the Coleman Report.
Though it is difficult to deny the Coleman Report's singular influence on conversations about American schooling, historians of education have an important role in properly situating it not just in the research on inequality or school effectiveness but also within larger historical narratives. There are two such narratives, in particular, that I hope to highlight here. The first concerns the historical development and operation of the “American education state”—that is, the variety of people, institutions, and governance structures that have both composed and constructed the American public education system.Footnote 2 A perennial operational challenge at the federal level has been developing ways to comprehend the sprawling system.Footnote 3 Though the collection and dissemination of statistics had been the responsibility of the Department of Education since 1867, as Douglas Reed notes in the opening of his book, nearly a hundred years later the federal government still lacked a basic capacity to gather information about the operation of local schools.Footnote 4
A partial solution to this information problem points to the second major story: the development of the infrastructure the federal government built to inform itself and the public about the operation of America's schools. Historians have increasingly documented the ways in which quantification serves as a technique of governance and a tool of statecraft, as well as the ways in which the data systems designed to produce these quantifications shape and become entangled with the underlying phenomenon.Footnote 5 Narratives of the history of education research have tended to focus on the shifting role and strategies of the federal government in supporting educational research or on the general failure of federally funded research to produce a basic science of education.Footnote 6 Largely missing from this story is the inclination and capacity (often secured through contracts) of those in the federal government to produce its own information about schools. The Coleman Report is part of this story—the provision in the Civil Rights Act requiring the survey reflects, after all, this desire for information—but the story does not begin with Coleman or the Civil Rights Act of 1964.
In this essay, I try to provide a view into this story by examining the development of a new kind of federally funded national education data project: the longitudinal dataset. Enabled by advances in sampling design, computer data processing, and the expanded university and think-tank research infrastructure of the Cold War, the national longitudinal dataset was unique among prior federal data-collection efforts, both in its intention to provide a nationally representative sample of American schools and students and in its aim to capture the relationship between student traits and abilities, school characteristics, and life outcomes. The first of these efforts, entitled Project Talent (1960–1975), spanned the commission, release, and reaction to the Coleman Report, and therefore provides a useful context for tracing broader shifts in the thinking about the role of schools in shaping life trajectories.
This context helps illustrate the way in which “manpower” development and the application of quantitative techniques such as systems analysis continued to inform federal data-collection efforts and interpretation, even as the rhetoric of education policy became increasingly studded with discussions of race, educational equity, and equal opportunity during the 1960s. Just as importantly, it highlights the ways in which large, nationally representative surveys like Project Talent and the Coleman Report invited policymakers and scholars to think in increasingly national, decontextualized ways about the operation and effects of American schools in general. That these data were, to an unprecedented degree, large and accessible enough to allow for analysis and reanalysis also gave scholars and policymakers the opportunity to draw conflicting conclusions about the character of American schools—the contrasting, but equally stylized, statistical portraits framing the need for different research and policies going forward.
Scholars often trace the interest in researching educational outcomes or examining the relationship between inputs and outcomes to the Coleman Report and debates over educational opportunity. However, prior to the passage of the Civil Rights Act of 1964, this relationship had become a matter of interest for a growing number of analysts who sought to understand how America could optimize its investment in schools in order to develop the intellectual talents necessary to win the Cold War. In studying this relationship, analysts sought to utilize the quantitative analytic techniques developed to guide military weapons development, strategy, and investment during World War II to solve the problem of school organization. Thus, in 1959, at the behest of the Ford Foundation, two analysts at RAND Corporation applied the company's trademark analytic tool—systems analysis—to the study of a school system.
RAND was one of a growing number of independent, though largely military-funded, research organizations that sought to develop quantitative techniques capable of analyzing the increasingly complex and interrelated systems that make up modern society.Footnote 7 Whether it was the design of urban spaces, the electrical grid, health care systems, or schools, researchers believed that applying these techniques would improve the design and operation of these systems in a way that would optimize their outputs.Footnote 8 The impetus for their development, and one factor driving their proliferation, was the growing conviction among many social scientists that traditional analytic tools were insufficient to guide decision-making in a society composed of ever more complex systems and awash in ever more data on their operation. Tools that could structure and simplify this complexity in a way that made rational choices possible were thus at a premium.Footnote 9
RAND's systems analysis approach combined and elaborated a variety of quantitative techniques, including cost-benefit analysis and a branch of military analysis known as Operations Research, in ways that provided for a quantitative comparison of a system's output given a variety of different system-input specifications. The resulting analysis would allow the analyst to recommend the optimal choice among a variety of competing options. This overriding concern for choice and optimization was evident in the RAND study of education. Emphasizing that from a system perspective no difference existed between an air force radar system, a business, or a school district— “in all of these systems there are various ways of combining elements or inputs in order to produce outputs”—systems analysis provided the opportunity to “‘try out’ innovations” by manipulating various inputs within the analysis and calculating their effects on cost and output. Though their analysis was preliminary, involving records from a single school district, the analysts concluded that their study demonstrated that “it would soon be feasible to make comparisons … that can help administrators and others choose improved educational systems [emphasis added]”—ones that maximized any number of potentially desired outcomes, from scholastic achievement and creativity to “social poise” and physical health.Footnote 10
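The logic the RAND analysts describe—specifying a system's inputs, projecting cost and output for each alternative, and recommending the optimal feasible choice—can be sketched schematically. The configurations, weights, and costs below are purely hypothetical illustrations, not figures from the RAND study itself:

```python
# A schematic sketch of the systems-analysis logic described above:
# enumerate alternative input configurations, score each on cost and
# projected output, and recommend the best option within budget.
# All inputs, weights, and dollar figures are hypothetical.

from dataclasses import dataclass


@dataclass
class Configuration:
    name: str
    class_size: int            # students per teacher
    spending_per_pupil: float  # dollars


def projected_output(cfg: Configuration) -> float:
    """Toy 'output' score: smaller classes and higher spending help
    (illustrative weights only, standing in for a real output model)."""
    return 100.0 - 0.5 * cfg.class_size + 0.002 * cfg.spending_per_pupil


def best_option(options: list[Configuration], budget: float):
    """Among options affordable within budget, pick the one that
    maximizes projected output; return None if none is feasible."""
    feasible = [c for c in options if c.spending_per_pupil <= budget]
    return max(feasible, key=projected_output) if feasible else None


options = [
    Configuration("status quo", class_size=30, spending_per_pupil=400.0),
    Configuration("smaller classes", class_size=22, spending_per_pupil=550.0),
    Configuration("enriched program", class_size=30, spending_per_pupil=600.0),
]
choice = best_option(options, budget=575.0)
```

The point of the technique, as the analysts put it, was that such comparisons let an administrator "'try out' innovations" on paper; the usefulness of the recommendation, of course, depends entirely on having realistic data behind the output model, which is precisely the problem the next paragraphs take up.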
The primary obstacle to achieving feasibility was not so much the complexity of the analysis, which they acknowledged, as it was the paucity of available data to use in their analyses. In contrast to so many other fields, there was simply insufficient school system data to feed into the systems analysis to produce useful comparisons of alternative choices. Still, the analysts took solace in the fact that the federal government, with appropriations made through the Cooperative Research Act (1954), had launched two massive data-collection efforts, the results of which they believed “will tell us what we need to know about the relationship between school characteristics and educational output.”Footnote 11
This research captures not only the new ambition to view American schools as a rational system composed of variables available for manipulation—whether via hypothetical analysis or policy—but also the way in which these views and the analytic methods that informed them shaped data collection on America's schools. It would be a former RAND analyst, Alexander Mood, who transformed the Civil Rights Act's call for a survey of educational opportunity into a massive quantitative survey analysis.Footnote 12 The desire for standardized data on America's schools proved easier to dream than to deliver. The experience of the early data-collection efforts referenced by the RAND analysts would reveal just how much the idiosyncratic reality of American schooling diverged from their vision, how much work it would take to bring it into view, and how much statistical airbrushing would be required to render it amenable to analysis.
One of these federally funded efforts was Project Talent, the most ambitious education research project attempted to date.Footnote 13 The project was led by John C. Flanagan, a professor at the University of Pittsburgh and founder of the American Institutes for Research (AIR) think tank. Flanagan, a Harvard-trained psychologist, had spent World War II in the Army Air Forces Aviation Psychology Program designing test batteries more capable of predicting which recruits would succeed as pilots and which were better suited for alternative roles like bombardier or navigator.Footnote 14 After the war, he hoped to continue researching in this vein and to use AIR as a vehicle for applying these techniques to governmental and private-sector problems involving the development and selection of human resources.Footnote 15
In conceiving of Project Talent, Flanagan merged educators’ long-standing concerns with individual development and vocational guidance with contemporary Cold War concerns for maximizing American productivity.Footnote 16 These concerns included not only the narrow issue of identifying and expanding American scientific expertise but also the broader matters of maximizing labor force productivity by efficiently matching people with jobs well-suited to their abilities and educational opportunities to further develop their skills.Footnote 17 Flanagan believed these problems could be more effectively addressed if researchers could understand the fundamental relationships between educational systems, the development of student abilities, and their ultimate career outcomes. Knowing this relationship would allow both policymakers to better allocate school resources and school officials to provide more timely information to students about their likely career trajectories. As a sales brochure for the project's findings proclaimed: “To discover youth's aptitudes, talents, and creativity … to meet the country's acute need for trained personnel in all fields … Project Talent will yield accurate facts, understanding, and knowledge to turn potentialities into skilled manpower.”Footnote 18
The only way, in Flanagan's view, to ascertain these relationships—to know whether and how the potential was fulfilled—was to conduct a massive, longitudinal “census” of American talent and survey of American school organization.Footnote 19 Only a massive survey carried out over an extended period of time would allow him to determine the relationship between student talents, school variables, and career success across the entire occupational spectrum. The final design called for a nationally representative sample of 440,000 American high school students (roughly one out of twenty) and 1,353 high schools, with follow-up surveys conducted with students at one, five, and ten years after graduation. Beyond the immense logistical challenge the study design posed, the biggest obstacle to the study's execution was that, while Flanagan proposed a study of American talent, there were no standardized definitions for school features, pathways, or curricula. If Flanagan was going to bring into view a picture of the American school and the American student, he would have to do so not only through conducting a survey but through its construction as well. To help with this task, Flanagan enlisted a technical panel of thirty-one prominent researchers—including Henry Chauncey, E. Franklin Frazier, Samuel A. Stouffer, and Robert L. Thorndike—to develop from scratch a test battery that ultimately consisted of twenty-five academic and psychological subtests, a student interest and activity inventory, a measure of personal preferences, and two short open-ended essays, the entirety of which took two-and-a-half days to administer.Footnote 20
Despite Flanagan's hopes, the resulting billion pieces of data mostly offered support for the “small relationship between the amount of student learning” and such school variables as “school size, class size, school building age, rural versus urban location, and dropout rate.” Flanagan also found considerable evidence that socioeconomic status was at least as important as academic achievement in predicting college enrollment.Footnote 21 Flanagan spun these findings as evidence of ineffective guidance programs and the failure of American high schools to develop individual talent. But they did not come close to fulfilling the promise of being able to divine the relationship between school characteristics, individual talent development, and career success. Though Flanagan and his associates were fond of likening standardized testing to the physical scientist using X-rays to study the crystalline structure of molecules, and the Project Talent data bank to the centuries of astronomical and botanical observations that led to scientific breakthroughs for Johannes Kepler and Charles Darwin, the seeming failure of his immense dataset to reveal the core structure of the school system was deeply disappointing.Footnote 22
This failure has led many historians to ignore or dismiss Project Talent as, in the words of one historian, “an exercise in overkill.”Footnote 23 But I want to suggest that the contemporary response to Project Talent provides insight into a major shift in educational research, one embodied in both Project Talent and the Coleman Report, and into the subsequent direction of large-scale, federally directed research surveys.
First, it showed the intent of researchers to nationalize the conversation about the conception and quality of American schooling. At a time when many scholars, including James Conant, expressed skepticism about the value and wisdom of generalizing about the “American school” given the history of local control, Project Talent demonstrated that both the technical tools and analytic techniques necessary to conjure a stable, if stylized, image of the American school system had arrived.Footnote 24 While considerable local and state variation remained—and Coleman's research would highlight the importance of within-school variation—conversations were increasingly driven by decontextualized generalizations about national and regional averages. Ironically, concerns state and local officials expressed about researchers’ ability to make direct comparisons between districts forestalled alternative designs that would have allowed for greater discussion of state and local variation—something that affected not only Project Talent and the Coleman Report but also the design of the National Assessment of Educational Progress (NAEP).Footnote 25 Beyond discussing schooling in national terms, Project Talent set a new standard for evaluating school effectiveness both longitudinally and in terms of life and career outcomes.Footnote 26
Second, the large-scale, quantitative, computer-readable data these surveys produced allowed the datasets themselves to become part of the story, as scholars analyzed and reanalyzed the data in an effort to extract new insights and discern its “real” meaning. Of course, while these massive datasets offered an unprecedented opportunity to study the relationships between students and schools, they did not do so equally. The resources and technical abilities necessary to analyze this data clearly favored scholars with statistical training at larger institutions with computing capabilities. To the extent that this data had an outsized influence on future research and policy discussions because of its size and national representation, it did so in a way that reflected the specific concerns of these scholars and the constraints of the survey creators. For instance, despite the vast amount of data Project Talent collected on students—and subsequent widespread use by scholars—one variable was omitted: race. This decision reflected the project's concern for individual development, not equal opportunity or racial justice.Footnote 27
Finally, despite the hope that the unprecedented size and detail of the Coleman Report and Project Talent would reveal the relationships between students, schools, educational opportunity, and career trajectories, they ultimately cast as much shadow as illumination. Whether one chose to interpret the darkness or the light—and what one saw in those spaces—offered a Rorschach test of ideological and methodological commitments. Christopher Jencks, for instance, argued repeatedly that, even beyond the Coleman Report, Project Talent provided the “best available evidence” of the inability of schools—regardless of their characteristics—to address inequity.Footnote 28 Others, however, like economist Alice Rivlin, who served as President Johnson's Assistant Secretary for Planning and Evaluation, explained away the Project Talent results by arguing that the dataset was large, but not large enough. What was needed was “a longitudinal data system for keeping track of individual students as they move through school”—a critique of Coleman's “snapshot” view and Project Talent's failure to collect information on course-taking and specific school resources directed at individual students. The real value of Project Talent, Rivlin argued, was that it justified the funding of “more complex and expensive longitudinal studies”—studies for which Project Talent served as the explicit blueprint.Footnote 29
Though the next federal longitudinal survey (NLS-72) would update its statement of purpose to include the study of “access to educational … opportunity,” the commitment to conceptualize education at the national level and to view schools as systems composed of different inputs, but nevertheless governed by generalizable rules that could be made visible through statistical analysis, remained the foundation of the enterprise.Footnote 30 Though Rivlin conceded that “the problem may be that the real world is not organized to generate information about [economic] production functions, no matter how cleverly the statistics are collected,” these concerns did not prevent policymakers and scholars over the last half century from trying.Footnote 31 These efforts, the choices they involved, and the consequences for how we have conceptualized and evaluated the American education system remain an important, and underexamined, legacy of the Coleman Report.