Introduction
From the World2 and World3 models developed by Jay Forrester and the Meadows’ team in the 1970s to contemporary Integrated Assessment Models (IAMs), global computer models have served as virtual laboratories for studying economic, environmental and technological concerns simultaneously. As histories of post-war ‘futurology’ show, modelling has been a site of confrontation between diverse perspectives on the long-term evolution of society, a site that is itself an interface between systems analysis, economics, and energy research and development.Footnote 1
Global computer models emerged as tools for the exploration of the future in the early 1970s.Footnote 2 In the wake of The Limits to Growth, the ‘Great Debate’ on the future of the world led to the proliferation of long-term and large-scale models.Footnote 3 These grand futurological visions lost steam in the late 1970s, but in the 1980s and 1990s, a new kind of global computer modelling emerged to address climate change. IAMs, a category of models that represent interactions across human, environmental and technological systems and their long-term evolution, have become a dominant form of expertise on climatic futures.Footnote 4 While IAM modellers’ purposes, ambitions and communities differ from those of the 1970s futurologists, they are to some extent their heirs.Footnote 5 It is therefore surprising that there are no historical studies investigating the connections between the 1970s global computer models and the more recent development of IAMs.
Over the last decade, historians and sociologists of science have studied the emergence of futures research and cybernetics during the Cold War, in the East, West and South.Footnote 6 They have documented the establishment of specific communities and infrastructures of expertise on the future and shown how the future was made into ‘a scientific interest’ as well as ‘an object of control and intervention’.Footnote 7 They have highlighted how The Limits to Growth in particular marked a turning point: it introduced a computer-based approach to futurology and brought together concerns about economic growth and about the environment.Footnote 8
Recently, another strand of literature has developed around IAMs and their role as providers of the greenhouse gas emissions scenarios assessed by the Intergovernmental Panel on Climate Change (IPCC).Footnote 9 These studies have emphasised the ‘anticipatory politics’ of IAMs and analysed how they perform as ‘mapmakers’ or even ‘corridor-makers’ that constrain the future.Footnote 10 Their authors criticise the IAMs’ implicit techno-optimism and their focus on cost-optimisation. These discussions, largely spurred by scepticism of IAMs’ reliance on speculative ‘negative emissions technologies’, are re-igniting debates on economic growth reminiscent of those that accompanied The Limits to Growth.Footnote 11 To connect the two scholarly literatures on contemporary IAMs and Cold War futurology, we focus on how computer modellers have understood and represented technological change.
A common point across all these models is indeed that they incorporate, in a more or less mathematically formalised manner, technological change as one of the forces propelling the long-term evolution of society. In fact, the way these models represent technological change has been a crucial point of contention since the 1970s, highlighting tensions between neoclassical economists’ view of technological change and the ambitions of global modellers.
In this article, we analyse the different ways in which technological change has been included in global models since The Limits to Growth. To do so, we focus on four moments when the representation of technological change in global models was debated. We start with the publication of The Limits to Growth in 1972 and continue with an analysis of the 1970s futures debate, when several global models were developed to provide more optimistic alternatives to the Club of Rome's prediction of collapse. We then focus on the IIASA Energy Systems Program, whose main results were published in Energy in a Finite World in 1981 and which marked a shift towards a technology-focused approach. Finally, we examine the discussions about the ‘endogenisation of technological change’ in IAMs in the late 1990s–early 2000s, when modellers turned their focus to climate change. Our study is based on the collection and analysis of the reports and publications presenting the results of the global models considered.Footnote 12 We also compared these texts with critiques and responses published in the literature.Footnote 13 To retrace debates on the endogenisation of technological change in IAMs, we studied discussions following the 1996 ‘WRE’ paper.Footnote 14 In particular, we relied on proceedings from several IIASA workshops on integrated computer modelling and technological change.Footnote 15
We argue that the development of computer models to address global problems turned technological change into an object of scientific investigation and a matter of collective concern. Efforts to model technological change were partly shaped by modellers’ relationship to economics, which was sometimes antagonistic, sometimes collaborative. While the futurologists of the 1970s viewed technological change as a social phenomenon and integrated it in their models as a translation of their views on the adaptability of society, participants in the IIASA Energy Systems Program started to envision it as an object of study with its own dynamics. The conception of technological change as a phenomenon that can be quantified and predicted was further developed when integrated assessment modellers tackled the integration of technological change as a dependent parameter in their models, rather than as an independent variable. However, as discussions on how to represent technological change became more specialised, they were also disconnected from the open confrontation of worldviews that characterised the 1970s futures debate.
Technological Change and The Limits to Growth
The Limits to Growth was the first attempt at addressing global problems using computer models.Footnote 16 A collaboration between the recently established Club of Rome and Jay Forrester's group at the MIT Sloan School of Management, its ambition was to analyse ‘the predicament of mankind’. It was published as a book in 1972 and soon became a best seller. The book's prediction that, if growth patterns remained unchecked, civilisation was doomed within a century shocked readers and spurred academic controversy.Footnote 17 However, the book's most important legacy was not its message so much as its methods: The Limits to Growth established global models as tools to debate a new kind of ‘global problem’ and connected environmental issues with futures research.Footnote 18
The report was based on Forrester's systems dynamics approach, which was originally developed to study industrial management (and later urban planning). After his first contacts with the Club of Rome, Forrester developed the computer models World1 and World2.Footnote 19 The Limits to Growth used the third version of the model, World3, developed by Forrester's student Dennis Meadows and his team.Footnote 20 These models were applications of Forrester's systems dynamics approach at a global scale: they represented the world as a system of stocks and flows connected by feedback loops. The World models represented five main variables: population, capital, pollution, food production and non-renewable resources. They were highly aggregated, with the world represented as a single region. Despite the efforts of Meadows’ team, the empirical data used in World3 remained sparse and imperfect.Footnote 21
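To give a sense of the mechanics, the following is a deliberately minimal sketch in Python – with hypothetical coefficients, not the actual World3 equations, which coupled five sectors through dozens of feedback loops and tabulated nonlinearities – of how two stocks connected by feedback loops can generate the overshoot-and-collapse behaviour at the heart of the World models:

```python
# A minimal stock-and-flow system in the spirit of systems dynamics.
# Coefficients are hypothetical; this is NOT the World3 model itself.

def simulate(years=200, dt=1.0):
    resources, capital = 1.0, 0.05        # normalised stocks
    trajectory = []
    for step in range(int(years / dt)):
        output = capital * resources       # production draws on both stocks
        investment = 0.10 * output         # reinvestment: positive feedback loop
        depletion = 0.05 * output          # extraction: negative feedback loop
        capital += (investment - 0.02 * capital) * dt   # growth minus depreciation
        resources = max(resources - depletion * dt, 0.0)
        trajectory.append((step * dt, capital, resources))
    return trajectory

# Capital first grows exponentially, then collapses once the resource
# stock it depends on has been drawn down: overshoot and collapse.
for t, k, r in simulate()[::40]:
    print(f"t={t:5.1f}  capital={k:.3f}  resources={r:.3f}")
```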
The Limits to Growth warned against the adverse consequences of economic growth. The Italian industrialist Aurelio Peccei created the Club of Rome as an elite coalition of international bureaucrats, entrepreneurs and scientists who were worried about the problems generated by economic growth.Footnote 22 Along with the rise of the environmental movement, The Limits to Growth challenged the belief in progress and the technological optimism that had characterised futurology up until this point.Footnote 23
The strong stance and novel methodology of The Limits to Growth sparked heated debates, with many critics focusing on the way the study represented technological change. These controversies intertwined technical discussions on modelling choices, data or robustness of the results with philosophical considerations on the inevitability of civilisational progress. In addition to the competing global models developed soon after World3 (see next section), two critics of The Limits to Growth's modelling of technological change were especially prominent at the time: the Science Policy Research Unit (SPRU) at the University of Sussex and the American economist William Nordhaus.
SPRU was an interdisciplinary research group founded by the British economist Christopher Freeman and is known as the birthplace of innovation studies. SPRU researchers were the first to have access to the technical documentation of the World models, on which they based their detailed critique, Models of Doom.Footnote 24 In 1973, Nordhaus, then a young economist at Yale University, penned a strong critique of the methodology of Forrester's World Dynamics in the Economic Journal, to which Forrester replied. The two articles and the mutual incomprehension that they exhibited made clear the methodological gulf that separated systems dynamics from mainstream economics. For example, Nordhaus proclaimed he would ‘use standard economic terminology rather than Forrester's rather vague and often confusing appellations’.Footnote 25
SPRU and Nordhaus approached The Limits to Growth from different standpoints, but both pointed out how the outcomes of the World models were overdetermined by input assumptions and overlooked the adaptability of modern society – including adaptation via technological innovation. According to Marie Jahoda from SPRU, the model needed ‘an extra variable – man’.Footnote 26 In World3, there were no market mechanisms, no prices, no substitutability between resources or technologies – in short, none of the mechanisms used to represent flexibility in economic models. Researchers from SPRU, in particular, interpreted this absence of adaptability mechanisms as a way to constrain the model to ‘overshoot and collapse’ trajectories suited to the Club of Rome's message. They demonstrated that the way the capital subsystem was represented in the World models made ‘overshoot and collapse typical modes of behaviour of the computer model’, not least because by design ‘it excludes the possibility of adaptive flexible responses to changing circumstances’.Footnote 27 ‘It is difficult’, they wrote, ‘to understand the reluctance of the MIT modellers to allow for adaptability, except in terms of their prior [political] commitments’.Footnote 28 Researchers from SPRU considered technological change as one dimension of this adaptability.
Nordhaus similarly attributed the lack of flexibility in World Dynamics to an assumption that ‘human society’ is incapable of adaptation, which he believed was empirically false. He wrote:
The assumption of prices fits in the same mould as that about population, technology, and resources: human society is a population of insentient beings, unwilling and unable to check reproductive urges; unable to invent computers or birth control devices or synthetic materials; without a price system to help ration scarce goods or motivate the discovery of new ones.Footnote 29
These criticisms were not just about different perspectives on social and technological change or about disciplinary biases; rather, they were attacks on the model's foundations. Criticism of the poor representation of market mechanisms, capital and technological change went hand in hand with criticism of the model's weak empirical foundation (largely due to a lack of suitable data) and its disregard of economic theory.Footnote 30 Both SPRU and Nordhaus carried out sensitivity analyses and found that small changes to the rate of technological change made the model behave differently – even to the point of averting civilisational collapse.
However, the choice not to rely on technological progress as a vector of adaptability was explicit and, in fact, central to the very purpose of The Limits to Growth. Historian Elodie Vieille Blanchard has previously analysed the Club of Rome's perspectives on technology, noting that The Limits to Growth combined a conception of technology as a transformative, threatening global force with a vision of technology as a quick fix unable to address environmental issues.Footnote 31 This is reflected in the way technology was incorporated in the model design.
While technologies and technological change were not directly represented in the World models, their authors’ perspective on technology shaped the models in at least two ways. First, Meadows and her co-authors took an explicit stance against technological optimism:
We have felt it necessary to dwell so long on an analysis of technology here because we have found that technological optimism is the most common and the most dangerous reaction to our findings from the world model . . . Faith in technology as the ultimate solution to all problems can thus divert our attention from the most fundamental problem – the problem of growth in a finite system – and prevent us from taking effective action to solve it.Footnote 32
In the report, as well as in the design of the model, technological solutions were thus deemed to create new problems and reinforce the feedback loops leading to collapse, a choice that Meadows and her co-authors emphasised in their ‘Response to Sussex’.Footnote 33
Second, while there was no explicit representation of technology in the World models, in their ‘Response to Sussex’, Meadows and her co-authors explained that they had ‘[built] technological change into each relationship’ in the computer model and that they did so ‘by assigning possible technologies to three categories: those that are already feasible and institutionalised, those that are feasible but not institutionalised, and those that are not yet feasible’.Footnote 34 The latter were not included in the model at all because of an unease ‘with the idea of basing the future of our societies on technologies that have not yet been invented and whose side effects we cannot assess’.Footnote 35
In the rifts between Meadows’ team and both Nordhaus and SPRU, assumptions about technological change could not be disentangled from conceptions of society. Approaches to technological change in the debates were a materialisation of beliefs regarding the adaptability of societies rather than the result of an inquiry into the evolution of specific technologies. Technology was viewed, as Meadows put it, ‘as a social phenomenon’, be it a resource or a threat.Footnote 36
Global Computer Modelling, Technological Change and ‘the World Problématique’
In the wake of The Limits to Growth, several international organisations commissioned studies based on global models to address concerns over the physical limitations to economic development (e.g., non-renewable resource depletion, environmental deterioration, population growth). They particularly considered two dimensions of what the Club of Rome called the ‘global problématique’: tensions around energy resources after the first oil shock and the rebalancing of North-South relations in the context of the ‘New International Economic Order’ (NIEO).Footnote 37 These studies examined the conditions for the emergence of a harmonious world in which technological progress would be a driver of socio-economic structural change in the global North and South.
Three successive reports were particularly influential in debates on global issues: the Argentinian Bariloche Foundation's Catastrophe or New Society? (1976), commissioned by the International Labour Organisation (ILO), Wassily Leontief's UN-commissioned report on The Future of the World Economy (1977) and the OECD project Interfutures (1978, 1979).Footnote 38 While they started with different perspectives on global issues, they all challenged the pessimistic vision of The Limits to Growth. According to these three reports, the future in a ‘business as usual’ scenario was gloomy, but it was still possible to avoid collapse.Footnote 39
The three studies offered contrasting futures to steer the world towards: Catastrophe or New Society? promoted self-sufficient economic growth in the global South, while The Future of the World Economy and Interfutures offered globalisation as a ‘win-win’ solution for both the global North and South.
The report Catastrophe or New Society? adopted a South-centric perspective and dependency theory to refute the dramatic conclusions of The Limits to Growth. It started with the assumption that the needs of the Third World could be satisfied without breaching environmental limits. The report then explored the conditions under which the developmental needs of countries in the global South could be combined with an ‘egalitarian, fully participatory, non-consuming’ type of society.Footnote 40 The computer model developed by Amilcar Herrera's team focused on how to meet basic needs (access to food, shelter, water, energy, etc.) in Southern countries over the following decades.Footnote 41 The model optimised life expectancy in each region while also fulfilling needs in terms of education, urbanisation and food. On this basis, the report concluded that another developmental path was possible, one that did not rely solely on maximising GDP, but rather on reducing ‘wasteful consumption’ and distributing basic goods and services equally. Technology, however, would be essential to reach even these modest goals. The report presented a simulation without technological change, in which the possibility of the South meeting its basic needs was considerably limited; by contrast, in the scenario with technological progress, production would grow faster than population and thus raise living standards.Footnote 42
The report on The Future of the World Economy was a commission from the UN to Nobel prize-winning economist Wassily Leontief. It followed the adoption by the United Nations General Assembly, in May 1974, of development objectives for Southern countries within the so-called ‘second development decade strategy’ and of the Declaration on the Establishment of a New International Economic Order, which aimed to rebalance North-South relationships.Footnote 43 Leontief's task was to elaborate future roadmaps to reduce socio-economic inequalities between the global North and South.Footnote 44 Leontief had experience in the study of interactions between the economy and the environment.Footnote 45 He also had a long-standing relationship with the UN.Footnote 46 The study aimed to investigate how the growth objectives of the International Development Strategy and of a NIEO would impact the North-South economic gap, the availability of natural resources, food production and the mitigation of pollution.
The Future of the World Economy was based on the input-output method developed by Leontief in the 1940s to describe the economy. It extended the formal representation of flows between sectors (e.g., transport, energy, buildings, agriculture) to account for the interactions between the economy, pollution and resources. While this modelling approach differed from the one used in Catastrophe or New Society?, it reached similar conclusions. Leontief stressed that ‘there are no insurmountable physical barriers to the accelerated development of developing regions in the 20th century’.Footnote 47 For instance, feeding a rapidly increasing population in developing regions could be solved by cultivating large areas of unexploited arable land, on the basis of optimistic assumptions about potential agricultural yields stimulated by fertilisers and extensive use of machinery. Pollution was also considered ‘a technologically manageable problem, and the economic cost of keeping pollution within manageable limits is relatively high but not unbearable’.Footnote 48 According to the report, the main challenge was to ensure the transition of Southern countries to a higher level of development, an approach that followed Walt Rostow's model of growth.Footnote 49 Departing from the Bariloche report's self-sufficient development perspective, the report highlighted economic interdependence as a major enabling condition for development. It considered technological change mainly insofar as technological transfers from the North to the South could foster industrialisation without excessively increasing foreign debt.
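In its textbook form (our notation; Leontief's world model disaggregated this structure into multiple world regions and appended rows and columns for pollution and resources), the input-output method determines the gross output of each sector from final demand through a matrix of technical coefficients:

$$ x = Ax + d \quad\Longrightarrow\quad x = (I - A)^{-1}d, $$

where x is the vector of sectoral outputs, A the matrix of inter-industry input coefficients (what each sector requires from every other sector per unit of output) and d the vector of final demand. Technological change enters such a framework as changes in the coefficients of A.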
Two years later, the Interfutures study further emphasised economic interdependence based on the market, on technological progress and on technological transfers as a way to organise North-South relationships. In 1976, the OECD set up the Interfutures research committee. Headed by Jacques Lesourne, a French civil servant and futurologist (prospectiviste), the committee explored different pathways for long-term world development as well as the interdependence between the global North and South. While the Leontief report aimed for a relatively consensual approach to North-South relationships, the Interfutures report explicitly sought to help Western countries develop strategies to adapt to interdependence by taking control of it, in response to the Third-Worldism of the Bariloche report or to the environmentalism of The Limits to Growth.Footnote 50 A range of scenarios, assuming different configurations of cooperation between Northern and Southern countries (from full cooperation to opposition and protectionism) and different societal expectations about the type of economic growth, was evaluated using a modified version of the global computer model developed by the System Analysis Research Unit (SARU) at the UK Department of the Environment. The report concluded that a scenario based on growth, on international free trade and on a persisting productivist development model was the best option. The Interfutures study viewed technological transfer from North to South as one way of integrating the South into the liberal economic order.
These three reports presented contrasting positions on the relations between the global North and South and reflected different understandings of the role of technological change therein. Despite their contributions to addressing complex and systemic issues, a closer look into the computer models reveals a gulf between the ambitions of these global futures studies and the formal representation of technological change in the models they used.Footnote 51 The formal representation of the mechanisms linking technological change with other key aspects of development, such as land availability, the (re)location of industrial activities or changes in behaviours, remained a weak point in these exercises.
In these three reports, technological change was not a topic of inquiry in itself; rather, it was one element in the global economic order. The reports presented contrasting views of how technological change could be made to influence development and North-South relationships. Yet, despite their very different standpoints, the authors of these reports all relied on similar tools to integrate technology as a variable in their models.
Instead of developing new methods to represent the link between technological change and development, the three studies drew from the neoclassical toolbox of economics to represent the economy. They used production functions and synthetic coefficients and articulated them in an ad hoc manner. In a Cobb-Douglas production function, the combination of capital (e.g., machinery) and labour (e.g., workers) needed to produce a given quantity of goods served as a proxy for technological change: the more that was invested in new, efficient technologies, the less labour was needed.
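In schematic form (our notation, not an equation reproduced from any of the three reports), such a production function can be written as

$$ Y(t) = A(t)\,K(t)^{\alpha}\,L(t)^{1-\alpha}, \qquad A(t) = A_{0}e^{gt}, $$

where output Y is produced from capital K and labour L, the exponent α governs the substitution between the two, and the synthetic coefficient A(t), growing at an exogenous rate g, absorbs technological change as a residual improvement in overall productivity.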
Yet, this type of production function (also used in Robert Solow's growth modelFootnote 52), combined with a synthetic exogenous coefficient encapsulating the relation between technological change (substitutability between technologies) and the productivity of the economy, did not permit the explicit formal representation of the changes in living standards (e.g., the relationship between technological change, land-use planning and demand dynamics) that were nonetheless assumed in the scenarios.Footnote 53 Nor did any of the three computer models specify the impact of changes in relative prices on the choice of technologies.Footnote 54
The use of conventional neoclassical tools can be interpreted in two ways. On the one hand, it illustrated how adaptable these economic tools were in modelling global issues. Though somewhat paradoxical in the case of the self-sufficient approach of the Bariloche report, the use of a Cobb-Douglas production function relying on a substitution between capital and labour was justified there as a ‘very important [characteristic], particularly for the underdeveloped countries, where it is essential to be able to substitute capital with labour’.Footnote 55 On the other hand, these computer models reflected the then limited formal representation of economic mechanisms (e.g., the role of prices), especially as applied to technological change. This was less a problem of skills in the modelling teams than a reflection of the gaps in economic research at that time.
The IIASA's Energy Systems Program: A Turning Point?
One of the last futurological reports of the period, Energy in a Finite World, was released in 1981, at the end of the first golden age of global computer modelling.Footnote 56 It summarised the results of the IIASA Energy Systems Program, started in 1973 and led by Wolfgang Häfele.Footnote 57
The Energy Systems Program was a constellation of studies, collected in the final report and brought together by the narrative presented by Häfele. This narrative highlighted the computer-generated energy scenarios as one of the running threads and main contributions of the programme. The Energy Systems Program was an extremely ambitious modelling exercise. In addition to its modelling framework and scenarios, it included a detailed study of patterns of energy substitution based on the analysis of historical data as well as one of the first analyses of the relationship between different forms of energy production, CO2 emissions and the climate.
As Eglė Rindzevičiūtė has shown, IIASA established itself as an international hub for systems analysis soon after its founding in 1972.Footnote 58 The number of contributors to its first report far exceeded that of previous global computer modelling exercises. Some 148 ‘members of the IIASA energy systems program’ representing a large range of disciplines, nationalities and viewpoints had spent time at IIASA between 1972 and 1981. They included Amory Lovins and Dennis Meadows, who later voiced strong criticisms against the IIASA computer models and scenarios (see below).Footnote 59
The goal of the IIASA Energy Systems Program, as presented in the introduction to Energy in a Finite World, was ‘to understand and to conceptualise by qualitative and quantitative means the global long-range aspects of the energy problem’ and ‘to view [the different aspects of the energy problem] as an integral part of an overall pattern’. While its ambition to provide ‘a long-range, global view of the problems facing civilization’ was in keeping with the overarching, civilisational outlook of 1970s futurology, Energy in a Finite World was primarily interested in energy.Footnote 60 It adopted a more technical and resource-focused perspective. Indeed, as the authors explained, ‘the focus of our study was principally on the natural science aspects of the energy problem, and our methods were primarily those of engineering and economics’.Footnote 61 The explicitly non-political stance of the report can be understood as part of the construction of the neutrality of IIASA as an institutional bridge between East and West.Footnote 62 ‘Quite simply’, the institute itself declared, ‘IIASA is an international institute that seeks to provide a service to its national member organizations, and in this case that service takes the form of clarifying a factual basis upon which certain political issues might be settled’.Footnote 63
In contrast with previous global computer modelling exercises where model-building and normative standpoints were explicitly intertwined, the IIASA energy study used modelling and the methods of engineering and economics to elicit ‘a factual basis’ to the ‘energy problem’. This constructed the energy system as something that could at least partly be studied and understood without considering political and economic organisation (again, this can be understood in light of IIASA's neutrality). When it comes to the consideration of technological change, the Energy Systems Program's ambition for objectivity manifested itself in two contrasting ways. On the one hand, researchers at IIASA developed a modelling framework that was supposed to limit the influence of the authors’ bias on the results. This did not fully succeed, because, as critics pointed out, input assumptions over-determined the results. The assumptions on technologies and technological change in the modelling framework, especially, were largely based on guesswork reflecting their authors’ visions. On the other hand, the less publicised study of energy substitution patterns by Cesare Marchetti and Nebojša Nakićenović marked a first attempt at characterising technological change as a distinct phenomenon that could be grasped and predicted independently of considerations on society or politics.
The most discussed part of the Energy Systems Program was the set of scenarios exploring energy paths to 2030. The report presented two detailed scenarios (High and Low) and three less detailed alternative scenarios. To produce them, researchers at IIASA used three computer models:Footnote 64 a static accounting model, MEDEE, to estimate final fuel consumption; a linear programming model of energy supply, MESSAGE; and an input-output model, IMPACT, to estimate the direct and indirect costs of the energy supply scenarios. The three models were intended to run in a feedback loop that would allow them to self-adjust.Footnote 65 By the time the report was published, the computer models were not yet completely documented. The feedbacks between the three models were still operated manually, though the procedure was reportedly being streamlined. Crucially, in this first study, the modelling framework relied on expert guesses and informal assumptions for its inputs.
The debates that followed the publication of Energy in a Finite World show that assumptions about future technologies were still a major point of contention. The critiques by Dennis Meadows, Amory Lovins, Bill Keepin and Brian Wynne – four people who spent time at IIASA in the 1970s – all challenged, in different ways, the neutrality of the IIASA scenarios in terms of technology choices. The results, they argued, were determined by the worldviews of the program leaders rather than derived from the models.
In a review of the IIASA scenarios, Meadows noted that the modelling framework lacked consistency (because its three models were each based on a different modelling paradigm) and was too complex to be grasped by a single analyst, thereby questioning its capacity to address global issues. More importantly, he remarked that the modelling framework was ‘more an accounting system than a forecasting device’.Footnote 66
Keepin, who was working at IIASA during the energy program and thus had had access to the computer models, took this criticism further.Footnote 67 His sensitivity analyses suggested that the IIASA modelling framework did not, in fact, do much more than reproduce the assumptions fed into it. As a result, Keepin and Wynne described the IIASA modelling framework as ‘highly unstable and based on informal guesswork’, a feature which, they argued, was only made worse by the discrepancy between what the model was said to perform in the institute's many publications and talks and what the model could actually do.Footnote 68
If, as Meadows, Keepin and Wynne suggested, the scenarios were overdetermined by their assumptions, the discussion could turn towards the modellers’ choices of inputs, which, by IIASA's own admission, were largely informed guesses. Keepin questioned the representation and role of specific technologies in the IIASA scenarios, especially nuclear and fast-breeder reactors: he found that the predicted evolution of the energy mix towards a high share of fast-breeder reactors hinged on the (unexplained) assumption of a sharp drop in their costs around 2000.
In this context, criticisms based on the rejection of Häfele's visions of the future of energy, such as those of Lovins, gained salience. Lovins’ vision of ‘soft energy paths’ was firmly opposed to the centralised, high-energy-demand future envisioned in the IIASA scenarios.Footnote 69 He dismissed the IIASA group's assumptions as biased by their preferences for some technologies over others: their economic assumptions, he wrote, ‘are in general optimistic – estimating costs well below empirical levels – for technologies they like (such as nuclear power) and pessimistic – estimating costs well above empirical levels – for those they like less or know less about (such as appropriate renewables)’.Footnote 70 In other words, the IIASA computer models, despite their ambition for sophistication and self-correction through feedback loops, did not disconnect the scenarios from their authors’ assumptions and inclinations, but merely reproduced those assumptions. Beyond that, as Meadows pointed out, the models did not help in addressing fundamental disagreements such as those between Häfele and Lovins. For Meadows, the IIASA modelling framework could not ‘in anything like its current form, be used to prove whose view is more “true”. Nor can the modelling system point out the differences in policy impacts that are really important to Lovins and Häfele: these differences are not quantified within the model relationships’.Footnote 71 The complexity of the modelling framework, instead of increasing the objectivity of its results by separating them from the modellers’ guesses and worldviews (as was claimed in many publications), reinforced those assumptions by giving them an appearance of objectivity and neutrality.Footnote 72
However, Energy in a Finite World was not limited to the energy scenarios and the sparsely documented assumptions on technology that underpinned them. The Energy Systems Program also included an effort towards the formal analysis of patterns of change in energy resources and technologies. In contrast to the scenarios, this work was praised even by critics of the modelling framework.Footnote 73 Marchetti and Nakićenović relied on industrial economics, especially on Edwin Mansfield's theory of technological substitution, to conduct a systematic analysis of the evolution of the energy system as a regular, predictable phenomenon whose patterns and dynamics could be characterised in terms of ‘substitution rates’, ‘buildup rates’ and ‘takeover times’.Footnote 74 Their analysis of market penetration and energy technology substitution was based on an extensive study of historical data, comprising ‘three hundred individual cases concerning different energy subsystems in sixty different data bases encompassing thirty countries, over a long historical period from 1860 to 1975’.Footnote 75 This detailed empirical analysis of technological change in the energy system contrasted with the overarching, sometimes philosophical stances on technological change that prevailed in the 1970s debates on global computer models. While Marchetti and Nakićenović drew on economics, their reliance on historical data about existing technologies also distinguished their analysis from neoclassical models of technological change, such as Nordhaus’ notion of an unspecified ‘backstop technology’ that would provide infinite energy supply. By bringing in concepts developed by industrial economists to study competition at the firm level, Marchetti and Nakićenović developed a theoretical and methodological repertoire for the study of technological change and its inclusion as a variable in global computer modelling. These tools made it possible to formalise technological change as a phenomenon that could be observed, quantified and predicted using empirical data. Nakićenović, in particular, pursued this work further throughout the 1980s and 1990s.
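At the core of this approach was the logistic substitution model. In its simplest one-competitor form (a stylised rendering in our notation; the full model handled several energy sources competing at once), the fractional market share f of an energy technology satisfies

$$ \ln\frac{f(t)}{1-f(t)} = \alpha t + \beta, $$

so that historical market shares plot as straight lines on a logistic scale, and the ‘takeover time’ – conventionally, the time needed for a technology to grow from 1 per cent to half of the market – is fixed by the slope, at roughly ln(99)/α.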
‘Endogenising’ Technological Change: IAM Computer Modelling and the Evaluation of Climate Change Mitigation Strategies
While global computer modelling had somewhat lost steam at the beginning of the 1980s, the emergence of climate change on the international agenda in the late 1980s revived it. New kinds of global computer models were developed and used to assess the techno-economic implications of global greenhouse gas emissions pathways compatible with Article 2 of the UN Framework Convention on Climate Change adopted in 1992, whose objective was the stabilisation of atmospheric greenhouse gas concentrations.Footnote 76 Technological change became a key research topic in economics and energy research, as the availability, development and diffusion of low-carbon technologies were expected to affect carbon abatement. It was one of the main areas for improvement identified during academic conferences gathering energy and economic modellers in the 1990s, such as the workshop held at IIASA in October 1993 on integrated modelling.Footnote 77
The precursors of IAMs initially represented technological change as an exogenous variable. The need to ‘endogenise’ technological change, that is, to represent it as a dependent parameter determined within the computer models, emerged when dealing with the issue of the timing of climate action. To adhere to the UNFCCC's Article 2 and the precautionary principle (which was also included in the UNFCCC), energy experts interpreted the timing of climate action in a sequential decision framework. In other words, the question was: given uncertainties regarding the scope of climate change and technological progress, should the fight against climate change begin early on or be delayed, and what were the costs of either decision?
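A toy two-period calculation – hypothetical numbers, not drawn from any of the models discussed here – illustrates the structure of this framing:

```python
# Toy two-period sequential decision under uncertainty. All numbers are
# hypothetical; they only illustrate the structure of the timing debate.

p_severe = 0.5            # probability that climate change proves severe
cost_act_now = 100.0      # known cost of early, substantial abatement
cost_later_cheap = 40.0   # later abatement cost if technology improves
cost_later_dear = 180.0   # later abatement cost if delay causes lock-in

# Expected cost of waiting for better information before acting.
expected_cost_wait = p_severe * cost_later_dear + (1 - p_severe) * cost_later_cheap

print(f"act now: {cost_act_now:.0f}, wait: {expected_cost_wait:.0f}")
# Which strategy looks cheaper depends entirely on the assumed technology
# dynamics - hence the weight placed on how models represent technological change.
```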
Academic discussions on the role of technological change in the fight against climate change were stimulated by the discussion of optimal greenhouse gas abatement pathways. As the main global emitter, the United States was at the forefront of international climate negotiations but relatively behind when it came to action.Footnote 78 US economic expertise of the time partly reflected this position.Footnote 79 In the so-called ‘WRE paper’ (named after its authors’ initials) published in Nature in January 1996, three US energy-climate experts and modellers, Tom Wigley, Richard Richels and Jae Edmonds, mobilised technological change as an argument to defer substantial climate action. This position was similar to that of the US government in international climate negotiations.Footnote 80
Drawing upon the results from IAMs such as DICE, developed by Nordhaus, and MERGE2, developed by Richels and another pioneer of integrated assessment modelling, Alan Manne, the WRE paper concluded that technological advances would bring prices down, avoid accelerated and costly replacement of productive capital stocks (buildings, plants, machine tools, etc.) and make greenhouse gas emissions abatement cheaper in the future. It was therefore preferable to delay substantial abatement (though not indefinitely) and focus instead on technological research and development.
Part of these conclusions followed from the way the MERGE model represented technological change. All technological developments were assumed to occur independently of emissions abatement efforts and market conditions. An exogenous factor, the Autonomous Energy Efficiency Improvement (AEEI) coefficient, served as a proxy for the decoupling between GDP and energy consumption over time and for the underlying progress towards more energy-efficient technologies. In most IAMs of the time, this coefficient scaled the total productivity of the capital, labour and energy inputs of a Cobb-Douglas production function: the higher the AEEI, the greater the assumed productivity gains.Footnote 81 Yet, the value of the AEEI coefficient was shown to align poorly with historical data.Footnote 82
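In stylised form (our notation, not the exact MERGE equations), the AEEI makes the energy intensity of the economy decline at a fixed exogenous rate γ, regardless of prices, policies or abatement effort:

$$ \frac{E(t)}{Y(t)} = \frac{E_{0}}{Y_{0}}\,e^{-\gamma t}, $$

where E is energy consumption and Y economic output. Under such a formulation, no amount of early abatement effort can accelerate the improvement of energy efficiency.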
More fundamentally, for energy economists such as Michael Grubb (an expert in energy innovation who was particularly critical of WRE's conclusions), the representation of technological change as an autonomous force was based on a misunderstanding of the role of markets and policies in the development of technologies. Technological change was not a ‘manna from heaven’, Grubb argued, and WRE overlooked methodological innovation in economics towards models of induced technological change.Footnote 83 To support their criticism, Grubb and other energy economists referred to Kenneth Arrow's 1962 conceptualisation of ‘learning-by-doing’. This concept encompasses the learning effects related to the improvement of the production process, to the design of a technology and to reduced production costs. The cost of a technology is then supposed to fall, and its performance to improve, as cumulative experience with it grows. In the context of negotiations on the Kyoto Protocol, the more precise account of the economic patterns of climate change, technological lock-in and inertia in technology development and diffusion that Grubb advocated provided arguments for early climate action in the academic debate.Footnote 84
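The canonical formalisation of such learning effects is the one-factor experience curve (a standard rendering in our notation, not a formula quoted from Grubb or Arrow), in which unit cost C falls as a power law of cumulative production x:

$$ C(x) = C_{0}\left(\frac{x}{x_{0}}\right)^{-b}, \qquad \mathrm{LR} = 1 - 2^{-b}, $$

where the learning rate LR is the fractional cost reduction obtained with each doubling of cumulative production. Unlike the AEEI, cost here depends on deployment, so early abatement effort feeds back into cheaper technology.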
In the 2000s, the evolution of climate negotiations and the ensuing demand for scenarios exploring the techno-economic conditions for staying in line with the 2°C target fostered efforts towards the integration of technological change patterns in IAMs. This was meant to increase the policy relevance of IAMs.Footnote 85 Refining how technological change was represented in IAMs was high on the modelling agenda from the mid-1990s onward. This topic fostered dialogue between energy and economic modellers.Footnote 86 By the mid-2000s, most IAMs included explicit and complex mechanisms of technological evolution induced by pressure on prices or by research and development policies (e.g., learning-by-doing, learning-by-searching). The IPCC's Fourth Assessment Report in 2007 showcased how much more sophisticated the representation of technological change had become compared to the Third Assessment Report six years earlier.Footnote 87
These developments mostly took place during Model Intercomparison Projects (MIPs) focused on improving the representation of technological change in IAMs.Footnote 88 Among these projects, the two-year-long nineteenth study of the Energy Modeling Forum (EMF) addressed the endogenisation of technological change into IAMs. The project gathered the most prominent integrated assessment modelling teams of the time and resulted in the publication of a special issue of Energy Economics.Footnote 89
IIASA also continued to orchestrate discussions between energy and economics modellers, building upon its longstanding position as a hub for global systems research. As leading figures in IIASA energy modelling programmes, Nakićenović and Arnulf Grübler were active participants in the debates within the IAM community during the 1990s. In close collaboration with Nordhaus, they organised three workshops dedicated to the theoretical challenges of, and first attempts at, endogenising technological change in IAMs, which took place at IIASA in June 1997, March 1998 and 1999.Footnote 90
In this context, Nakićenović's work with Marchetti and later with Grübler, for instance on technological substitution and technological learning curves, was a reference for energy modellers in their efforts to represent technological change as an endogenous factor in IAMs.Footnote 91 It served as input for the joint development of scenarios using IIASA's in-house model MESSAGE in collaboration with the World Energy Council.Footnote 92 Grübler and Andrei Gritsevskii also developed a model of endogenous technological change which influenced further developments in IAMs.Footnote 93 Overall, IAMs have followed two main directions: either they represent the impact of generations of investment in R&D, or they integrate more in-depth learning-by-doing mechanisms, with the cost of a technology falling as its installed capacity accumulates (e.g., through a learning rate, the rate of cost reduction associated with each doubling of accumulated production), as in the sketch below.
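A minimal sketch of the second strategy (hypothetical numbers, not taken from MESSAGE or any other model) shows how endogenisation creates a feedback loop between deployment and cost:

```python
import math

def unit_cost(capacity, c0=1000.0, x0=1.0, learning_rate=0.20):
    """Unit cost falls by `learning_rate` with each doubling of cumulative capacity."""
    b = -math.log2(1.0 - learning_rate)   # experience-curve exponent, LR = 1 - 2**-b
    return c0 * (capacity / x0) ** (-b)

capacity = 1.0                            # cumulative installed capacity (arbitrary units)
for year in range(2000, 2051, 10):
    cost = unit_cost(capacity)
    additions = 500.0 / cost              # toy feedback: cheaper technology deploys faster
    capacity += 10 * additions            # one decade of new capacity
    print(f"{year}: unit cost = {cost:6.1f}, cumulative capacity = {capacity:7.1f}")
```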
Climate change and the evolving agenda of international climate negotiations have therefore stimulated intense efforts to model and assess the role technological change could play in low-carbon pathways. Integrated assessment modellers have developed new methodological ways to represent technological change in global computer models. In this process, technological change became formalised as a techno-economic parameter. Efforts towards its endogenisation within global IAMs were a nodal point for dialogue between different approaches to global computer modelling, especially between economists and energy computer modellers. These evolutions relied to some extent on the work on the dynamics of technological change carried out at IIASA in the 1980s, and in that respect they are heirs of the IIASA Energy Systems Program. Whereas in the 1970s debates about the role of technological change in global issues intertwined philosophical considerations with the practicalities of emerging computer modelling techniques, in the 1990s discussions narrowed down to more technical, model-related matters. The emphasis was now on achieving a more detailed representation of the respective roles of the state and the market in driving technological change, as global modellers directed their attention towards specific, policy-relevant questions – chiefly the timing of greenhouse gas emissions abatement measures.
Conclusions
This article has retraced how global computer modellers have viewed and simulated technological change in their models since the publication of The Limits to Growth in 1972.
From this history, three main conclusions can be drawn. First, the notion of technological change, as well as its understanding as a predictable parameter affecting the future of society, was not a given. In the history of global computer modelling since the 1970s, it is possible to identify at least four distinct avatars of technological change: a proxy for human adaptability; a dimension of socio-economic changes in the world order; a measurable, predictable phenomenon; and a techno-economic parameter in computer models.
In the debates on The Limits to Growth, technological change was used as a proxy for human adaptability. In the Malthusian outlook of Donella Meadows and her co-authors, it was not possible to rely on technological change as a way out of crises; their critics, by contrast, considered that this outlook overlooked humans’ capacity for change and adaptation. In post-Limits to Growth global models, technological change was rather considered as a dimension of the international order, with a focus on North-South economic relations. The results from these global models offered rather optimistic visions of humanity as able to adapt to environmental and socio-economic constraints, but starkly contrasting views on how to organise North-South relations. Their authors attempted to conceptualise the role of technological change and technology transfers (among many other factors) in economic development. However, they all used a similar set of tools to integrate technology in their models of the economy, tools which did not result from in-depth inquiry into the evolution of technologies but relied on the available toolbox of neoclassical economics. It was only during the IIASA Energy Systems Program that a conception of technological change as a phenomenon that could be studied empirically and predicted emerged. This was in line with IIASA's attempt to frame the energy problem in a neutral, apolitical way. Finally, in the 1990s and 2000s, with the emergence of IAMs and the shift in focus of global modelling towards climate change, technological change was formalised as a parameter in the models. Discussions then became increasingly specialised and disconnected from considerations about social change.
Second, the formal representation of technological change itself in computer models has been a matter of technical innovation. Technological change progressively emerged and was structured as a specific category and an object of academic interest, which suggests a professionalisation of the methodologies used to study it. In the first global exercises of the 1970s, it was represented with limited formalisation, as an external dynamic embedded into proxies and coefficients. The normative ambitions of the 1970s global modelling studies contrasted with their limited innovation in modelling techniques: either because the data were then sparse, or because the computer models mostly relied on the toolbox of neoclassical economics, their representation of technological change remained quite limited. The IIASA Energy Systems Program was one of the most impressive attempts at formalising and modelling the dynamics of technological change: in Marchetti and Nakićenović's analysis of market penetration and substitution patterns, technological change appeared as a distinct phenomenon with its own quantifiable dynamics. This perspective influenced the renewal of global modelling driven by the emergence of IAMs to evaluate greenhouse gas emissions pathways from the 1990s onward. Integrated assessment modellers developed a more detailed formal representation of technological change to address questions related to climate change, especially the timing of greenhouse gas emissions reductions. Their work was shaped by the interactions between energy and economic perspectives on technological change. The demand for assessments of mitigation strategies related to the agenda of the international climate negotiations drew attention to the connection between the energy and economic dynamics. This spurred efforts to endogenise technological change within IAMs. IIASA was involved in these efforts, due to the continuous work of its researchers on the historical development of technologies since the 1970s Energy Systems Program.
Third, the development of a more sophisticated, formalised conception of technological change in the 1990s and 2000s suggests that the turn of global computer modelling towards energy and climate policy issues was accompanied by a disconnection from the explicit discussion of conflicting worldviews. Contrary to the 1970s global exercises, where the legitimacy of technological change was a core philosophical issue intimately linked to visions of change in society, the role of technological change in the assessment of climate strategies raised limited political discussion (even though the conclusions of the assessments had some influence on climate politics). Despite their ambitions for scientific neutrality, the scenarios of IIASA's Energy Systems Program in the 1980s were also criticised as mere reflections of the assumptions of their authors.
In the 1990s and 2000s, the main focus of global computer modelling was on the timing of action in a context of uncertainty about climate change and technology availability. As technological change became increasingly formalised as a quantifiable parameter in IAMs, attention to its relations to social change dwindled. Whereas in the controversy around The Limits to Growth, technological change was explicitly understood as a social phenomenon, the debates around its endogenisation in the 1990s treated it rather as an academic object, without open confrontation of the assumptions underpinning its formalisation and of their political implications.
However, as climate objectives today are becoming increasingly ambitious, debates around climate mitigation pathways have brought these assumptions about technological change back into the limelight. Negative emissions technologies, especially the combination of bioenergy with carbon capture and storage (BECCS), now have a substantial place in IAM scenarios, despite not being readily available at scale. The IAMs’ reliance on such technologies has provoked debates on the assumptions and choices underlying the models – and especially their assumptions regarding technological change.Footnote 94 At the same time, calls for low-demand and degrowth emissions pathways are reigniting discussions about the role of technological optimism in framing our view of the future.Footnote 95 Technological change is, once again, discussed as part of social change.
Acknowledgements
This research was supported by the Chair MPDD (Prospective modelling for sustainable development) headed by Ponts ParisTech and Mines ParisTech. The authors are grateful to Julia Nordblad and Troy Vettese for their invitation to contribute to this special issue and for their insightful comments on previous versions of the manuscript. We also thank three anonymous reviewers for their generous engagement with our manuscript, as well as several colleagues, including Kristin Asdal, Liliana Doganova, Stine Engen, Antonin Pottier and Marie Stilling Hansen.