
Chapter 1 - Talkin’ Bout a Revolution

A Paradigm Shift

from Part I - Setting the Stage

Published online by Cambridge University Press:  20 March 2025

Falk Huettig
Affiliation:
Max-Planck-Institut für Psycholinguistik, The Netherlands

Summary

The beginning of the third millennium, starting in the early noughties and increasing in strength throughout the 2010s, has seen a large shift in theoretical focus in the mind sciences. In what might be called the predictive revolution or the predictive turn, many researchers in the psychological and brain sciences have come to consider the human mind a ‘predictive engine’ or ‘prediction machine.’ Like its predecessor, the cognitive revolution, more than half a century before, the predictive revolution is grand in ambition. It tries to explain all mental processes within one common framework. In this unified theory, the mind is no longer best explained as an information processor: Minds have become prediction systems. The predictive revolution promises to reconcile cognition and behavior as the intrinsically connected two sides of the same coin serving human interactions with the environment.

In: Looking Ahead: The New Science of the Predictive Mind, pp. 3–11. Cambridge University Press, 2025.

The all-powerful King Louis XVI of France, a descendant of the Sun King of ‘L’état, c’est moi’ (‘The state, that’s me’) fame, held absolute autocratic power, unbounded by a constitution or any other laws curbing his might. That all changed on July 14, 1789, a day that altered the course of world history. Events unfolded that gave Louis XVI his nickname of Louis the Last.

The skyline of Paris was dominated by the eight high towers of the Bastille, a symbol of the complete and unquestioned authority of the French monarchy. The royal fortress and prison held only seven prisoners, but the angry crowd that had gathered before the building had not come for them. They were after one thing: gunpowder. When the prison governor refused to hand it over, the mob stormed the building and took it by force. The governor was dragged away, insulted, and spat on. After repeated beatings, his body was peppered with daggers and bayonets and riddled with pistol shots, his head sawed off with a pocketknife and then paraded on a pike through the streets of the royal capital. Louis XVI and his wife Marie Antoinette were eventually guillotined in Paris in the bloody years that followed Bastille Day, now the national day of France.

The old order had collapsed; the unimaginable had happened. Europe’s monarchs watched in horror and panicked. The French Revolution is remembered for the tens of thousands of people who were executed and murdered, and for the hundreds of thousands who died in the Napoleonic Wars that followed it, but also for the radical political and social transformations it gave rise to. The French Revolution is widely celebrated for sounding the death knell of feudalism in Europe, for the ideals of liberty, equality, and justice it inspired, and for the ‘Declaration of the Rights of Man and of the Citizen’ granting what we now call basic human rights to some common people. There is a consensus among historians that a complex combination of economic, social, and political factors provided the breeding ground for the revolution.

Yet no one had seen it coming.

The sudden rupture of the French Revolution altered how intellectuals thought about historical change. Before the dramatic events in Paris, the word revolution referred only to the cycles of celestial bodies such as the stars and planets as seen from Earth. A revolution denoted a cycling back to a previous situation or condition. After the momentous events in Paris, the term acquired the new meaning of a radical, typically sudden, and unexpected departure from an old order in favor of a new system, approach, and worldview.1

The understanding that change in science is also rarely anticipated and rather sudden came more than a century and a half later. The American physicist and philosopher Thomas Kuhn was the first to notice that science does not progress on a linear and continuous trajectory the way laypeople typically imagine the accumulation of scientific knowledge to occur. His book The Structure of Scientific Revolutions has become one of the most widely cited works in the social sciences.2 Kuhn’s insight was to describe scientific progress as a process of periodic paradigm shifts during which an established conceptual framework about a scientific problem is abruptly abandoned in favor of a new one: A scientific revolution occurs when, during a relatively short period of time, large numbers of researchers adopt a new way of thinking.

1.1 Behaviorism and the Cognitive Revolution

A classic example of a Kuhnian paradigm shift in psychology, the study of human behavior, is the cognitive revolution, which took place in the 1950s.

Before the arrival of cognitivism, behavioral scientists believed that only an immediately observable response to an observable stimulus could safely be interpreted. A stimulus in this regard is any event or thing that produces a reaction. In psychological experiments, assorted loud noises might be presented to participants to measure their varying startle responses. The idea was thus that behavior could be explained by observing an organism’s direct reaction to the stimulus it was presented with. This type of investigation of direct stimulus–response relationships, called behaviorism, was the dominant theoretical approach in psychology in the first half of the twentieth century, especially in the United States.

Behaviorists did not go so far as to claim that things that cannot be observed directly do not exist, but they considered them highly suspect as a topic of scientific study. Behaviorism3 hence rejected not only dualism, the notion that the physical body and a soul can be separated, but also the notion that the physical body and mental phenomena can be investigated separately. Neither the soul nor any mental phenomena, behaviorists argued, can be directly observed, and as a consequence they should not be the subject of scientific investigation.

Behaviorism made many important contributions to psychology because of this directness of access to its object of study, especially with regard to the development of rigorous experimental methods; at a fundamental level, the investigation of direct stimulus–response relationships was good, rigorous psychological science.

As the decades passed, however, the shortcomings of behaviorist approaches became more and more apparent.

During the heyday of behaviorism in America, Europe took a very different approach to understanding human psychology. In Vienna, Sigmund Freud went beyond what could be directly observed. Trying to treat patients with neurotic disorders, he developed his theory of psychoanalysis, which drew attention to the possibility that a lot of human behavior has its roots in the unconscious mind. According to Freud, complex hidden drives determine human thinking and behavior. Mental disorders consequently began to be explained as having their roots in unconscious mental processes, involving for instance the repression of traumatic early childhood memories, rather than in simple stimulus–response processes. Behaviorism had little to nothing to say about these ideas, which very quickly caught the public imagination.

The main reason for the demise of behaviorism as the dominant approach in psychology, however, was the invention of a new technology.

New technologies tend to captivate scientists influenced by the metaphor of the brain as a machine. The history of the scientific study of the brain in fact reveals that the new technologies of the time quickly became the favorite analogies for how the brain might work.4 In the Middle Ages mechanical clocks and hydraulic systems were the most advanced technologies, and scholars likened the functioning and structure of the brain to clockwork and hydraulic power. The invention of the telegraph and the discovery that nerve cells respond to electrical stimulation in the nineteenth century led scientists to consider the idea that the brain works like a telegraph network. The discovery that brain networks consist of neurons and synapses led to thinking about the brain as a flexible telephone switchboard.

In the mid twentieth century the theoretical foundation stones were laid for another technology that would change the world. The American mathematician Claude Shannon was one important influence in this turn of events that would eventually give rise to the cognitive revolution. Shannon discovered that electrical switches can implement formal logic. Incredibly, he did not reach this insight as the culmination of a life’s work; it was his master’s thesis at MIT in 1937. The psychologist Howard Gardner called Shannon’s master’s thesis possibly the most important, and also most noted, master’s thesis of the twentieth century.5

Shannon showed that electrical switches could implement Boolean algebra, a relationship with huge potential for real-world problem-solving that had not been noticed before. Boolean algebra is a form of algebra in which variables, ‘measurable characteristics that can be changed,’ take only the truth values true and false, typically written as 1 and 0. Boolean algebra also makes use of operations: and, the conjunction; or, the disjunction; and not, the negation. By showing that electrical switches can evaluate Boolean expressions, Shannon hence demonstrated that they can implement a formalism for binary logical operations.
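
To make Shannon’s insight a little more concrete, here is a minimal illustrative sketch in Python (my own, not from the chapter): switches wired in series let current flow only when all of them are closed, which behaves like the Boolean and; switches wired in parallel behave like or; and an inverting relay behaves like not. The function names are hypothetical and chosen purely for illustration.

```python
# Illustrative sketch of Shannon's 1937 insight: switch circuits realize
# Boolean algebra. A closed switch is True (1), an open switch is False (0).

def series(*switches):
    """Switches in series: current flows only if ALL are closed -> Boolean 'and'."""
    return all(switches)

def parallel(*switches):
    """Switches in parallel: current flows if ANY is closed -> Boolean 'or'."""
    return any(switches)

def inverter(switch):
    """A normally closed relay: passes current only when its input is off -> 'not'."""
    return not switch

# Truth table of one compound circuit: (a AND b) OR (NOT c)
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            out = parallel(series(a, b), inverter(c))
            print(int(a), int(b), int(c), "->", int(out))
```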

The discovery that electrical circuits can implement formal logic became the underpinning bedrock of all computers and ushered in the digital age. The invention of digital computers, operating on data expressed as binary code, the digits 0 and 1, turned out to be so consequential because of their immense value as information-processing tools. Computer scientists realized that it might be possible to describe any process in the entire universe as information processing, involving several subprocesses including acquiring, inputting, manipulating, storing, retrieving, outputting, and communicating information.

It is not surprising, given this grand theoretical interpretation of the world in terms of the capacity to process information, that scientists rapidly became aware of the many potential similarities between human minds and the new machines. The notion that human minds, just like computers, do not merely respond to environmental stimuli but process information gained popularity and came to be taken seriously by an increasing number of scientists.

The linguist Noam Chomsky for instance claimed that language, similar to the functioning of a computer, is governed by unconscious, abstract rules that enable language users to understand and produce grammatical language. Chomsky also proposed that children have an innate mental capacity, a biological endowment, to acquire language, rather than being the ‘empty vessels’ into which language had to be poured, as the behaviorists claimed.6 Chomsky’s description of language in information-processing terms, in line with the computational theory of mind – the notion that the mind is a computational system – resonated with many behavioral scientists. The suggestion of psychologist Ulric Neisser that humans can interact with the outside world only through intermediary information-processing systems that mediate, for example, between sensory input and motor output7 similarly appealed to a new generation of scientists studying human behavior.

As a consequence of this shift in theoretical framing, psychology became all about understanding how people encode information from the environment and store, retrieve, process, and manipulate it. Cognitive psychological science was meant to explain all of mind and behavior as a consequence of human information processing.

Brain science became all about how brains receive informational input, process that information, and deliver an informational output. The idea took hold among researchers studying the nervous system that neurons in the brain carry some kind of information-transmitting neural code, similar to the succession of dots and dashes in Morse code.8

The computer metaphor hence caused an important shift in the sciences of the mind: a shift away from trying to understand human behavior as a response to something happening in the surroundings of the human being, the stimulus, and toward understanding human behavior as a consequence of human information processing. The conception that minds, just like computers, process information became the dogma of cognitive psychology, and the idea that neurons transmit information the central tenet of the new field of cognitive neuroscience.

1.2 The Predictive Revolution

The beginning of the third millennium, starting in the early noughties and increasing in strength throughout the 2010s, has seen another large shift in theoretical focus in the mind sciences. In what might be called the predictive revolution or the predictive turn, many researchers in the psychological and brain sciences have come to consider the human mind a ‘predictive engine’ or ‘prediction machine.’9 Like its predecessor, the cognitive revolution, more than half a century before, the predictive revolution is grand in ambition. It tries to explain all mental processes within one common framework. In this unified theory, the mind is no longer best explained as an information processor: Minds have become prediction systems.

The view that prediction explains how the human mind works is not as new as the current hype may suggest. The notion has been around for about a thousand years, but it was the German physiologist Hermann von Helmholtz who first articulated clearly that the human brain makes use of predictions to understand the world.

In 1867 von Helmholtz coined the term ‘unbewusster Schluss,’ which translates as unconscious inference, to describe how humans perceive their surroundings. Von Helmholtz came to this conclusion when he studied the human visual system and noticed that the human eye is quite ill-equipped to produce high-quality images of a stimulus. He thus proposed that perceptual sensations are not direct copies of stimuli but constructed correspondences with them, which the human observer arrives at through repeatedly learned unconscious inferences.

Von Helmholtz investigated several characteristics of human vision that were consistent with this conclusion. Stereoscopic vision – our ability to combine the two overlapping images from our two eyes into three-dimensional vision – told him that the two different images projected onto the retinas of the left and right eye are resolved into one image by the brain.

The brain also has to adjust for size distortions caused by perspective, for example in forced-perspective photos in which objects appear to be closer, further away, smaller, or larger than they actually are, such as an elephant standing on a person’s hand or someone holding up the Leaning Tower of Pisa.

Von Helmholtz also noticed that we learn the spatial arrangement of objects by moving our fingers over them. All this taken together, von Helmholtz argued, suggests that perception is not an inborn ability like breathing, nor a direct registration of a stimulus; it is learned prediction.

The learned behavior of predictive perception, he reasoned, is determined by the properties of both our sensory organs and our past experiences. For example, the fact that we have two eyes instead of one inherently shapes stereoscopic vision and hence the types of predictions we make, just as our prior exposure to the world shapes the inferences we make. In other words, when we perceive something, we rapidly make unconscious guesses about what it might be rather than relying only on our sensations, though von Helmholtz admitted that it is not straightforward to distinguish what perceptual content might be contributed by predictive processes and what by the sensory input.10

Despite von Helmholtz’s influential views about inference in perception, scholars in the mind sciences did not, until recently, strongly advocate for predictive processing in the mind.

One reason for this appears to be that, although cognitivism strongly rejected behaviorist views, it nevertheless carried over substantial baggage from the earlier behaviorist frameworks, in particular the assumption of a linear progression from stimulus to response during human behavior. Cognitive theories, especially in the early days, often hypothesized a similar serial progression of processing. Information flow was typically described as unidirectional, from lower to higher levels of processing.11

Information processing during face recognition, for example, was postulated to proceed in distinct, successive, and sequentially organized stages from low-level sensory input to eventual high-level face recognition without any possibility of feedback from higher to lower levels.12

Such a sequence of processing – starting from the incoming sensory stimuli and working strictly upward as the brain interprets these signals until a face is finally consciously recognized – is often called bottom-up processing. During bottom-up processing, the sensory input at the observer’s retina is the entry point; the brain puts together all the incoming input and transforms the increasingly complex information in successive, systematic stages until there is some form of match with a face stored in memory and the now recognized face is outputted.

This strict input-to-output order reflects the view of humans as information processors. It is a hallmark of the computer analogy of the human mind in cognitivism: inputting, storing, retrieving, and outputting data. The bottom-up process can thus be described as ‘what you see is what you get’ or, to use the computer metaphor of the cognitive revolution, as purely data-driven.

The predictive revolution, however, suggests an alternative route to face recognition, relying on what is often called top-down processing: the use of existing knowledge and prediction to aid recognition of a face.

A striking example that prior knowledge and expectations can shape what we perceive is a phenomenon called pareidolia, the tendency to see an object or perceive a meaningful pattern where there is in fact none. Examples of pareidolia include when people believe they see a man in the moon or the face of Jesus on a piece of toast.

In line with von Helmholtz’s notion of unconscious inference, pareidolia suggests that our typically subconscious expectations can affect what we see. It is called top-down processing because such phenomena are typically interpreted as the brain sending general information ‘down’ to the sensory system analyzing the stimulus, information about what the perceived stimulus is likely to be. This process is usually framed as higher levels not waiting for bottom-up processes to be completed. In other words, low-level analysis of the basic visual features of the stimulus has not yet finished, but higher levels already ‘impose’ the interpretation that it is the face of Jesus on the toast in front of us.

Pareidolia is not restricted to vision; other examples are when someone believes they hear hidden messages in music recordings played in reverse, or voices in random noise. In accordance with the predictive revolution in the mind sciences, phenomena such as pareidolia are no longer interpreted as purely abnormal psychological processing. Rather, they are seen as consequences, albeit striking ones, of how the mind anticipates ongoing input.
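
One common way to make the contrast between bottom-up and top-down processing concrete is a toy Bayesian calculation. The sketch below is my own illustration, not the author’s model, and all probability values are invented: with weak, ambiguous sensory evidence, a strong prior expectation of seeing a face is enough to ‘impose’ the face interpretation, which is the flavor of explanation often given for pareidolia.

```python
# Toy illustration of top-down vs. bottom-up processing using Bayes' rule.
# Hypothesis: the pattern on the toast is a face. All numbers are invented.

def posterior(prior_face, p_pattern_given_face, p_pattern_given_noface):
    """P(face | pattern) from a prior expectation and the sensory likelihoods."""
    numerator = prior_face * p_pattern_given_face
    denominator = numerator + (1 - prior_face) * p_pattern_given_noface
    return numerator / denominator

# Weak, ambiguous bottom-up evidence: the pattern is only slightly more
# likely to occur if a face is really there.
likelihood_face, likelihood_noface = 0.6, 0.4

# Bottom-up only (neutral prior of 0.5): the interpretation stays ambiguous.
print(posterior(0.50, likelihood_face, likelihood_noface))  # 0.60

# Strong top-down expectation (prior of 0.95): the same weak evidence now
# yields near-certainty that a face is present.
print(posterior(0.95, likelihood_face, likelihood_noface))  # ~0.97
```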

1.3 What Are the Reasons for the Predictive Revolution?

It is an interesting question as to why the paradigm shift toward explaining the mind in terms of prediction has occurred in the last few years. The predictive turn is not a consequence of an analogy of the mind with yet another new technology. What then has changed in the mind sciences that provided a fertile ground for the predictive revolution to occur?

One reason has been the growing realization in the mind sciences that there are severe processing bottlenecks during human information processing. Our brains get bombarded constantly with a huge amount of information from our surroundings. Even the hundred billion neurons and hundred trillion synaptic connections in the human brain cannot handle this all at the same time.13

Prediction is a good potential solution to this bottleneck problem because it can do the heavy lifting of reducing the onslaught of information to surprising input. Prediction may allow the mind to focus on unexpected information, thereby reducing the severity of the bottleneck or avoiding it altogether.
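
As a rough, hypothetical illustration of this idea (my own sketch, not a model from the book), a predictor can forward for further processing only those inputs that deviate noticeably from what it expected, so that the bulk of expected input never competes for limited resources. The signal, the simple running-average predictor, and the threshold below are all invented for illustration.

```python
# Illustrative sketch: pass on only 'surprising' input, i.e. samples whose
# prediction error exceeds a threshold, and use the error to update the
# prediction. Signal, predictor, and threshold are invented for illustration.

def surprising_samples(signal, threshold=2.0, learning_rate=0.2):
    """Yield (index, value, prediction_error) for samples that defy prediction."""
    prediction = signal[0]
    for i, value in enumerate(signal):
        error = value - prediction            # prediction error ('surprise')
        if abs(error) > threshold:
            yield i, value, error             # only unexpected input is passed on
        prediction += learning_rate * error   # update the running prediction

stream = [5.0, 5.1, 4.9, 5.0, 9.0, 5.2, 5.1, 0.5, 5.0]
for i, value, error in surprising_samples(stream):
    print(f"sample {i}: value={value}, prediction error={error:+.2f}")
# Only the two unexpected samples (9.0 and 0.5) make it through the bottleneck.
```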

A second realization contributing to the predictive revolution has been the recognition that perception ultimately supports action. In much of traditional cognitive science, perception had been assumed to be a rather passive, stimulus-determined process in which a visual object or scene, for example, is recognized in a step-by-step bottom-up fashion to build a rich visual model of the surrounding environment. In contrast, researchers increasingly noticed that perception is rarely a passive process, but instead active and often related to the tasks people are engaged in.14 Perception tends to happen in the service of a specific action rather than the construction of a ‘visual copy’ of the outside world in our heads. Prediction here again offers the potential to concentrate on the most task-relevant perceptual information, which in turn can result in more rapid and efficient actions.

A third reason is the general adaptiveness of predictive minds if an evolutionary perspective on human behavior is taken. Prediction is useful to biological organisms because a forward-looking mind has distinct adaptive advantages over a less proactive one. In order to survive, biological systems must maintain their physiological states and forms in an often rapidly changing environment. Prediction is crucial for minimizing unpleasant surprises and avoiding predators. Extensive foraging for more and better information, for example, may increase sampling time, with negative consequences for staying within the boundaries of self-maintaining bodily states and for avoiding predators. The most successful solution from an evolutionary perspective is often to rapidly predict environmental states so as to maintain the stability that enables life to continue. Less is often more, in the sense that incomplete or ignored data may often support better adaptive decisions than continuing to sample and process information from the environment for extensive periods. Prediction hence may be an evolutionarily more relevant goal of processing for biological agents than ‘mere’ information processing.

A fourth reason is the apparent theoretical elegance of predictive mind frameworks in that they may overcome the shortcomings of both behaviorism and cognitivism. In hindsight, at least, it is obvious that the simple stimulus–response coupling explanations put forward by behaviorism do not do justice to the complexities of human mental life and behavior. The strict focus of cognitivism on often rather passive human information processing, however, also falls short as a plausible complete account of human psychology. The analogy of the mind as a digital computer has generated a huge amount of research and produced many insights but has reached its limits.

The predictive revolution promises to reconcile cognition and behavior as the intrinsically connected two sides of the same coin serving human interactions with the environment.
