In 1964, in the shadow of the Cold War, the Austrian economist and political philosopher Friedrich A. Hayek penned an essay called “The Theory of Complex Phenomena,” which argued that in systems of exceeding complexity such as the brain, a financial market, or social interaction, prediction and control were virtually impossible (Hayek [1964] 2018). Hayek foreshadowed a phenomenon that would soon be labeled “complex systems”—systems where the collective interaction of the parts entails the appearance of properties and behaviors that can hardly, if at all, be inferred from their individual properties. Complex systems are difficult to regulate from without. Because of the interaction of the large number of elements that constitute a system, new patterns and structures emerge in what Hayek in another article entitled “Kinds of Order in Society” described as “spontaneous” order “not made by anybody but which forms itself” (Hayek 1964:5). A spontaneous order might be produced or reproduced through the intended or unintended actions of individuals, but its final form cannot be consciously designed: “What we must get rid of is the naive superstition that the world must be so organized that it is possible by direct observation to discover simple regularities between all phenomena” (Hayek [1964] 2018:349).
Given Hayek’s interest in complex phenomena, perhaps not surprisingly he is also one of the key contributors to connectionism—a theory in cognitive science that explains intelligence using “simplified models of the brain composed of large numbers of mathematical units together with weights [a mathematical set of values assigned to each neuron or bunch of neurons] that measure the strength of connections between the units” (Buckner and Garson 2019). But connectionism also goes by another, more recognized name: neural networks. In 2023, Hayek’s idea of complex systems such as neural networks producing spontaneous order is ubiquitous, in the guise of artificial intelligence (AI)–based mathematical models that write essays, sell new pairs of shoes, or track music preferences. But neural networks are also appearing in a more unlikely place: artistic performance events that use AI to analyze patterns in text, sound, and image to generate new outputs that affect stage action, potentially producing new kinds of interactions between humans and machines. Whether in an experimental “AI Opera” at Lincoln Center in New York that used the AI-based “large language model” (LLM) GPT-3 (Rose 2022), a multimedia stage performance at the Ars Electronica festival for art, technology, and society in Linz, Austria, that claimed to be “the first performing arts production starring an artificial intelligence creation as the protagonist” (Ars Electronica 2022), or a dance performance that explores the relationship between a human dancer and an artificial, quasi-living entity embodied in the music (the case study in this article), the stage is increasingly being shaped by computational systems that attempt to mimic or surpass human intelligence and action.
Of course, there is both growing hype and concern surrounding the role of AI in the performing arts (Teampău 2022; Damiano et al. 2019). What we label “Performing AI” not only generates new aesthetic problems but also poses deep epistemological and ontological questions about the neural networks’ “conception” of human beings and society underlying such artistic events. These questions are linked to more fundamental issues of control and power relations, forcing us to reconceive how technologies alter modes of perception, action, and practice in live performance.
Our use of the word “performance” here not only references its well-understood definition as a “tangible, bounded event that involves the presentation of rehearsed artistic actions” (Bial 2004:59), but also describes the active, “vital materiality” (Bennett 2010) temporally enacted by human and nonhuman actants (as Bruno Latour labeled them [1996]). There is also the concept of performativity lurking in the background, which takes us back to Hayek’s economics coupled with his contribution to the theory of neural networks. It is well known that the term “performativity” careens across disciplinary boundaries, from linguistics (Austin 1962) and gender (Butler 1988) to the sociology of science (Pickering 1995; Barad 2003; see Salter 2020a; Velten 2012). We situate the performativity of AI, however, in yet another disciplinary framework: that of economics. Described by sociologists, economic philosophers, and political scientists, economic performativity argues that economics is not simply a mathematical description of the world. “[E]conomics, in the broad sense of the term, performs, shapes and formats the economy, rather than observing how it functions” (Callon 1998:2). Yet, economics is not only performative in the material actions produced from its models. According to Michel Callon, economics also creates new kinds of social-technical “arrangements” (agencement)—experiments carried out “in the wild” of the world (2007:312).
In a similar way, neural networks can be viewed as performative in Callon’s sense of the word. Originally emerging from military-scientific contexts in the late 1940s as abstracted and reductionist mathematical models of biological brain processes but only computationally viable since the 1990s, neural networks are not simply descriptive quantitative models of how brain processes—learning, pattern recognition, organization, and classification—function. As we see with each new release of the “Generative Pre-Trained Transformer” (GPT), such models are materialized enactments of the particular concepts, ideologies, and knowledge encoded in them. If such material enactment is the case, then what do these performative instantiations of mathematical models of “brains” have to do with labor on and around the stage? Labor is difficult to sever from economics, but in addition to the physical effort required of human performers interacting with these neural net–based entities, there is also the work of programming, adjusting, and tuning the parameters of such systems that eventually allows media and audiences to describe them as “expressive.” Such highly skilled “knowledge work” is framed by Michael Hardt and Antonio Negri as “labor that produces immaterial products such as information, knowledges, ideas, images, relationships, and affects” (2004:65). To further this concept: such creative labor with computational technology is not only immaterial but also “operational,” a take on filmmaker Harun Farocki’s concept of “operational images,” which proffers that “images do not represent an object, but rather are part of an operation.” For Farocki, the purpose of operational images is not to “depict or represent, entertain or inform but rather track, navigate, activate, oversee, control, visualize, detect and identify” (Operational Images n.d.).
Thus, could it be possible that working with neural networks in creative ways actually reconfigures human labor into new and still unknown formations: those of pattern seeking and the detection of signals generated by machine processes, just as Hayek imagined humans interacting in the god-like formations called markets? Could it be that the performativity of such events achieves something other than a liberatory “symmetry” of the human and the socio-technical nonhuman freeing us from the modernist split of nature and culture (Latour 1993)? Could the reimagination of the stage be a model of complex order and organization, of shifting statistically based signals and patterns where older notions of creative governance, power, control, and human agency must be rethought in relation to a newly emerging generation of computational paradigms involving data and prediction?
Such a reimagining has dramatic consequences for how we understand technologically driven performance processes. It entails displacing long-utilized ideas like the “stage as machine” (Salter 2010) or as a site of “mixed means” (Kostelanetz 1968), “intermedia performance” (Higgins and Higgins 2001; Bay-Cheng et al. 2015), “digital performance” (Dixon [2007] 2015), “cyborg theatre” (Parker-Starbuck 2011), or even more recent ideas of the algorithm (Morrison et al. 2019) in favor of an understanding of performance anchored in spontaneous organization, dynamics, and complexity where no sole human entity (actor, director, choreographer, designer) actually steers the overall event. In other words, if the performative effects of human creators responding to the actions of neural networks can be seen as a microcosm of Hayek’s larger reimagining of human interaction as “characterized by uncertain outcomes, limited knowledge and limited agency” (Slobodian 2018:232), then it is less the case that machines will become performing artists than that performing artists will become more like machines, responding to signals and prompts, patterns and structures in order to create something.
Complexity over Control or Prediction as Agency
In the 2019 TDR issue dedicated to “Algorithms and Theatre,” Ulf Otto asks what the “place of performance is in the societies of control” (2019:134). The “societies of control” refers to a late essay by Gilles Deleuze (1992) in which he argues that we are in a transition away from societies of sovereignty and discipline, represented in technological terms by clockwork and by the disciplinary structures foregrounded in Michel Foucault’s discussion of Jeremy Bentham’s panopticon (1977:195–230). In the societies of control, the computer, through its supposed malleability, modulation, and deformation, sets up a new kind of episteme, instituting the transformation of individuals into “‘dividuals,’ and masses, samples, data, markets, and banks” (Deleuze 1992:5). In the same TDR issue, Pizzo, Lombardo, and Damiano state that “in [traditional] experiments of digital intermedial performance the most important outcome is usually the live artwork produced” (2019:24). In practice, this translates to the precise structuring and programming of algorithms in the machines running the performance and the exact placement of cues in time. These efforts are made to ensure total control of the sequence the audience experiences; in other words, a “choreography.”
The use of algorithms, of course, complicates such purely human-driven approaches to artistic control. Algorithms in theatre or live performances, it is claimed, hand over the task of sequencing and organization to machines, as in Otto’s description of a Berlin performance by the group Turbo Pascal where the audience was continually physically reorganized and reclassified through a sorting algorithm; or Annie Dorsen’s A Piece of Work (2013), a deconstructed “machine made Hamlet” whose text, lighting, sound, and scenographic sequences were determined by the working of computational procedures such as Markov chains (Dorsen 2019). In fact, Dorsen’s work could be seen as a more computationally sophisticated version of John Cage and Merce Cunningham’s use of chance procedures. For Dorsen (and others using a similar process), the computer makes decisions instead of the I Ching or throwing dice; human creators possess the knowledge of the larger aesthetic system in order to organize creative decisions in a dramaturgically compelling way. In other words, the algorithms are machine-organized procedures that make (sometimes) seemingly random decisions in specific combinatoric sequences.
The algorithms that constitute neural networks, however, are different in that they use data to predict future actions. That is, they are models that adapt, error-correct, and improve their differentiating ability over time by being trained on already existing data that shows the machine what it needs to identify. In what is called deep neural network–based learning, the core element is thus not rules but “predictive power over human interpretability” (Jones and Wiggins 2023:248). Moreover, in the area of research called generative machine learning, the machine not only identifies existing patterns but also produces new ones based on already existing statistical distributions of data. In this context, certain patterns of order emerge or “self-organize” based on the net’s (and the human programmer’s) detection, tuning, and observation of existing patterns within a system. As computer scientist and artist Sofian Audry describes it,
Machine learning suggests a different way to deal with self-organization, in which one assembles different ingredients (data, model, training process) but lets the emergent system find its own way to achieve its goals, hence handing more power to the machine. (2021:16)
By incorporating machine learning processes that utilize neural networks to uncover existing patterns and to predict and generate future ones in live events, the labor of human stage workers behind the scenes shifts away from designing computational procedures and conditions that unfold through specific human ways of knowing (planning, programming, cueing). Instead, the expressive artistic act of programming for a technologically oriented performance in which AI is an actant has to be reimagined as modeling—choosing the right model, continually observing its output, changing its human-accessible parameters (called hyperparameters), and painstakingly readjusting it in order to produce certain patterns. These adjustments have a significant impact on the behavior of the system, with new structures of organization emerging from the trained neural network. While some artists retrain the nets during the performance, the process of training can be cumulative and iterative, enabling the neural network to reach complex behaviors and to exhibit different self-organized patterns from the same training during the performance.
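To make concrete what this shift from rule writing to model tuning can look like in code, here is a minimal sketch (our own illustration, not code from any production discussed here) of a tiny feed-forward network whose behavior is shaped only by its training data and a handful of hyperparameters; all names and numerical values are assumptions chosen for legibility.

```python
# A minimal sketch of "modeling" rather than programming: a tiny feed-forward
# network steered only through training data and hyperparameters, never by
# writing explicit rules. All names and values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters: the "human-accessible" knobs described in the text.
HIDDEN_UNITS = 8
LEARNING_RATE = 0.1
EPOCHS = 2000

# Toy training data: map a 2D "movement quality" (e.g., speed, direction)
# to a 1D "musical intensity". The mapping is learned, not programmed.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, :1] * 3) * np.cos(X[:, 1:] * 2)   # an unknown pattern to learn

# Weights: the strengths of connections between units.
W1 = rng.normal(0, 0.5, size=(2, HIDDEN_UNITS))
b1 = np.zeros(HIDDEN_UNITS)
W2 = rng.normal(0, 0.5, size=(HIDDEN_UNITS, 1))
b2 = np.zeros(1)

for epoch in range(EPOCHS):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass: gradient descent on mean squared error
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W1 -= LEARNING_RATE * dW1
    b1 -= LEARNING_RATE * db1
    W2 -= LEARNING_RATE * dW2
    b2 -= LEARNING_RATE * db2

# "Tuning" means re-running with different HIDDEN_UNITS or LEARNING_RATE and
# observing how the emergent mapping changes, not editing rules.
print("final mean squared error:", float((err ** 2).mean()))
```

Changing HIDDEN_UNITS or LEARNING_RATE and rerunning is the kind of observe-and-readjust labor described above: one does not edit rules; one retunes conditions and watches what patterns the trained weights produce.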
This operational process is thus not only that of representing already captured data in some kind of audiovisual form (like traditional stage practices that use media), but also collectively overseeing, training, and harnessing the capabilities of these systems to generate a coherent aesthetic presentation. In other words, the modes of organization (i.e., patterns) generated by neural networks continuously demand intervention by human interlocutors so they don’t simply appear as noise (random, without structure) to creators, performers, and audiences alike. Patterns can “emerge” from a series of interactions among the simple components of the system—human bodies. The machine outputs generated without a central design have to be displayed as images, sounds, or in any other kind of humanly perceivable medium. Patterns can also “learn” from experience how to make better patterns. All this suggests that the stage is a complex system in which modes of interaction are not about the choices or will of individual agents but instead the entire web or network of connections.
It is not by chance that computational neural networks used in artistic performances catalyze thinking about larger issues of feedback, spontaneous order, and organization. After all, the very concept of a neural network is already historically rooted in the interdiscipline of cybernetics: the science of control, organization, and feedback. A term coined by mathematician Norbert Wiener in the 1940s and originally focused on the internal regulation of systems via feedback, cybernetics returns, or “feeds back,” a system’s output into the system as information in order to affect its actions or goals.
While relatively short-lived, cybernetics had major influences across dozens of fields. Roland Barthes, for instance, famously called the theatre a “kind of cybernetic machine” ([1963] 1981:258). But it is the sociologist of science Andrew Pickering who argued that cybernetics (particularly what emerged in Britain after World War II) creates what he labels “ontological theatre” (not related to director Richard Foreman’s concept of an ontological-hysteric theatre): the staging of machines that “threaten the modern boundary between mind and matter […] in which people and things are not so different after all” (2010:18). In other words, cybernetics is a performative practice that dispenses with prediction and control, replacing these fundamentally “modern” notions with concepts like open-endedness, complexity, and the temporal evolution of systems in an “always-surprising world” (24).
Early cybernetic research famously driven by Wiener explored weapons like antiaircraft fire-control systems that merged human and machine, based on each element carefully tuning, self-regulating, and correcting the other through a process Wiener identified as “negative feedback.” This model of the human-machine, what is called a “servomechanism” in engineering, has consequences for how we conceptualize human beings—“none other than self-correcting black-boxed entities” (Galison 1994:264). But another area of interest among the physicists, mathematicians, psychologists, and engineers who constituted the cybernetics mindset was how neural structures in the brain could also be mathematically described as self-organizing entities. In a landmark 1943 paper in the Journal of Mathematical Biophysics, written before the actual realization of the first working digital computers, psychologist Warren McCulloch and logician Walter Pitts reconceptualized the electrical firing of a physiological neuron as something akin to a logical modeling system: an all-or-nothing set of switching models that, when different neurons were combined, could produce logical propositions (McCulloch and Pitts 1943). McCulloch and Pitts’s work soon influenced other researchers, from the Canadian psychologist Donald Hebb ([1949] 2002), who conceptualized that neurons learn by changing the strength of their connections with other neurons (via electrical and chemical synapses), to the American psychologist Frank Rosenblatt, who in 1958 built what is considered the world’s first working example of a hardware-based neural network, called the Mark I Perceptron (Rosenblatt 1958).
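For readers unfamiliar with the 1943 model, the following sketch shows the basic idea of such an all-or-nothing unit in contemporary code; the weights and thresholds are standard textbook-style choices rather than values taken from McCulloch and Pitts’s paper.

```python
# An illustrative sketch of a McCulloch-Pitts-style unit: an all-or-nothing
# "neuron" that fires (outputs 1) only when the weighted sum of its binary
# inputs reaches a threshold. The specific weights and thresholds below are
# textbook-style assumptions, not drawn from the 1943 paper itself.
from typing import Sequence

def threshold_unit(inputs: Sequence[int], weights: Sequence[int], threshold: int) -> int:
    """Fire (1) if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Combining such units yields logical propositions:
AND = lambda a, b: threshold_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    threshold_unit([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```

Wired together, such units compute logical propositions (AND, OR, NOT), which is what allowed the physiological neuron to be treated as a switching element in the first place.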
The cluster of researchers conceptualizing neural nets also included Hayek. Known as a Nobel Prize–winning economist who was a key force in the group of political theorists and economists who established the foundations of neoliberalism, Hayek founded the Mont Pelerin Society in 1947 to champion free markets. Neoliberals champion a society shaped by market-driven behavior, exemplifying what historian Quinn Slobodian claims are long-running aims to redesign “states, laws and other institutions to protect the market” (2018:6). Neoliberals want to reshape “extra economic conditions for a free economic system,” including a controversial separation between capitalism and democracy (6–7). Indeed, to get a sense of neoliberalism’s continued pervasive influence, one has only to look at a wide range of examples from Hong Kong and Singapore to Dubai, post-Brexit UK, the USA, and a multitude of global corporations espousing libertarian-based economic “solutions” for addressing climate change, such as geoengineering and carbon credits, or data-mining-based interventions into democratically held elections (Slobodian 2023).
Hayek also had a lifelong interest in psychology, developing (independently of Hebb and Rosenblatt) the concept that networks of neurons in the brain could function as a kind of classification system, a fundamental concept in supervised machine learning. This theory is not just scientific. Hayek proposed broader epistemological connections between mathematical models of brains and larger questions about the complex organization of society. In his 1952 The Sensory Order, Hayek argued that consciousness of events in the physical world is based on a network effect. The sensory order is constructed from the neuronal connections classifying information; objects external to the mind have no intrinsic properties except how the nervous system classifies these properties. In no uncertain terms, “we live in a sensory order that is created by the central nervous system” (1952:844). Hayek also put forward the notion that memory was not only a function of present connections but also of past links between nerve bundles, that is, between neurons. Here, the brain can be seen as a classification machine that constructs reality rather than simply interpreting it.
What is more extraordinary is that Hayek’s model of the brain is also that of a distributed computer (many computers working together) whose operations are essentially unknown to the user: a cybernetically structured assemblage composed of “hierarchies of systems of classifier algorithms, which were opaque to the thinker […] but regularly made use of in order to interact with the environment” (Mirowski and Nik-Khah 2017:680). It is here that Hayek extended his work on the brain into larger questions about order and organization. Order signifies an arrangement of specific “relations between the parts according to a preconceived plan”; in other words, an organization (Hayek [1964] 2018:4). But Hayek also discusses a “spontaneous order”; one “which is characteristic not only of biological organisms […but also is] not made by anybody [and] which forms itself.” Spontaneous orders are governed by rules (even though those rules may not be directly known by the elements that are affected by them) and determined by conditions that may be previously established. In other words, a spontaneous order might be produced or reproduced through the actions of individuals but its final form cannot be consciously designed, according to Hayek ([1964] 2018). Examples of Hayek’s spontaneous order range from social systems to the operation of markets and political structures. In this sense, the market is a “black box” like Norbert Wiener’s human-machine weapons or the brain—a decentralized set of signals whose totality is not known by any one agent. Its overall functioning is only known by its god-like self.
In this idea of decisions made without complete information, we can see how cybernetics had a profound effect on Hayek. According to researcher Gabriel Oliva Costa Cunha, not only did cybernetics shape Hayek’s thinking about the brain as a self-organizing system, but it also radically influenced his larger epistemology of how humans operate within emergent “systems where the information possessed by the whole is dispersed among its numerous parts and in which each part could not possibly grasp all the knowledge of the whole” (2015:23). Oliva argues:
In both systems, the mutual coordination of the parts (neuron or individual) is reached not by each part’s explicit mastery of a large amount of information of the system (brain or society), but by the tacit use of information implicitly conveyed by the operation of the rules that constrain the relationship between the parts (such as the structure of neural firing paths and the price system). (23)
This concept of imperfect or partial information thus builds a strange connection between Hayek’s notion of order and neural network performativity. For Hayek, markets are not only a description of transactions but actual material enactments of ways that society should organize and govern itself: markets perform. Hayek’s performative way of thinking is emphasized by philosopher of economic thought Philip Mirowski:
The stress on complexity and the inability of any individual human to really know a phenomenon with any certainty; the insistence that systems of lesser complexity are impotent to control those of greater complexity; the existence of “a Plan far superior to anything that an individual can devise”; the postulate of a scale-invariance of the information processor from inanimate object to brain to the marketplace; the insistence that “there is no such thing as society” through the blurring of the distinction between human and nonhuman. There is no more prominent social theorist of the “dance of agency” and “performativity” than Hayek. (2012:194–95)
The charge that the posthumanist move of blurring human and nonhuman, not only in the social sciences and humanities but also in performance studies, is neoliberal might seem unfair. Yet if Hayek claims that neural networks, in their classificatory power, are similar to the workings of markets, then such market design aims to performatively enact, and not simply describe, an epistemology of how society (or its microcosm, the stage) should run. In other words, Hayek’s turn towards complex systems in the brain was soon generalized into more profound epistemological questions about “the use of knowledge in society.” If individuals cannot grasp the workings of complex systems because their knowledge is, as Hayek famously claimed, “not given to anyone in its totality” (1945:520), then central control, planning, direction, and regulation of such systems is impossible. While Hayek thus critiqued the idea of planning and prediction, the kind of prediction that he criticized was that of central economic planners and government regulations. In contrast, the predictive power of current data-driven algorithms is distributed, complex, and stochastically shaped.
What then is the role of human beings and their labor in artistic performances reenvisioned as complex systems? Hayek’s radical move was to argue that the economic problem of organizing markets was that of a “knowledge problem” from inside: how individuals who operate in a complex system do so with limited or “imperfect” information and thus self-organize without any global mechanism in order to compete with each other. Because such models are too complex to observe the individual workings of their interactions and are therefore inherently “unknowable,” any plan for central control and governmental regulation is bound to fail. Therefore, the question of governance becomes less about exerting control or imposing discipline on individuals and corporations than grasping the churning dynamics of the complex system and how it produces and shapes the specific collectivities that operate within it. Individual agency, whether human or machine, is continually reorganized through the politics of the larger system; individuals within such a system are no longer creators imposing a certain vision on the whole. Instead, Hayek described those individuals within the market as pattern seekers, trying to understand, interact with, and react with/to their limited knowledge of the changing structure of the market’s organization.
One could return to Barthes’s statement about theatre as a “cybernetic machine” sending out a multitude of signals that both creators and audiences struggle to make sense of (Carlson 1993:491). Viewing the stage as a complex system also challenges decades of director-auteur theory (what in the German-speaking world is still known as Regietheater) in which the director, dramaturg, choreographer, or even the computer (in the case of “algorithmic theatre”) generates commands as an all-seeing eye exerting total control. The complex system of intertwined and imperfect information decenters, on the one hand, any actant (including the control system of the computer) as a locus of control and, on the other, the ability of any element (including humans) to respond to actions generated at a global level in the system by all of the actants.
Because it is impossible to have total knowledge about the workings of a complex system, both creators and performers seek partial information and patterns generated locally by these systems. Moreover, performing with neural networks demands an increased degree of attention to the unfolding of events in time since “connectivity [is] inseparable from its history of transformation” (Varela 1992:245). In other words, the temporal patterns generated by the network reinforce connections that can generate new forms of behavior. This reacting to the history of a system is something that machines are good at but humans less so, especially when it comes to understanding how complex patterns unfold over time. It might seem strange as a creator to lose control of a system that one has painstakingly set up to achieve an aesthetic effect. Yet the actual tuning of such supposedly automated systems is in itself a kind of bureaucratic-administrative work, particularly in creative contexts where the reigning idea, as AI researcher/artist Memo Akten argues, “is how the system might surprise oneself” (in Audry 2021:220).
The concept that AI shifts work towards a bureaucratic framework is not new. As the historian of AI Jonathan Penn writes, AI’s automated origins are more based in “post-war American administrative logics” than an interest in “neural dynamics” (2020:15). At the same time, as artist-scholar Hito Steyerl points out, labor in contemporary AI is anything but automated. On the one hand, claims of automation mask the unpaid or nearly unpaid labor in the developing world whose job it is to label images for generative AI systems like Stable Diffusion, designed to produce new images from natural language prompts, or to “content manage” the most egregious and violent social media posts. On the other hand, the days of immaterial digital labor are numbered because digital professionals will be forced to “upgrade by renting services built on their own stolen labour in order to remain ‘competitive’” (Steyerl 2023:25). Steyerl’s argument, however, is focused mostly on production, not on the ontological and perceptual shifts that may take place as we increasingly distribute our own ability to create to machines. But if we take seriously Hayek’s models of neural, human, and market as similar forms of interaction at different scales, then clearly the multitude’s immaterial labor can also be reimagined, predominantly as pattern recognition involving rapid response to the signals and predictions that the machines spit out before humans can do the same. Creative labor may thus be reimagined through the foundational history of neural networks themselves, in which mathematical models “swapped authority of centrally determined routines like Aristotelian formal logic for a different sort of authority, namely a set of decentralized determined routines” (Penn 2020:110). In this case, labor operates in competition, not collaboration, with others.
There are several important points to summarize here: The first is that neural networks are less about centralized modes of control than they are about histories of connections and memories of those connections. Second, in their computational framing, neural networks are examples of complex systems where it is nearly impossible to grasp the workings of the entire system. The unit of agency is thus not the individual neuron but the assemblage—the net. Third, neural networks are performative not only in their operations but also in their instantiation of certain epistemic models of how the world works: central control gives way to self-organization as a new form of structuring power and producing human and machine subjects. Fourth, and most importantly, neural networks’ historical links to neoliberal ideologies of the unknowable self-regulating flow of information in markets tend to make them contestable objects. This is especially true in the context of art performances that often aim to critique neoliberal forms of power and governance by using the same systems to make art. This art often is, or includes, a political critique pointing out the cultural, racial, and gender inequities of biased datasets. In this sense, it might be that artists working with neural networks in artistic performance and Hayek’s theories of order, organization, and humans as mere elements in vast unknowable systems, like iron filings subjected to magnetic fields, have much in common with each other.
Digital Otherness
On a cold Montreal night in February 2019, the public enters a space that bears little resemblance to a proscenium or black box theatre. The technological apparatus that makes the performance possible is exposed; there is no backstage for the performer or technicians to hide. Furthermore, the lighting designer is sitting almost two meters above the ground at one end of the room, at a perfect vantage point from which to clearly see every move the dancer makes. In this peculiar arrangement of technological and theatrical apparatus, rows of spectators face one another on either side of the room.
As the lights dim for the premiere of Altérité Numérique, dancer Myriam Arseneault-Gagnon wanders in the space looking for something or someone. The music slowly builds, introducing strange electronic drones and polyrhythms that transform as the quality of the movement, such as speed and direction, changes. After some time, clearly perceivable melodies, harmonies, and rhythms emerge. The dancer identifies the coherence that is happening in the music and starts to dance to the rhythm, almost playfully transforming the sounds with rapid movements. The music suddenly changes to an eerie register, with sounds like whispers and sirens. An immediate reaction from Arseneault-Gagnon follows as she modulates the quality of her movement and facial expressions, mimicking a sort of broken mechanical doll. The playfulness is over. The sound does not react in harmony with her movements anymore. While the first few minutes of the piece gave an impression of control, almost like an augmented musical instrument, the performer now seems at the mercy of the technological system.
Noticing a momentary break in the music, lighting designer Benoit Larivière fades out the spotlights on the dancer, leaving the room in almost complete darkness and silence. A distorted high-pitched sound is suddenly heard, like a shortwave radio picking up a faint signal. As the sound fades in, a dim red light illuminates the center of the stage where Arseneault-Gagnon stands, almost immobile. As if possessed by the distorted high frequencies, her entire body contracts and, as it does, the sound’s amplitude lowers. At each sign of relaxation in her muscle tension, the distortion picks up again, triggering an intense reaction in the dancer’s body that decreases the intensity of the noise. Soon, Arseneault-Gagnon is lying on the floor fighting to keep the strange noise under control, expending her remaining energy to stop the distortion from flooding the room. At this moment, the audience senses the relationship between the dancer and the invisible artificial being. The feeling of struggle and the dancer’s visceral reaction to the sound transformations is palpable. It is unclear who or what is in control of the performance, if anything is. After long minutes, the sound fades out, giving Arseneault-Gagnon a chance to relax her body. The fight is over, but not without leaving the dancer exhausted from the effort.
Performing with neural networks represents an enactment of Hayek’s theory of complexity. In Hayek’s epistemology, economic actors use signals such as pricing to detect patterns and make exchange decisions based on incomplete information. In Altérité Numérique, Arseneault-Gagnon’s task is to detect sensory patterns in sounds generated by the machine learning system in order to create a choreographic improvisation. In this context, both the human and the machine become pattern seekers relying on sensory stimuli or digital data to determine their actions, contributing to the formation of new patterns. From an aesthetic standpoint, the performance work thus emerges from the complexity in this web of patterns, identification, and actions. The development and performance of Altérité Numérique reveals the way neural networks’ performativity transforms the creative process in live art. This transformation is twofold: as described by Hayek, incomplete knowledge of the system makes its full understanding impossible, requiring the human actors/creators in the system to become pattern detectors; and, as we describe below, the labor of creation itself shifts toward detecting and responding to the patterns the system generates.
In creating Altérité Numérique, the artists we interviewed intended to go beyond the probabilistic and chance-based music and movement used by composers like John Cage or Brian Eno and dancer-choreographers like Merce Cunningham. Altérité Numérique’s artists wanted to tap into the pattern recognition ability of neural network–based machine learning. To do so, they used an “unsupervised” algorithm (one that works on data that is not already categorized or “labeled” by humans), analyzed existing classical music scores, and created clusters of musical notes that were found to be closely related to each other in the musical pieces. These clusters were then organized into a 2D map in which each point of the map referred to a group of notes that could be used for the composition. During the performance, an artificial agent unaware of the larger context it was operating in was then able to pick a specific point on the map and select musical notes from the associated cluster, thus transforming the coloration of the composition during the performance and providing new improvisational triggers for the performer. Patterns of movement recognized by the agent triggered various changes in the music. These changes affected the score and the rendering of the score through sound synthesis and included: (1) a new point selection in the current map; (2) a new map and the selection of a new point; (3) a change in the sounds produced by the synthesizers; or (4) a change in the interactive mapping between the qualities of movement and the electronic sound timbres. The reason behind the integration of this complex architecture involving multiple machine learning algorithms was to create a distribution of control over the piece and to observe how, in the course of the performance, this control could be exchanged.
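While the production’s actual code is not documented here, the general pipeline described above (unsupervised clustering of note groups, a 2D map of clusters, and an agent selecting points) can be sketched hypothetically as follows; the note features, cluster count, library choice (scikit-learn), and the agent’s random walk are all our own illustrative assumptions.

```python
# A hypothetical sketch of the pipeline the text describes: cluster groups of
# notes found in existing scores, lay the clusters out on a 2D map, and let a
# simple agent pick points to draw notes from. This is NOT the production's
# actual code; every choice below is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Stand-in for analyzed scores: each row is a short group of notes encoded as
# a fixed-length vector of MIDI pitches (a real system would use richer features).
note_groups = rng.integers(48, 84, size=(500, 4))

# 1) Unsupervised clustering: find groups of notes that belong "close" together.
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(note_groups)

# 2) Project the cluster centroids onto a 2D map; each point on the map now
#    refers to a cluster of notes available for the composition.
map_2d = PCA(n_components=2).fit_transform(kmeans.cluster_centers_)

# 3) An "agent", unaware of the larger context, wanders the map and selects
#    the nearest cluster; its notes become material for the next musical moment.
position = map_2d.mean(axis=0)
for step in range(5):
    position = position + rng.normal(scale=0.5, size=2)      # random walk
    nearest = int(np.argmin(np.linalg.norm(map_2d - position, axis=1)))
    notes = note_groups[kmeans.labels_ == nearest][0]
    print(f"step {step}: cluster {nearest}, notes (MIDI) {notes.tolist()}")
```

Even in so small a sketch, the notes selected at each step are not authored anywhere in the code; they fall out of the clustering, the projection, and the agent’s wandering, which is the kind of distributed control the artists describe.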
When integrating neural networks, the objective is not to program every neuron and every connection. This, in fact, is the role of the learning algorithm itself. In Altérité Numérique, data from pixel values, movement, and musical notes was used to identify patterns and to modify the various connection strengths between neurons so that the resulting configuration could be translated into performative actions drawn from the computer. The artists have neither complete knowledge of nor control over the direct actions taken by the neural net at any given moment. For Larivière, working within such a system involved preparing for situations that might emerge during the performance and relying on knowledge and experience of his tools (light console and stage lamps) to react to human and machine actions onstage.
Using networks of interconnected artificial neurons able to learn from data by adjusting the strength of their connections (weights) transforms the traditional mapping between input and output imagined in cybernetics and found in procedural computing. In complex systems, the “larger structure as a whole will possess certain general or abstract features which will recur independently of the particular values of the individual data” (Hayek [1964] 2018:336). Thus, observing the code and variables of a complex system is largely irrelevant to understanding its macro behavior. Instead, observing the system’s behavior, in this case through music, allows an embodied understanding of the system’s actions and the identification of patterns otherwise invisible to the human performer. As Arseneault-Gagnon explained in a postrehearsal interview: “being exposed [to the system] helps! I feel like the more I do it the more I discover things. It takes me to places I didn’t think of every time. There’s a system, I know, but because it’s organized differently [it] really takes you somewhere else” (2019).
Information about a system is directly linked to the control one has over that system. In theatre, the lighting designer usually knows exactly what intensity a light should be at any given moment and a dancer usually knows what movement they must do at any given time in a performance (there are, of course, exceptions). This information enables a visualization and prediction of the outcome, allowing the artists to exert almost complete control over the work. What can be known about the state of the performance over time, however, if it becomes impossible to map specific states of a complex system (input) to an action or decision (output)? Indeed, only the general conditions required for different types of action to emerge can be known. Thus, when thinking of artistic performance itself as a complex system, understanding these general conditions requires an embodiment of the algorithm through sensible material like sound or light, and a sensorial experience of the system in context. When developing Altérité Numérique, the artists found that the impossibility of accessing perfect information from the system in order to predict its outcome posed a formidable challenge to which they had to adjust. Since the program was tasked with changing multiple parameters in the performance, from the melodies to the types of sounds and sound modulations, an initial strategy was for the developer to verbally inform Arseneault-Gagnon of the changes in the system (as they appeared on the computer screen) to help her understand the system’s actions. But this was discontinued. Arseneault-Gagnon talked about it in a discussion after a rehearsal session: “this time we could not talk, and that enabled me to be in the present moment and avoid trying to reproduce moments that happened in past performances […]. I must trust my body, the memory in my body, instead of an intellectual memory” (2019). In other words, thinking about the state of the system from a computational perspective made the decision-making process more challenging for Arseneault-Gagnon as she handled in real time the formidable task of responding to a system whose main goal is the generation of new patterns through continually new data distributions. Relying mainly on bodily sensations that arose in response to the system thus enabled the performer to freely use the sonic transformations and musical composition generated by the system as improvisational triggers. Providing Arseneault-Gagnon with information about the state of the system, by contrast, clearly conflicted with her embodied experience, hindering her capacity for improvisation.
How then can the rehearsal process and the creative labor of performance artists adapt to the use of neural networks? In creating Altérité Numérique (2019) as well as another work in collaboration with choreographer Axelle Munezero called Temporalité Expressive (2017), the rehearsal processes mainly centered on exploration and observation in order to grasp the interactions between the different entities in the unfolding event. In traditional dance and theatre creations, improvisation and exploration are important parts of the creative process; however, they represent only the initial portion of the development. Referring to her choreographing methods for traditional dance performances, Temporalité Expressive choreographer Munezero stated, “when I have the different [choreographed] movements, I then need to place them in time to create the narrative curve” (2022). One can infer that rehearsals are thus traditionally used to erase every variation from one performance to the next in order to ensure perfect control over all elements of the stage, humans and machines alike.
When working with complex systems, the improvisation and exploration in the creative process become the core of the rehearsal work. Arseneault-Gagnon: “[what the public sees] is an improvisation and interaction, but all the steps [of exploration in rehearsal] before are necessary because that’s what creates the complexity and intention [in improvisation]” (2019). The performer’s labor in this context becomes “operational” in Farocki’s sense. Movements realized in rehearsal do not specifically represent the final outcome of the work, but are necessary actions to enable the system, composed of humans and algorithms, to operate. Much of the rehearsal time with the system served to gather movements to train the machine learning algorithms on new musical sequences and movement patterns, and as a means for the performer to make sense of the musical content generated by the machine itself. The labor of creation shifts as more time is spent trying to understand the patterns that nonhuman entities produce and learning to respond to them. In such a rehearsal process, control is reorganized and human creativity is not replaced by the machine but instead constrained to giving the performers an embodied form of understanding of the behavior of the machine. At the same time, practice with the system enables learning about the system’s actions as improvisational triggers. This then becomes the renewed challenge of the choreographer’s labor: to help dancers interpret the ultimately unknowable actions of the machine in a system in which the emergence of the performance itself is unknown.
Performing Neural Networks as Ontological Theatre
To understand the effects on human artistic labor of rapidly introducing neural network–based machine learning into contemporary performance contexts, we have taken a rather circuitous route through the history of cybernetics, neural networks, complex systems, and neoliberal concepts of knowledge. This is clearly what the sociologist John Law has called a “mess”: when the social-technical-aesthetic-political-economic aspects of the phenomenon of “performing AI” meld and crash into each other, making it difficult to disentangle one from the other (Law 2004). So be it. This is, after all, the way that those working in the field of science and technology studies understand how the “social world is inscribed into technology in the processes of its making and use” (Salter et al. 2017:139).
Neural networks, however, are not just a mathematical concept. They are also the realization of Pickering’s ontological theatre, which stages “a vision of the world as populated by systems and entities having their own dynamics and that interfere with one another performatively, on the level of doing rather than knowing” (2007:13). Yet, in Hayek’s vision of the workings of neural networks, this lively, performative world is one of limits and constraints in which humans react to patterns that one can only locally understand and respond to. If these descriptive models of neural processes in mathematical terms thus instantiate performative transformation in the very fabric of the material world, then we also need to rethink longstanding questions about the relationship between performativity and the agency and labor of creative human subjects. To say that human agency is throttled by computational systems and their Silicon Valley purveyors, as the current arguments about algorithms and “surveillance capitalism” (Zuboff 2018) would have it, is not enough. After all, Judith Butler’s gender performativity also relies on a relatively constrained conception of agency in which gender identity is not only iterative and citational, but socially constituted by a panoply of forces beyond our direct comprehension (1988:519).
Yet, Hayek’s scale-invariant move from neuron to body to market to cosmos also challenges existing notions of performativity from performance theory and linguistics, which still assume that the human agent and their work should be the central unit of analysis, even if that human is constrained or constituted by systems of social discipline and control. In the case of Hayek’s ongoing legacy flowing into the somehow related but incongruous world of big data and machine learning–driven neoliberalism, the human agent disappears into a network of connections and signals. As Slobodian articulates, in Hayek’s worldview “The autonomous individual is an illusory effect dependent on its relation to the whole—which, in turn, is dependent on that illusory effect” (Slobodian 2018:229–32).
It would be strange if one entered an acting class or a directing seminar and was told that the key ingredient in creating a powerful performance was being able to respond to a barrage of statistics and patterns generated by a computer. But this might be where some artists are heading. The structuring of artistic performance events with AI systems could predominantly shift to “prompt engineering,” and the hard work of organizing bodies and media for aesthetic purposes could eventually shift towards responding to the probabilistic output of ever faster mathematical renderings of choices and decisions. In this sense, performing artists working with such generative systems should have no fear of being replaced by AI, for they seem already to internalize the engineering worldview of pattern organization and matching that the machines spit out. In this way, Philip Mirowski’s observation that the leveling of relationships between humans and, in this case, nonhuman machines might produce neoliberal constructions of the human that we have not imagined seems entirely appropriate. It is no longer only aesthetic expression that is at stake in the emerging performances of AI. It is also how we perceive our own fragile being in these systems in which a totality of knowledge diffused among human, machine, and environment becomes increasingly opaque.