Introduction
The increasing impact of artificial intelligence (AI) and smartphone technology comes at a unique juncture in the history of contemporary society, which is to say the history of the capitalist mode of production. At a global level, capitalism has been the prevailing economic, political, ideological, and social context for over a century, with neoliberalism as its guiding principle since the 1980s. As a result, all production during this period – be it the production of materials or knowledge – has been influenced by the context of (neoliberal) capitalism.
The nature of this influence is contested. From a Marxist perspective, the capitalist system is one which harnesses every area of social life and production towards its main purpose: capital accumulation (Parenti 1997, 122, 132–135). This means that the social world, the field of production, is a space where interests are either aligned with or in opposition to the objective of profit accumulation. Our agency, the things we do, produce, and control, as individuals and as a society, is relative to our position in this structure (Bourdieu 1987, 2).
This reality is integral to the motivations behind, and outcomes of, technological and social development, and it is central to this commentary on the relationship between AI and individual and collective memory. I propose here that the factors driving AI's implementation and trajectory, from weapons of war to social media, have the same capitalist and neoliberal roots as those impacting upon and weakening society's capacity for critical thought, reflection, and action. With the availability of incomprehensible amounts of information, in an era where the space for collective comprehension has been replaced by an infinite spectrum of individualistic consumerism, there is a risk that individual and collective memory – and, by extension, society's critical faculty – is on a myopic course.
It is precisely at a time when neoliberalism has restructured society on the basis of being an individualistic consumer, with narrow scope for individual or institutional opposition to this principle (Gilbert and Williams 2022, 42, 77), that we are becoming increasingly dependent on technology that encourages us to retreat into highly personalised yet opaque algorithmic realities. Anything is possible in our own virtual worlds and feeds – our relationships can be as we want them to be. There, infinite choice and personalisation give us a sense of power. Yet just as we have limited control over aspects of life such as housing, employment, privacy, and community, our virtual worlds are owned and controlled by unaccountable Silicon Valley elites. Their use, however ostensibly empowering and practical, is conditional on the forfeiture of personal and collective agency.
We use AI to augment our memory and understanding, just as AI uses us to enrich its database for providing that memory and understanding. This creates a memory loop or feed (Hoskins et al. forthcoming); one where both components are conditioned by the framework of capitalism. The risk I identify here is that the loop becomes a spiral of capitalist hegemony, with each rotation alienating humans further and further from control of their own conditions, memories, and selves. Capitalism has long normalised the commodification of life and self. Yet with the astonishing scale of AI, whose ostensibly all-seeing and all-knowing capacity gives it a veneer of objectivity, in an age where there is no time to think about problems, only to solve them, the solutions of capitalism may soon be the only ones we are able to conceive.
There is a considerable body of work exploring the new memory ecologies of the 21st century. Theories of connective memory (Barnier and Hoskins 2022) and grey memory (Hoskins and Halstead 2021), for example, consider the impact of information overload and hyper-connected obscurantism in the digital age. Here, I propose that the current juncture in capitalist hegemony can be understood as an experience of myopic memory. This is where deep understanding is the enemy of instant gratification, where the capacity for critical action suffers with the prevalence of content consumption, and where the scope for agency in our lives is supplanted by a utility that is often technocratic and highly politicised.
The aim of this short commentary is to provide a preliminary conceptual framework for further empirical research and theoretical debate.
I develop the concept of myopic memory with two core claims:
1. To trade in human memory for AI memory is to narrow the scope of our understanding to the prism of capitalism.
AI, predicated on data accumulation, is currently developed, produced, and implemented within the context of a system whose primary objective is capital accumulation, meaning AI-generated or AI-supported memory is laden with the objectives of capitalism. It is therefore a memory with an explicit purpose, not necessarily in keeping with the interests of individuals and groups positioned less favourably in relation to capital. To uncritically accept the memory bias at technology's back end is to narrow the scope through which we conceptualise ourselves and the world.
2. Under neoliberalism, we don't have the time or space to be critical: remembering is inconvenient.
Decades of neoliberal policies and ideas have alienated the working class from material security and organisational capacity. Soaring inequality between rich and poor, both within and between nations, has made society extremely precarious. There is limited time and space for deep comprehension and reflection. In this real-world context, cultivating a critical perception and organising a political challenge is inconvenient; it is much easier to survive through a virtual experience of life made simple by utility apps for bureaucratic digital navigation, uncomplicated relationships, and distracting dopamine escapes.
These claims and their relationship are expanded upon in the sections that follow. It is my conclusion and central argument that society is at risk of experiencing a collective myopia due to neoliberalism's reconstitution of memory, in the individualised age of the data commodity, as something algorithmically produced and accepted rather than mediated by a wider range of factors and social groups. Capitalism has long put limitations on our agency; it does so now under conditions that acutely undermine our capacity to ask why, or better still, to do anything about it.
Capitalist technological development under AI: Same game, new rules
My first core claim, on the myopic risks of uncritically accepting a capitalist version of past, present, and future, rests on capitalism's enduring history of using technology to distort and augment how we see ourselves and the world.
Anderson (2006, 160–185) recalls the legacies of Western colonialism in southeast Asia, where subjugated populations were continually categorised through censuses and mapmaking to formalise the means by which their given status precluded certain rights. Anderson (2006, 169) notes that these processes ensured populations were ‘mapped from on high’. Parallels can naturally be drawn with the age of AI, where individuals are constantly mapped from on high and exploited through algorithmic decisions that reflect an imagined ‘self’ or profile of the individual – through locations, spending patterns, and clicks – one that has been rendered from humanly incomprehensible amounts of personal data.
In 19th-century British colonial Malaya, censuses forced an extraordinary and ‘continuously agglomerated, disaggregated, recombined, intermixed, and reordered’ categorisation of subjugated Malay people (Anderson 2006, 163–164). Highly racialised categorical identity distinctions in Dutch East India Company Indonesia were imagined, quantified, and perpetuated to serve political ends. Indeed, these could see one's census categorisation determine how they ‘dress, reside, marry, be buried, and bequeath property’ (Anderson 2006, 168). The process Anderson outlines here is one of a deep alienation from one's own legacy, where the ‘official’ and highly political depiction has an enduring impact on the material realities of life and death.
Colonial states, driven by capitalism and technological developments in capitalism (then: print, now: AI) ‘did not merely aspire to create, under [their] control, a human landscape of perfect visibility; the condition of this “visibility” was that everyone, everything, had (as it were) a serial number’ (Anderson 2006, 184–185). This suggests that the model was one of dehumanisation – indeed, Césaire (2000, 42) proposes that ‘colonisation = thingification’ (cited in Downey 2021). Now, we are ‘things’ tracked and profiled with the most comprehensive serial number ever known: our digital footprint. The salient matter is establishing the extent of the risk posed by this extreme iteration of capitalism's longstanding tendency to categorise, objectify, and track us.
Many in the field of AI – including OpenAI [1] (creators of ChatGPT [2]) – state their concern with the hypothetical risk of a ‘superintelligent’ AI ‘going rogue’ and threatening humanity (Leike and Sutskever 2023). This is both obfuscatory and ironic; it kicks responsibility into the long grass. Capitalism already has a long-established precedent for crafting and maintaining a ‘superintelligence’ over the people it oppresses, defining their histories, and using it to map out their futures. Yet because neoliberal politics defers risk from the level of the state to that of the individual, companies can disavow the actual harm they cause now by deferring risk to a hypothetical point in the future.
But what might this look like in the age of AI?
AI is broadly understood as the capacity of a non-human machine to learn through repetition and recognition to the point where it can replicate human rationality in its actions (de-Lima-Santos and Ceron 2022, 14; Gil De Zúñiga et al. 2024, 30). A central feature of the advanced level of AI is its generative capacity. Generative artificial intelligence (GAI) [3] such as ChatGPT is powered by large language models (LLMs) that memorise patterns in data to predict future patterns. LLMs are able to make predictions after training has tuned millions, billions, or trillions of parameters (adjustable weights encoding options and probabilities), using existing data available online such as articles, posts, and books (Mearian 2024).
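To make this prediction mechanism concrete, the sketch below is a deliberately toy illustration of the underlying principle: memorising patterns in a corpus and converting them into next-word probabilities. It is not how production LLMs are built – they learn billions of neural-network parameters rather than simple counts – and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text LLMs are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Return (next_word, probability) pairs, most likely first."""
    counts = transitions[word]
    if not counts:  # word never seen: no basis for prediction
        return []
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

# "Inference": the model can only reproduce the statistical shape
# of its training data, e.g. [('cat', 0.5), ('mat', 0.25), ('fish', 0.25)].
print(predict_next("the"))
```

Scaled up by many orders of magnitude, this is the sense in which an LLM's ‘memory’ is the statistical shape of the data it was trained on – which is precisely why the provenance and politics of that data matter to the argument that follows.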
These technological definitions provide important insight into the scale and potential of AI, both positive and negative. Yet it is the context in which AI is being produced and implemented that is of interest here. Forged under the pressure of global capitalism, whose remit drives technological and cultural developments in service of processes of profit accumulation (Mandel 1990 [1976]), AI is at once developing from and diligently reproducing a particular set of structural conditions.
The growing presence and influence of big tech conglomerates are a contemporary realisation of Lenin's (2021 [1917]) theory that capitalism would produce monopolies, and that these would inhibit rather than encourage ‘healthy’ market competition. In January 2024, the Federal Trade Commission (FTC), the United States’ trade regulator, opened an investigation into whether the immense investment in AI technology from Microsoft, Amazon, and Google amounts to a breach of competition rules (Montgomery and Paul 2024). Meanwhile, Sam Altman, co-founder of OpenAI and AI de-regulation lobbyist, has said that AI will ‘most likely lead to the end of the world, but in the meantime, there'll be great companies’ (Lovely 2024). Thus, the rampant emergence and advancement of technologies such as ChatGPT, and any benefits or threats posed, are inseparable from the context of capital accumulation and monopolisation as systemic economic objectives and outcomes.
This context pervades and shapes political outcomes too. With UK regulators concerned by the potential for LLMs to embed biases and distort markets, the government is reportedly developing legislation that will regulate AI (Gross and Criddle 2024). This suggests a reluctant departure from its ‘pro-innovation’ rejection of regulation in the past (Mosolova 2023). The shift perhaps marks a recognition of the EU's ‘ground-breaking’ new AI Act [4], which aims to ‘set a global standard for AI regulation’ by classifying AI systems according to risk, with corresponding obligations and prohibitions.
It is worth noting that AI regulation has existed for some time, yet exemptions in the fields of policing, security, and migration services have left legislation vague, unable to give society greater democratic control, and permissive of private technology companies holding a stake in matters of public democracy (O'Shea 2024). In this setting, the AI past can have a grave impact on present and future realities for vulnerable groups.
Both EU and US AI and immigration policies have failed to protect the privacy and rights of migrants; even the details of the new AI Act concerning border technologies and immigration fall short of the human rights and privacy-based standards advocated within academic research (Mengesha et al. 2024; Molnar 2023, 2024a). The world of AI regulation, and the crossover between the private sector and public sector in the way AI is applied to our daily lives, is incredibly murky, as capitalist states wrestle with AI's usefulness (read: profitability) versus the need to ensure it is only used on their terms.
Under the guise of risk assessment, North African and Middle Eastern migrants crossing the Mediterranean to seek asylum in Europe during the last decade have had every step of their journeys scrutinised, categorised, and assessed using a range of unregulated and experimental technology including surveillance drones, AI lie detectors, and robo-dogs (Molnar 2023; Tyler 2022). This ‘increasingly lucrative border industrial complex’ is predicated on an ‘opaque and discretionary world’ of border policing and security underpinned by historical and systemic structures of racism and discrimination (Molnar 2023).
Here, AI decision-making technology, still in an experimental phase and in clear tension with questions of ethics and human rights, has been loaded with longstanding biases so that these may be amplified and applied to present political realities. Migrants have become ever more marginalised from the factors which determine their future, while an AI arbiter renders it from a political imagining of their past. In this highly racialised application of AI, asylum seekers yield all subjectivity to a two-pronged process of objectification: firstly, because their material conditions become determined by a ‘self’ not recalled or revealed but applied to them by AI, and secondly, because they are dehumanised to the point of being an object of capitalist technological experimentation.
There is no more horrific an example of this dehumanisation than that which Israel inflicts upon the Palestinian people, as part of an occupation that the United Nations General Assembly has deemed to be unlawful [5], and which Amnesty International refers to as a system of apartheid [6]. Israel's campaign in Gaza is being heard in the International Court of Justice (ICJ) under allegations of genocide [7], while prosecutors from the International Criminal Court (ICC) believe Israeli Prime Minister Benjamin Netanyahu could bear responsibility for alleged war crimes and crimes against humanity [8].
Israel, whose chief arms exporters include the UK, US, and EU states such as Germany and France [9], uses the ‘Lavender’ and ‘Where's Daddy?’ AI systems to produce targets for its bombing campaign in Gaza. The AI identifies (produces) targets through a debilitating and politicised surveillance of every aspect of Palestinian people's lives, and a dehumanising mechanism of social scoring that is now banned under the EU's own AI Act [10]. An investigation by +972 Magazine and Local Call revealed that the Israeli military barely scrutinised Lavender's decisions on bombing targets despite knowing that the system made errors around 10% of the time, and that the Where's Daddy? system specifically bombed targets once they had entered their family homes (Abraham 2024). In the age of the data commodity, a level of surveillance and threatened violence once considered dystopian is in fact a daily reality for the people of Palestine.
The ‘us’ that exists in data is incredibly valuable, and for those with control of the required technologies, it can be the determining factor in how and whether we exist. Downey (2021, 79–80) argues that while colonialism was built on a dehumanising process of occupation, labour, and wealth extraction that ‘deferred, if not truncated’ future realities, neocolonial data extraction and surveillance ‘establishes and, increasingly, pre-determines if not controls the future’. With this shifting character of imperialism in the age of AI, not only soil but cloud is ripe for colonisation. The future in these conditions is generated from data that is mediated by an algorithm rather than by anything resembling a transparent, let alone democratic, process.
Steyerl (2023) refers to a new ‘battle for the commons’, where ‘information, memories, [and] creativity’ exist in a chaotic digital public realm, owned by Big Tech and then rented back to us. It is an ‘open’ space that is in fact constrained by the implications of monopolistic control, a site of knowledge production claiming to benefit from common input yet in fact signalling an era of ‘automated common sense’ where ‘tech oligarchs consolidate their cultural hegemony through automated diffusion’. Data scraped from across the digital landscape holds the promise of diversity yet is instead stripped of its critical capacity and rendered homogenous by the general conditions of its lease. The AI memory is one of automated capitalist hegemony.
In her analysis of the deep-rooted biases and oppressive historical structures that underpin border technologies, Petra Molnar (2024b, 6) says: ‘Technology is often presented as being neutral, but it is always socially constructed. All technologies have an inherently political dimension’. Notwithstanding the already politicised nature of border policing, migration, and war, when we apply AI to decision-making processes, it is important to remember that the AI's capacity for objective reasoning is constrained by the conditions of its production and implementation. It is simply an incomprehensibly large-scale aggregation, perpetuation, and distortion of the information we give it and the reasons we do so. The capacity of AI to weigh up immense quantities of data creates an illusion of objectivity, or rationality, yet the conclusions it reaches about people's histories reflect a highly prejudiced logic, defined by the capitalist system.
Therefore, to cede control of memory to AI is to cede control of memory to capitalism and its beneficiaries. As I have shown, this tendency has a long and enduring lineage in the history of capitalist technological development. A central risk here, as I discuss below, is one of timing; AI is coming of age during a period where neoliberalism has sharpened capitalism's retrenchment of individual and collective agency.
Remembering is inconvenient: The neoliberal assault on society's critical faculty
My second core claim explores the relationship between our increased dependency on artificial and individualistic technological solutions, and our alienation from the conditions where democratic solution-building occurs.
As with the production of technology, our collective and personal memories similarly reflect the social conditions under which knowledge, understanding, and recollection occur. Tulving's (2002) conceptualisation of memory is useful here. Tulving argues that human memory is unique in its tendency to build on semantic memories (a storage of general facts) with episodic memories (personally experienced events). Semantic memories, the knowledge that events happened, are a mere starting point for episodic memories, our remembering and re-experiencing of how the event occurred and what it meant. The event is only given shape by its context, the set of social relations that prompt us to remember it in a certain way.
Neoliberal policies and subsequent social relations have reconstituted citizenship as the individualised and solitary pursuit of private wealth accumulation, at the expense of all other forms of social and cultural advancement (Harvey 2007, 35–36). In the UK, the post-war period was a time of relatively increased stability and reduced scarcity, which encouraged society to widen some of its democratic demands. To protect profits, neoliberalism sought to destabilise these conditions with a reinstatement of precarity. Gilbert and Williams (2022, 64–65) summarise this process in action:
Precarity, debt, and a generalised increase in average hours worked per week have created a situation in which groups and individuals simply have far less time and opportunity than they once had to engage in political organisation, struggle or reflection. None of this is accidental. (Emphasis added)
Neoliberal memory, then, is one of fragmentation and individuality. The resulting social world is one where democratic demands are replaced by consumeristic wants for tools that make life easier. Practical solutions for surviving crises are available and deliver immediate rewards; putting an end to crises is rather more complicated. There are apps to help us deal with everything, including problems created by other apps: multiple layers of a digital bureaucracy wherein everything is ostensibly being made easier to do, from ordering drinks on an airplane (Stewart 2023) to socialising (Cantor 2024). The defining purpose of these apps is utility; their essence is a commodification of hyper-individualised living, compelling us to buy more tech and forfeit more privacy year on year, app on app, and swipe on swipe (Hadero 2024).
This is evident from the technological solutions which simultaneously emerge from and create the loneliness crisis (Cantor 2024). Companionship apps such as Anya [11] and Replika [12] provide ‘the AI companion who cares … always here to listen and talk … always on your side … an AI companion who is eager to learn and would love to see the world through your eyes’ (Replika). Users report that the use of AI chatbots for relationships has been beneficial for their wellness, stimulating rather than displacing their real-life relationships, and even preventing suicidal action (Maples et al. 2024, 5). At the same time, these are users who may already be vulnerable and experience disproportionately high levels of loneliness, with an increased likelihood to view the Replika bot as more human than machine (Maples et al. 2024, 5). Indeed, critics argue that chatbots inhibit humans’ emotional development as they limit exposure to real-world relationships rooted in conflict, compromise, and self-improvement (Hadero 2024). Thus, the chatbots, which ‘see the world through your eyes’, encourage a myopic retreat from this aspect of public life.
But what are the implications of this kind of retreat for knowledge, memory, and collective consciousness? Jager (2024) applies Hal Draper's Marxist interpretation of ‘idiocy’, which describes the political apathy arising from private lives withdrawn from concern for public matters (Attoh 2017, 198). The theory holds that a retreat from public life into individualised pursuits amounts to an increased ‘idiocy’ in society, indicative not of reduced intelligence but of ‘a fundamentally private predisposition – a retreat from public life, which implie[s] a generally unreflective attitude toward one's own opinions and views, let alone a coherent ideology’ (Jager 2024).
Jager notes that this is not an AI-generated phenomenon. Rather, it is an iteration of a centuries-old implication of capitalism, and capitalism's destruction of physical and intellectual spaces for collective debate and conflict. Rooted in the American Dream's imperative that everyone pursues the solitary act of achieving financial wealth for themselves and their families alone, capitalism forces a retreat from collective endeavour. Neoliberalism sharpens this imperative, with practical implications, for example, on physical ‘third spaces’, such as the now decimated working-men's clubs of the late 20th century that were ‘designed neither for work nor consumption’ but for socialising (for example, watching a film together and talking about it) (Jager 2024).
Thus, the loneliness crisis and associated retreat from physical and intellectual spaces which encourage collective reflection on events – with a view to collectively debating their interpretation, meaning, and future implications – is a pre-existing phenomenon, a product of capitalism's decaying effect on public life. Yet it has an acute realisation in the age of AI. Of its contemporary, AI-driven iteration, Jager continues: ‘These [AI chatbot and dating app] fixes have both push and pull effects: once in existence, they rearrange the very notion of what intimacy means, while increased isolation only encourages more usage of the app’ (Jager 2024).
This highly alienating dependency occurs beyond the realm of relationship chatbots. Internet and social media addiction is redefining the meaning and importance of authenticity and history altogether. Apps like Upscaling History [13] use AI cloning to tell us what Hitler, Mussolini, and Lenin ‘would have sounded like in English’. There is an AI that tells us what it thinks Jack the Ripper's face would have looked like (Landsel 2024). These applications of AI do not hold history to account; they speculate, without scientific rigour, for entertainment. The gimmickification of history has arrived.
In a similar vein, Usher (2024) analyses the social media ‘content’ phenomenon as it occurs within hugely popular and lucrative boxing bouts involving social media ‘influencers’. We now have an algorithmically driven ‘cultural economy that rewards attention and engagement over artistry and genuine skill … It doesn't matter how competent these influencers are at fighting – as long as it's “good content” nobody cares’.
What does it take to be a good boxer? Can anyone remember? While being bombarded with social media content offering fragmented and surface-level realities, too many and too overwhelming to comprehend in any depth [14], is anyone likely to find out? With content engagement of greater commercial value than content comprehension, what hope is there for memories that don't fit the mould?
Chang and Lee (2024) observe that internet addiction in adolescents results in a decreased capacity to process semantic memories, encode memories, and plan using working memory. In this context, one where young people have limited space for individual and collective reflection, and an internet addiction that negatively impacts their cognition, memory has, at best, a puncher's chance. Meanwhile, society continues to spiral towards myopia, alienating its citizens ever further from meaning, truth, authenticity, and control.
Conclusion
The system which provides the framework and motivation for production is inseparable from that which is produced, be it knowledge, memory, interpretation, or technology. To understand AI, then, and its potential role in how individuals and society remember and forget events, and conceptualise their presents and futures, is to understand the ways in which AI developments and our capacity to engage with them are products of the system giving shape to this and every other structural aspect of our lives. It is therefore no coincidence that neoliberal society is increasingly structured on the basis that our algorithms, although highly personalised, serve a hegemonic worldview, one that affords users little consideration of the disparity between consumer choice and collective control. The concept of myopic memory that I have sketched out here aims to encourage critical reflection on where AI comes from, what it is being used for, and why. Any assessment of the merit or technological potential of AI must take this context into consideration.
Danny Pilkington is a postgraduate researcher of sociology at the University of Glasgow. His research interests include media power, ideology, and hegemony. His PhD thesis explores hegemony within media production and content, focusing on British media coverage of Latin American politics.