While artificial intelligence (AI) has its roots in 1950s decision science, it has burst into the public consciousness in the past two years.Footnote 1 With the public release of ChatGPT, the development and deployment of autonomous vehicles, myriad stories of police misidentifying suspects using facial recognition, and much more, citizens are increasingly aware of the potential consequences of unregulated AI. At the same time, businesses and governments alike are aware of the vast potential for AI to reshape national economies and global commerce.Footnote 2 As with any emergent and disruptive technology, governments must consider the policy balance between fostering innovation and preventing negative externalities. In fact, divergent framings of new technology by innovators and regulators shape their willingness to accept risk and can stifle the commercialization of new technology.Footnote 3
Our aim with this special issue on the future of AI politics, policy, and business is to give space to considering how these balances are, and perhaps should be, pursued by the public and private sectors. Ultimately, private firms and regulators will need to work collaboratively, given the complex networks of actors involved in AI development and deployment and the potential for the technology to alter existing policy regimes.Footnote 4 We begin the introduction to this special issue of Business & Politics with a discussion of the growth in AI technology use and debates over appropriate governance, followed by a consideration of how AI-related politics, policy, and business intersect. We then summarize the contributions of the authors in this issue and conclude with thoughts about how political science, public administration, and public policy scholars have much to offer to, as well as much to study in, the establishment of effective AI governance.
AI governance
Public organizations carry a distinct responsibility to provide constituents with public value. The revolutionary role of AI in the public sector, particularly through machine learning (ML) and deep learning, underscores the potential for automating processes within public organizations and enhancing human intelligence.Footnote 5 Combining AI with human resources to augment the delivery of public goods and services promises to enhance efficiency and effectiveness, delivering optimal public value. The European Commission officially defines AI as “systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals.”Footnote 6 Consequently, it is unsurprising to observe public organizations and the private sector collaborating to adopt AI to augment public services. Scholars have consistently noted shortcomings in public organizations, especially when attempting to balance efficiency and economic objectives, resulting in failures to deliver public value and meet expectations.Footnote 7
Scholars have also observed the consequences of releasing AI systems for public value delivery without understanding the opportunities and risks of deploying such systems in the public sector.Footnote 8 Despite these potential implications, and to address economic and social challenges, public managers have used discretionary decision-making to introduce advanced ML algorithms (e.g., deep learning) that process large amounts of data to improve predictions and decision-making.Footnote 9 However, discretionary decision-making carries broader considerations, particularly involving bureaucratic and citizen engagement, that should be weighed before introducing advanced technologies that may lead to unexpected consequences.Footnote 10
Some AI systems operate under human control, requiring guidance to engage in a creative and interactive process between humans and machines.Footnote 11 Others operate autonomously, learning independently and making decisions without direct human intervention.Footnote 12 Scholars researching AI’s implications in the public sector take note of areas where governments have introduced AI with strategic objectives.Footnote 13 With the proliferation of large data banks and powerful computing systems, AI can access real-world data; analyze, reason, and learn; and perform processes involving natural language, vision, robotics, neural networks, and even genetic algorithms.Footnote 14 It is not surprising to witness AI’s current use in general citizen services, financial and economic administration, environmental protection, transportation, energy, farming, and various other sectors.Footnote 15
Public organizations often face challenges due to a lack of human resources, which hinders their ability to provide high-quality services to citizens. The literature highlights the potential of AI to alleviate this resource constraint in public agencies by handling tasks such as responding to inquiries, navigating government services, searching documents, directing requests, translating, and composing documents.Footnote 16 For example, AI already assists citizens with renewing a driver’s license, navigating complex health and human services processes, and even interacting with elected officials.Footnote 17 The use of AI to assist citizens has relieved some pressure on government agencies, but it has also prompted concerns about digital privacy and security, particularly regarding the responsibility for protecting citizen data.Footnote 18 Another challenge is determining the ownership of the data citizens input into these AI systems. For AI to be efficient and effective, it often requires resources from various government agencies, making it crucial for public organizations to be accountable for safeguarding citizens’ data.Footnote 19 This shared reliance can cause confusion in public agencies, especially when multiple agencies are simultaneously held accountable for protecting the same citizens’ data.Footnote 20
Undoubtedly, AI stands as a promising technology poised to benefit both U.S. public organizations and the American people. However, it is imperative to harness this technology within a governance framework built on policies that safeguard U.S. citizens from potential data privacy violations and the intentional or unintentional misuse of the technology.Footnote 21 Recognizing the urgency of an AI policy, the White House Office of Science and Technology Policy took a significant step in October 2022 by issuing the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.”Footnote 22 This blueprint established a framework to guide the ongoing development and use of AI systems across both public and private organizations.
Blueprint for an AI Bill of Rights principles:

- Safe and effective systems
- Protection from algorithmic discrimination and inequitable systems
- Protection from abusive data practices and agency over personal data
- Knowledge and understanding of automated system usage and impacts
- Ability to opt out for a human alternativeFootnote 23
Together, these principles set out guidelines for safe and effective systems, protection from algorithmic discrimination, safeguards against abusive data practices, awareness of automated system usage and its impacts, and the ability to opt out in favor of a human alternative.
The introduction of this new U.S. policy on AI has much broader implications, including ethics and responsible AI for present and future technology, requiring collaborative governance and new public policy.Footnote 24 It also emphasizes the importance of establishing leadership and coordination between public and private organizations for emergent technologies such as AI algorithms.Footnote 25 To this end, the U.S. Government Accountability Office’s Science, Technology Assessment, and Analytics (STAA) team has a long history of leadership through external advisory boards that draw on a diverse group of scientific and engineering experts (Bailey, 2022).Footnote 26 This contemporary approach to governing AI aligns with a Bill of Rights that reflects society’s current and future needs, especially those affected by AI technology. But much work remains in determining how governments will balance the risks and benefits of AI technology across so many different sectors of the economy.
The discourse on AI governance at the international level is increasingly focused on establishing comprehensive norms and standards that address the rapid advancements and widespread applications of AI technologies. Formulating international security governance norms for AI is a highly complex endeavor. A major challenge is achieving consensus among global powers, particularly between China and the United States.Footnote 27 Further, there is an imperative to ensure responsible innovation through a set of shared ethical principles.Footnote 28 The collaborative efforts between the EU and the US in setting international regulatory standards reflect a strategic move towards a governance model that not only upholds shared democratic values but also proactively addresses the multifaceted risks associated with AI technologies.Footnote 29
Moreover, the aspect of political legitimacy in global AI governance is critical, with scholars calling for democratic processes to ensure the legitimacy of AI governance mechanisms.Footnote 30 The inclusion of Global South stakeholders in the AI governance dialogue brings to light the importance of an inclusive and equitable approach, highlighting the need for systemic restructuring to bridge existing governance gaps.Footnote 31 This inclusive approach is pivotal for crafting a global AI governance framework that is both responsive and responsible, addressing key concerns such as infrastructural and regulatory monopolies and ensuring that the benefits of AI technologies are equitably distributed.
AI, politics, policy, and business
As noted above, seemingly overnight, non-technology business leaders, politicians, and regulators have all experienced the explosion of AI.Footnote 32 These decision-makers now juggle questions about productivity gains, employment reductions, automation, and accountability. The “no code” nature of ChatGPT also made it possible for a wide swath of the population to suddenly interact with artificial intelligence directly, further amplifying the AI discourse.
AI’s disruption of the business world is immediately obvious. Businesses now need to consider how and when jobs will change because of AI. Almost immediately after the launch of ChatGPT, stories appeared about work disappearing for people doing tasks like basic technical writing.Footnote 33 We do not know exactly what these medium-term employment changes and complements will look like, but it is already clear that employment risk differs across countries, worker demographics, geographic areas, and job types.Footnote 34 Management consultants predict an era of increased occupational transition as automation comes for more job types.Footnote 35 For young professionals, there is a specific risk that AI will make it more difficult to move beyond “entry level” jobs, because a constant stream of newcomers will be able to gain basic skills more quickly with AI assistance.Footnote 36 It is worth noting that AI is changing perceptions of “safe” careers, as “knowledge workers” join the ranks of those at risk from automation, and that change may garner outsized attention from politicians and policy makers.
There are already isolated cases of AI use and misuse in more consequential professional tasks like legal arguments.Footnote 37 These occurrences cross over into the world of politicians and policymakers, who must consider whether AI should be allowed to submit legal arguments, write news articles, or drive vehicles. These disruptive changes happen suddenly and intermittently, and there is the constant risk that overregulation of emerging technologies will prevent productive use as the technologies mature. The implications for business and politics feed into the need for good policy, and this is clearly a situation where technological innovation outpaces regulatory innovation. This technical reality, combined with the possibility of different rules in different localities (or the preemption of different rules from above), contains all the ingredients of a regulatory nightmare.
Media coverage often focuses on AI mishaps, such as AI hallucinations, physically impossible pictures, self-driving accidents, deepfakes, and other ineffective or dangerous applications. As the pieces in this special issue highlight, focusing on these AI missteps may conceal the many ways that AI is already working seamlessly and causing meaningful change, and may lead to ineffective regulation that does little to control the less newsworthy changes that AI brings. By no means should we ignore “AI gone wrong,” but regulators should also focus on AI uses that quickly normalize without becoming part of the AI discourse. Oftentimes, these technological changes are already ubiquitous before regulators and the public are even aware of them, and limitations on use will lead to the deprecation of tools people are already familiar with, even if they were unaware of the role of AI and the associated data collection or privacy violations. Such cases are likely to occur more often as AI development matures.
As businesses navigate the seismic shifts brought about by AI, their role extends beyond internal adaptations. They are increasingly becoming key players in the shaping of AI policies and regulations.Footnote 38 This involvement is crucial, as the rapid evolution of AI technologies demands a collaborative approach to governance that includes insights from industry leaders alongside policymakers and regulators.Footnote 39 Business leaders, recognizing the profound implications of AI on their operations and the broader industry landscape, are actively engaging in dialogues around ethical AI use, data privacy, and the equitable deployment of AI technologies.Footnote 40
This engagement is not merely a matter of compliance; it is a strategic imperative.Footnote 41 Companies at the forefront of AI adoption are leveraging their expertise to influence policy frameworks that foster innovation while safeguarding against potential harms. By contributing to the policy discourse, businesses help ensure that regulations are informed by practical insights and are adaptable to the pace of technological advancement.Footnote 42
Moreover, AI is not just altering existing industries; it is creating entirely new categories of services and products, thereby reshaping market dynamics and competitive landscapes.Footnote 43 From healthcare to finance, AI’s integration is enabling more personalized services, enhanced decision-making capabilities, and operational efficiencies.Footnote 44 However, as industries transform, so too do the regulatory challenges they face. Businesses are therefore not just participants in policy discussions; they are co-creators of the regulatory environment that will define the future of AI in industry.Footnote 45
In this context, the collaboration between businesses, policymakers, and regulatory bodies becomes a critical factor in ensuring that AI development is both innovative and responsible.Footnote 46 This collective effort is essential to balance the economic and social benefits of AI with the need to address ethical considerations and potential risks, thereby paving the way for a future where AI contributes positively to both industry growth and societal well-being.Footnote 47
Themes of the special issue
While there are myriad directions that one can consider when reflecting on AI politics, policy, and business, the articles in this special issue coalesced around two themes. The first theme relates to the challenges of regulating emergent AI technology. The second is also about regulation, but from the perspective of subnational governments. More specifically, these papers consider the prominent role of the American states in AI governance innovation in the United States. Given the place of the U.S. in global politics and commerce, this means that the states have a role in the future of global AI policy.
Regulation
Han considers the decisions by national governments to implement data localization.Footnote 48 Data are increasingly considered a strategic asset by governments. This has significant implications for emergent AI technologies that require large amounts of data for algorithm training. The impacts of AI blur the boundaries between public and private sectors. Countries have adopted data localization rules that require data to be stored domestically, both to foster technological innovation and to protect sensitive data. However, data localization is recognized as a burden on businesses and a drag on economic productivity. Han compares three cases—Vietnam, Singapore, and Indonesia—to consider why states localize. The argument has two parts: harnessing the economic benefits of networks and security externalities. First, if states have a negative perception of the network of platforms, their malleability, and economic benefits, this increases the likelihood of data localization. Second, when domestic and/or foreign platforms are perceived to threaten domestic security, states are also more likely to implement data localization. Ultimately, the result emerges from a complex interplay of economic and national security concerns. The cases further illustrate the strategic nature of data localization decisions and how countries influence each other’s policy responses to AI.
Kennedy, Ozer, and Waggoner address the extent to which algorithm-assisted decision making by governments may erode public trust and accountability.Footnote 49 Particularly within the criminal justice system, both ethicists and the public have expressed concerns about racial bias and accuracy in algorithm-driven decision-making.Footnote 50 The authors investigate this with three pre-registered survey experiments on representative samples of the U.S. population. They find that respondents neither displace blame when a judge makes a mistake by concurring with an algorithmic decision nor magnify blame when a judge makes a mistake after ignoring algorithmic input. That said, there are conditional effects based on respondents’ level of trust in experts. Those with greater trust in experts are more likely to blame them for mistakes made after ignoring the algorithmic decision.
Tallberg, Lundgren, and Geith consider the views of non-state actors on the European Union’s (EU) groundbreaking AI Act.Footnote 51 As the EU’s actions on AI are considered standard-setting and a front-runner for national AI policies globally,Footnote 52 the lessons from this work render insights for future political conflicts. Dividing actors based on whether they are motivated by profit, the authors find that while profit-driven actors (i.e., businesses) are critical of AI regulations that might inhibit innovation (which is to be expected), that relationship is conditional on the strength of a nation’s commercial AI sector. Importantly, all actors recognize the need for some regulation of emergent technology. Governments in countries with growing commercial AI sectors may find themselves in the difficult position of having the greatest need for AI regulation while facing the greatest resistance from for-profit non-state actors in adopting it.
Subnational innovation
Parinandi, Crosson, Peterson, and NadarevicFootnote 53 and Mallinson, Azevedo, Best, Robles, and WangFootnote 54 consider the substantial role that the American states will play in setting AI policy in the United States. To date, the U.S. national government has taken a largely hands-off approach to AI regulation. Thus, some states are taking an active role in incentivizing and/or regulating the industry. Parinandi et al. focus on the politics of policy adoption. They argue that parties operating in the U.S.’s hyperpolarized political environment are likely to latch onto aspects of AI policy that match their brands. Using explanatory modeling of roll call votes and bill adoptions, they show that both the economy and politics have shaped AI regulation in the states. Namely, Democratic legislators are more supportive of AI legislation that includes consumer protection, but AI legislation is less likely during times of high unemployment and inflation.
By focusing on state autonomous vehicle (AV) policy, Mallinson et al. argue that state-level policy experiments will be the future of AI policy in the United States. In making this argument, they consider the substantial regulatory fragmentation that exists in the United States due to federalism and the separation of powers. This results in a dynamic environment that affects the market and nonmarket strategies of firms. Furthermore, such regulatory fragmentation, which results in inconsistent policies across states, raises significant equity concerns. The case of AV policy supports each of these arguments, while also raising concerns about the administrative burdens of layering new AI policies on top of existing laws. They conclude by proposing a research agenda centered on state AI policy.
Conclusion
As AI technology rapidly shifts how many industries operate globally, governments are struggling to find a way forward in developing flexible, yet protective, policies. The exact balance of acceptable benefits versus risks will ultimately differ across political geographies, but efforts are also being made to establish more general governance principles.Footnote 55 Scholars in political science, public administration, and public policy have much to offer in theorizing and understanding the myriad implications of AI and in making recommendations on governance. However, truly convergent science that bridges these studies with those in ethics, management, human resources, business administration, and more will also be required.Footnote 56 This special issue is one of the many that will be required to give space to working out ideas on AI politics, policy, and business.