Responsible AI in Public Administration:
Challenges at the Interface of Law, Governance, and Technology
21 Oct 2024 to 31 Mar 2025


Cambridge Forum on AI: Law and Governance publishes content on the governance of artificial intelligence (AI), from law, rules, and regulation through to ethical behaviour, accountability, and responsible practice. It also examines the societal impact of such governance and how AI can be used responsibly to benefit the legal, corporate, and other sectors.

Following the emergence of generative AI and broader general-purpose AI models, there is a pressing need to clarify the role of governance, to consider mechanisms for the oversight and regulation of AI, and to discuss the interrelationships and shifting tensions between the legal and regulatory landscape, ethical implications, and evolving technologies. Cambridge Forum on AI: Law and Governance uses themed issues to bring together voices from law, business, applied ethics, computer science, and many other disciplines to explore the social, ethical, and legal impact of AI, data science, and robotics, and the governance frameworks they require.

Cambridge Forum on AI: Law and Governance is part of the Cambridge Forum journal series, which progresses cross-disciplinary conversations on issues of global importance. 

The journal invites submissions for the upcoming Themed Issue, “Responsible AI in Public Administration: Challenges at the Interface of Law, Governance, and Technology”, guest edited by Jason Grant Allen (SMU Yong Pung How School of Law, Centre for AI & Data Governance) and David Lo (SMU School of Computing and Information Systems, Centre for Research in Intelligent Software Engineering).

The deadline for abstract submissions is 31 March 2025. The deadline for full paper submissions is 31 August 2025. 

Theme and Selection Criteria

Artificial Intelligence (AI) is rapidly transforming the field of public administration—from decision-making in public agencies to the management of public infrastructure and services. The themed issue will explore the interdisciplinary challenges at the intersection of law, governance, and technology in ensuring the responsible deployment of AI in the public sector. 

In doing so, it seeks to address two key questions:

  1. What are the demands of Responsible AI (RAI) in public administration?
  2. How can these demands be operationalised effectively?

In tackling these questions, the issue will address the complexity of aligning open-ended and principles-based normative frameworks with the technical and operational demands of AI. Furthermore, it will explore the limits of technical interventions and the tools needed to maintain accountability when deploying algorithmic and data-driven decision-making systems in public administration. Potential contributions to the issue could address, but are not limited to, topics such as:

  • Practical and normative challenges in operationalising RAI in public administration.
  • Comparative legal and governance frameworks for AI deployment in public services.
  • Interdisciplinary models for ensuring accountability in AI-driven decision-making.
  • Algorithmic governance and its effects on existing administrative frameworks.
  • Human oversight and control mechanisms in AI-enabled public services.
  • Case studies exploring the application of RAI principles in specific public administration contexts.
  • Differences and similarities between RAI in public administration and other (e.g., corporate) contexts.


Irrespective of the substantive topic, we welcome submissions that engage with the extent to which the demands of RAI can be operationalised in technical infrastructure (e.g., data, models, guardrails, constitutional AI) and with the complementary (non-technical) approaches and methods that might be used (e.g., HMI, organisational design, governance standards, external accountability mechanisms). In particular, we welcome submissions that explore how technical and non-technical modalities of AI governance work together to promote the relevant principles of good administration. To this end, we encourage papers that take a comparative and interdisciplinary perspective, aiming to move beyond a two-track model in which “lawyers/ethicists” and “technologists/businesspeople” work in isolation. Instead, we seek to explore approaches that integrate governance perspectives directly into the AI development and operations cycle.

Submission guidelines

Cambridge Forum on AI: Law and Governance seeks to engage multiple subject disciplines and promote dialogue between policymakers and practitioners as well as academics. The journal therefore encourages authors to use an accessible writing style.

Authors have the option to submit a range of article types to the journal. Please see the journal’s author instructions for more information.

Articles will be peer reviewed for both content and style, and will be published digitally and open access in the journal.

Please submit abstracts of no more than 300 words to [email protected] by 31 March 2025 with the subject line "Cambridge Forum on AI Themed Issue". Accepted paper submissions should then be made through the journal's online peer review system by 31 August 2025. Authors should consult the journal’s author instructions prior to submission.

All authors will be required to declare any funding and/or competing interests upon submission. See the journal’s Publishing Ethics guidelines for more information.

Contacts

Questions regarding submission and peer review can be sent to the journal’s inbox at [email protected]. Questions regarding the Themed Issue can be sent to [email protected].