Individual safeguards in the era of AI: fair algorithms, fair regulation, fair procedures
Guest Editors: Ljupcho Grozdanovski[1] and Jérôme De Cooman[2]
Artificial Intelligence (AI) is rapidly transforming various sectors of society, raising questions about fairness, equity, and justice in the development and deployment of these technologies. As AI systems increasingly influence decisions that impact individuals and communities, it is crucial to ensure that these systems are designed and implemented in ways that promote fairness and prevent harm. For this themed issue of the Cambridge Forum on AI: Law and Governance, we invite authors to submit papers that explore the dimensions of fairness from the individual’s perspective, examining the types, methods, and effectiveness of individual safeguards that should be provided across the various sectors where AI technologies are becoming prominent and where the risk of harm is increasingly significant.
Researchers at all career stages (doctoral, post-doctoral, Assistant and Associate Professors) active in IT, the social sciences (sociology, political science, law and economics) and the humanities (philosophy, history, linguistics) are encouraged to present submissions in line with, but not limited to, the following topics:
- Theoretical foundations of fairness in AI: exploring philosophical and ethical frameworks for the definition and assessment of fairness in the field of AI.
- Algorithmic bias and discrimination: examining the sources, detection, and mitigation of bias in AI systems, and their impact on marginalized communities.
- Fairness in AI applications: case studies and analyses of fairness in specific AI applications, such as healthcare, criminal justice, finance, education, and hiring.
- Regulatory and policy perspectives: investigating the role of regulation and policy in promoting fairness in AI, including discussions of existing and proposed legal frameworks.
- Human-AI interaction: understanding how fairness can be integrated into the design of AI systems that support human decision-making.
- Transparency and accountability: addressing the importance of transparency, explainability, and accountability in ensuring fair outcomes of AI systems and of judicial instances dealing with those systems.
- Intersectional approaches to fairness: considering how intersecting social identities (e.g., race, gender, class) influence the fairness of AI systems and the experiences of different groups.
- Public perceptions and trust: uncovering how public perceptions of AI fairness influence trust and acceptance of AI technologies.
- Global perspectives on fairness in AI: exploring how fairness in AI is conceptualized and implemented across different societal, political, cultural and geographical contexts.
All submissions must be in English and include the following:
- Cover page: full name(s), affiliation(s), contact information and title of the submission.
- Paper: 6,000 to 10,000 words, footnotes included (a separate list of references is not mandatory); font Times New Roman, 12 pt, single-spaced.
- Short CV (max. 500 words).
- References, where provided, should be complete and consistently follow APA style.
For additional information, interested authors are invited to consult the author guidelines available on the Cambridge Forum on AI: Law and Governance website.
All submitted papers will undergo double-anonymous peer review. The reviewers will assess the formulation, originality and relevance of the selected topics, as well as the quality and coherence of the main arguments (and the normative claims, if any) presented by the authors. Papers selected for the themed issue will be published in Cambridge Forum on AI: Law and Governance.
[1] Associate Research Professor FNRS/University of Liège.
[2] Associate Professor, University of Liège.