
Artificial Intelligence and Sentencing Practices: Challenges and Opportunities for Fairness and Justice in the Criminal Justice System in Sri Lanka

Published online by Cambridge University Press:  31 January 2025

Muthukuda Arachchige Dona Shiroma Jeeva Shirajanie Niriella*
Affiliation:
Department of Public and International Law, Faculty of Law, University of Colombo, Sri Lanka

Abstract

Artificial intelligence (AI) is increasingly being integrated into sentencing within the criminal justice system. This research examines the impact of AI on sentencing, addressing the challenges and opportunities it poses for fairness and justice. The central problem explored is AI's potential to perpetuate biases and thereby undermine fair-trial principles. The study assesses AI's influence on sentencing, identifies legal and ethical challenges, and proposes a framework for the equitable use of AI in judicial decisions. Key research questions include: (1) How does AI influence sentencing decisions? (2) What concerns arise from the use of AI in sentencing? (3) What safeguards can mitigate those concerns and prejudices? Using a qualitative methodology that combines doctrinal analysis and comparative study, the research finds that AI can enhance sentencing efficiency but also risks reinforcing existing biases. The study recommends robust regulatory frameworks, transparency in AI algorithms, and judicial oversight to ensure that AI supports justice rather than impedes it, advocating a balanced integration that prioritizes human rights and fairness.
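
The abstract's concern that AI risk tools may "reinforce biases" corresponds to a concrete, auditable quantity: whether a tool's error rates differ across demographic groups. The sketch below is not from the article; it uses invented data and a hypothetical tool purely to illustrate one such audit, comparing false positive rates (non-reoffenders wrongly flagged as high risk) across two groups, the kind of disparity at the centre of well-known debates over recidivism instruments.

```python
# A minimal, illustrative bias audit for a hypothetical sentencing risk tool.
# All records below are invented for demonstration; nothing here comes from
# the article or from any real risk-assessment system.

from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, True),  ("group_b", False, False),
]

def false_positive_rates(rows):
    """False positive rate per group: the share of people who did NOT
    reoffend but whom the tool nevertheless flagged as high risk."""
    fp = defaultdict(int)   # flagged high risk, did not reoffend
    neg = defaultdict(int)  # all who did not reoffend
    for group, flagged, reoffended in rows:
        if not reoffended:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

for group, rate in sorted(false_positive_rates(records).items()):
    print(f"{group}: FPR = {rate:.2f}")

# A large gap between groups' false positive rates is the sort of disparity
# that motivates the article's calls for algorithmic transparency and
# judicial oversight.
```

Such an audit presupposes access to the tool's inputs and outputs, which is exactly why the transparency requirements recommended in the study are a precondition for meaningful oversight.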


Type: Article
Copyright: © International Society of Criminology, 2025

