Future-Proofing the Legal Profession: Fostering AI Understanding, Competence, and Skills across the Criminal Justice Sector
Advances in AI are finding a wide range of applications in the legal sector. Professionals can already draw on various technologies, including natural language processing (e.g. tools for voice-to-text conversion, text processing, and translation engines), generative AI (e.g. large language models such as ChatGPT), process automation, biometric recognition, and automated decision-making.
Whilst the availability of AI systems continues to grow, capacity-building efforts to ensure that legal professionals have access to appropriate training, so that they can make informed choices and benefit fully from the opportunities AI offers, are noticeably lagging behind. A recent global survey of judicial operators, including judges, prosecutors, and lawyers, spanning 96 countries and commissioned by the United Nations Educational, Scientific and Cultural Organization (UNESCO) shows that whereas 92 per cent of respondents are familiar with AI tools and 44 per cent are actively using them in their everyday practice, only 9 per cent report that their organizations have issued guidelines or provided AI-related training.[1] This is problematic, particularly in light of the various human rights risks that the use of AI systems can raise, such as algorithmic bias resulting in discrimination or unfair treatment, and inaccuracies that can negatively affect the judicial process.
The US National Center for State Courts (NCSC) has issued a guidance document to raise awareness of the possible uses of AI and generative AI in the context of criminal justice, elucidating the potential limitations and risks of these technologies and highlighting approaches and measures that can be adopted to mitigate such concerns.[2] UNESCO has developed a comprehensive practical toolkit for judicial professionals comprising four modules, which can be adapted to different training formats ranging from facilitated workshops and webinars to self-study.[3] Another pertinent initiative is the collaborative toolkit for ethical AI innovation in law enforcement developed by INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI), which seeks to support the responsible implementation of AI technologies in the realm of security, policing, and the fight against crime.[4]
Making high-quality AI-related training for judicial practitioners widely available is an essential first step towards harmonizing standards and practices across countries. To enhance the effectiveness of ethics training for AI use, it is recommended that greater attention be given to the development of “actual harms” frameworks vis-à-vis principle-based frameworks. Unlike principle-based frameworks, which focus mainly on the general intentions of AI development, an “actual harms” approach aims to identify, measure, and analyze the specific harm and then carefully and thoroughly address it before implementing the AI system.[5] Applying this approach in the context of correctional services, for example, entails understanding how AI-related risks such as algorithmic bias may negatively affect certain individuals or prison populations, and taking proactive steps to limit these negative impacts before introducing the AI system.
The FAIR-PReSONS initiative features a strong training development and delivery component and will contribute to efforts to ensure that judicial practitioners are better prepared and equipped to take full advantage of the opportunities offered by advancing AI technologies in ways that promote ethical innovation and uphold human rights.
Article provided by the Center for the Study of Democracy
[1] UNESCO. (2024). UNESCO Survey Uncovers Critical Gaps in AI Training Among Judicial Operators. Press release, 19 June.
[2] AI Rapid Response Team of the National Center for State Courts. (2024). Artificial Intelligence: Guidance for Use of AI and Generative AI in Courts. (NCSC), 7 August.
[3] UNESCO. (2023). Global Toolkit on AI and the Rule of Law for the Judiciary. (UNESCO).
[4] INTERPOL and UNICRI. (2025). Toolkit for Responsible AI Innovation in Law Enforcement. (UNICRI), 12 March.
[5] Cameron, R. (2024). Correctional AI: Promise, Risks, and the Way Forward. Corrections Today, Fall, 24-29.