Artificial Intelligence and Criminal Justice: Why Legal Professionals Need a Code of Conduct
The rapid integration of artificial intelligence into criminal justice systems is one of the most significant developments in modern legal culture. Algorithmic tools are already being used to predict likely case outcomes, assess the risk of recidivism, and support judicial decision-making. This technology promises greater efficiency and rationality, but it also raises serious concerns about transparency, fairness, and the protection of fundamental rights.
At the regulatory level, the European Union has already sought to address these challenges through Regulation (EU) 2024/1689 (AI Act), which introduces a risk-based approach. In this context, artificial intelligence systems used in criminal justice, particularly for risk assessment or to support judicial decisions, are classified as “high-risk” and are subject to strict obligations regarding transparency, accountability, and human oversight.
However, the existence of a comprehensive legislative framework is not sufficient on its own to ensure the fair and reliable use of artificial intelligence in the criminal justice system. The opacity of algorithms, the potential for bias to be embedded in the data, and the difficulty in understanding the results produced create an environment where formal compliance with the law does not guarantee substantive justice. Furthermore, the use of such tools directly affects fundamental rights, such as personal liberty, equality before the law, and the right to a fair trial.
This highlights the need to adopt a Code of Conduct specifically tailored to legal professionals. Such a code does not seek to replace legislation but to supplement it, providing practical guidelines for the responsible use of artificial intelligence systems. Its contribution is crucial: it bridges the gap between abstract legal rules and everyday professional practice.
The fundamental principles that should govern the use of artificial intelligence in criminal justice are multifaceted and interrelated. First, human oversight must remain central, ensuring that final decisions are made by human beings and not by algorithmic systems. Second, transparency and explainability are essential so that both legal professionals and citizens can understand the reasoning behind the generated assessments. Third, the principle of non-discrimination requires avoiding the reproduction of social inequalities through biased data or models. Fourth, accountability requires a clear allocation of responsibility for every decision influenced by artificial intelligence.
Beyond these, two further principles are of particular importance: proportionality, which requires that the use of technology be limited to what is strictly necessary, and technical robustness and reliability, which concern the accuracy and security of systems. Finally, the principle of contestability is crucial, as it guarantees those affected by an algorithmic assessment the right to review and challenge it.
Artificial intelligence cannot and should not replace legal professionals; it must remain a tool that supports, rather than supplants, human judgment. In this context, the adoption of a Code of Conduct constitutes a significant step toward the responsible and human-centered use of artificial intelligence.
The transition to a digitally enhanced criminal justice system is not merely technological. It is deeply institutional and value-based. And in this transition, ethics is not an option, but a necessity.
Article provided by DUTH