Socio-technical AI and the Reform of Recidivism Prediction
Risk assessment tools are used across multiple stages of the criminal justice process, for example in diversion from prosecution, pre-trial bail or remand decisions, the classification and management of individuals in custody, and probation or parole supervision. Within this landscape, the tools can support the work of various justice professionals, from prosecutors and lawyers to court researchers and probation officers. This article, however, centres on their use in courts, examining specifically how judges engage with these tools as part of their decision-making. Within criminal justice systems, judges are entrusted with making a series of intuitive calculations to predict the probability of future reoffending and how an individual will behave in the community (DeMichele et al., 2021).
These high-stakes decisions, whether they concern pre-trial conditions, sentencing, rehabilitation or release, are made under conditions that are rarely ideal: information can be incomplete, record-keeping fragmented across agencies (police, probation, social or health services), or historically shaped by inequalities that reproduce themselves in the data used, such as systematic biases in police surveillance, arrests and convictions. Risk assessment is employed at various stages of judicial case management to predict outcomes and, in turn, support more “reliable” and informed decision-making. How that assessment is approached, however, has direct implications for the fairness and transparency of judicial systems.
Early practice relied on unstructured professional judgement. These so-called first-generation assessments were based primarily on personal views or professional “gut feelings” shaped by education and experience (Bonta, 1996); as a result, the approach lacked standard protocols and is considered highly susceptible to bias. Actuarial approaches, also referred to as second-generation risk assessments, instead classify risk using structured statistical models, which can bring greater consistency and standardisation to decision-making (Barbaree et al., 2006; Harris et al., 2003). More recent developments include third-generation structured risk/needs instruments, which integrate clinical judgement with actuarial methods, and fourth-generation tools, which build on this approach by more effectively accounting for individual responsivity to interventions.
Together, these instruments support treatment planning and service delivery, in addition to informing case management decisions and the appropriate level of supervision throughout the course of a case (Burman et al., 2007). Although such instruments are generally regarded as objective and authoritative in judicial and correctional contexts, the scores they produce can still misrepresent risk in ways that undermine their perceived legitimacy (Esthappan, 2024), for instance by misreading relevant risk factors, socio-demographic and gender disparities, or long-term trends. A growing body of research highlights that when judicial professionals rely on datasets that encode incomplete data and systemic biases, any assessment of risk built on that data can mirror and reinforce those inequalities, unless design, governance and use are explicitly oriented toward fairness and transparency (Arowosegbe, 2023; Dancy & Zalnieriute, 2025; FRA, 2020).
Risk Assessment in the Era of AI
In the age of machine learning, risk assessments are entering an era of AI-based prediction, with growing use of algorithms trained on large datasets to produce risk profiles. This shift brings both the potential for improved accuracy and heightened concern about opacity and discrimination. The widely discussed State v. Loomis (881 N.W.2d 749) case illustrates these challenges: a defendant challenged the use of a proprietary risk-assessment score in sentencing because neither he nor the court could scrutinise the factors and weights used to generate it (State v. Loomis, 2016). On discrimination specifically, subsequent investigations, such as the analysis of a proprietary risk-assessment system applied to defendants in Broward County, Florida, found that although overall accuracy was similar across racial groups, false-positive and false-negative error rates differed markedly: Black defendants who did not reoffend were nearly twice as likely to be misclassified as high risk (Larson et al., 2016). Such disparities are a mathematical consequence of balancing competing fairness metrics in the presence of different base rates of recidivism (Chouldechova, 2017).
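The underlying arithmetic can be made explicit. Chouldechova’s analysis shows that, for any binary risk classifier, a group’s false-positive rate (FPR) is tied to its base rate of reoffending p, its positive predictive value (PPV) and its false-negative rate (FNR) by a single identity; the numerical values below are illustrative only, not drawn from the Broward County data.

```latex
% Identity linking error rates for a binary risk classifier
% (Chouldechova, 2017); p is a group's base rate of reoffending.
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot(1-\mathrm{FNR})
\]
% If two groups share the same PPV and FNR but differ in base rate,
% their false-positive rates cannot be equal. Illustrative values
% (PPV = 0.6, FNR = 0.3):
\[
  p = 0.40:\; \mathrm{FPR} \approx 0.31, \qquad
  p = 0.25:\; \mathrm{FPR} \approx 0.16
\]
```

In other words, once base rates differ between groups, a tool calibrated to give equal predictive value must distribute its errors unequally; the disparity is structural, not merely an artefact of sloppy modelling.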
Principles to address these issues are now reflected in federal guidance, such as the United States Department of Justice’s analysis of AI in criminal justice (U.S. DoJ, 2024), which highlights the need for transparency, accountability and safeguards against bias in predictive tools and risk assessments. Similarly, the OMB’s Memorandum M-24-10 (“Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”; Office of Management and Budget, 2024) and the National Institute of Standards and Technology (NIST) AI Risk Management Framework (NIST, 2024) reinforce expectations of explainability, fairness and rights-protective deployment across high-impact public-sector applications, including correctional settings.
In parallel, inconsistent definitions of recidivism and heterogeneous recording practices complicate cross-case and cross-jurisdiction comparisons, the very context in which judges and policy-makers seek coherence (Yukhnenko et al., 2023). Institutions such as the European Union Agency for Fundamental Rights (FRA) have further warned that AI-assisted tools used by public authorities can affect fundamental rights where datasets or models are opaque or unrepresentative, underscoring an obligation to ensure non-discrimination, transparency and meaningful human oversight (FRA, 2020). The Council of Europe similarly stresses, in its European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, that although digital tools, including AI in courts, are becoming more prevalent, human oversight remains essential (CEPEJ, 2018). This aligns with the principle of “human control”, which holds that ultimate responsibility and decision-making authority must rest with human decision-makers to safeguard professional autonomy and citizens’ rights (Democritus University of Thrace, 2025).
Legal consideration must also be given to the EU AI Act (Regulation (EU) 2024/1689) and to Recommendation CM/Rec(2024)5 on the ethical and organisational aspects of the use of AI and related digital technologies by prison and probation services. Both emphasise the necessity of transparency, traceability, equality and non-discrimination, requirements that presuppose high-integrity data and explainable decision support (Entcheva & Mazilescu, 2024). The Commission’s guidelines on prohibited artificial intelligence practices, in turn, carve out a specific exception for supporting human-led risk assessment and the prediction of criminal offences, provided such assessments are based on objective and verifiable facts directly linked to criminal behaviour (Article 5(1)(d) AI Act, p. 66; European Commission, 2025). To uphold these standards, judges and decision-makers can benefit from AI tools for recidivism prediction that support them in recognising criminogenic and non-criminogenic patterns, identifying blind spots, and reasoning with clearer, more coherent and robust evidence based on historical data. Such tools should enhance, rather than replace, judicial autonomy and accountability.
Achieving Fair and Transparent Assessments: The FAIR-PReSONS Solution
To support judges in distinguishing lower-risk individuals who may not need custody from higher-risk individuals, where attention and support can most realistically reduce harm and improve reintegration outcomes, AI-based risk prediction tools should meet two basic conditions: transparency and bias control. Transparency requires that the factors informing an assessment, and the limits of what it can conclude, are clear enough for judges and practitioners to scrutinise and challenge.
Bias control means checking for, and where possible reducing, uneven error patterns or disparate impacts across groups, rather than relying on overall accuracy alone, as sketched below.
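As a minimal illustration of such a check (toy data and names invented for the example; this is not the FAIR-PReSONS implementation), per-group false-positive and false-negative rates can be compared instead of a single accuracy figure:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Per-group FPR/FNR from (group, reoffended, flagged_high_risk) triples.

    A single overall accuracy score can hide the fact that one group
    absorbs most of the false positives while another absorbs most of
    the false negatives.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, reoffended, flagged in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not flagged:
                c["fn"] += 1   # missed a true reoffender
        else:
            c["neg"] += 1
            if flagged:
                c["fp"] += 1   # flagged someone who did not reoffend
    return {
        g: {"FPR": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "FNR": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }

# Invented toy data: (group, reoffended, flagged_high_risk)
sample = [("A", False, True), ("A", True, True), ("A", False, False),
          ("B", False, False), ("B", True, False), ("B", True, True)]
print(error_rates_by_group(sample))
```

Reporting these rates side by side, rather than one aggregate figure, is what makes disparate error patterns visible in the first place.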
This logic underpins the FAIR-PReSONS solution, being developed by an international partnership of risk-assessment experts, AI developers, regulators and justice-sector staff, and built around four core priorities. The first is improving data quality through a systematic methodology for the collection, digitisation and semantic enrichment of data from offender management systems in Greece, Portugal and Bulgaria. The second is fairness, including the integration of gender-sensitive variables and the identification of disparities. The third is explainability, so that participants can interpret an assessment rather than defer to it. The fourth is capacity-building, because no justice-sector tool can be used responsibly unless the people using it are trained to understand both its value and its limits. These priorities align closely with the strict requirements governing the processing of personal data for criminal justice purposes in the European Union: Article 10 of the GDPR restricts the processing of data relating to criminal convictions and offences to circumstances under the control of an official authority, or pursuant to Union or Member State law providing appropriate safeguards.
To ensure GDPR alignment, FAIR-PReSONS incorporates a privacy-by-design approach in its development: sensitive identifiers are minimised; access is restricted; and synthetic (Portugal) or anonymised (Greece and Bulgaria) datasets are used for development and testing wherever possible. Documented data flows and robust audit trails further ensure compliance and accountability.
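A minimal sketch of the identifier-minimisation step, assuming a tabular export with invented column names (this is illustrative, not the project’s pipeline): direct identifiers are dropped and the case key is replaced with a salted one-way hash before any analysis. Note that salted hashing is pseudonymisation rather than full anonymisation, which is why the salt must be held separately and access restricted.

```python
import hashlib

import pandas as pd

SALT = b"per-deployment-secret"  # held separately from the data

def pseudonymise(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the case key with a salted hash."""
    out = df.drop(columns=["name", "national_id"])  # minimise identifiers
    out["case_key"] = out["case_key"].map(
        lambda k: hashlib.sha256(SALT + str(k).encode()).hexdigest()[:16]
    )
    return out

# Invented example rows; real exports would come from offender management systems.
raw = pd.DataFrame({
    "case_key": [1001, 1002],
    "name": ["A. Example", "B. Example"],
    "national_id": ["X1", "X2"],
    "education_level": ["secondary", "tertiary"],
})
print(pseudonymise(raw))
```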
Building an Explainable System with Real-World Logic
An additional central innovation of the solution is the Reoffending Risk Assessment Ontology (RRAO), which organises information about people, decisions, sanctions¹, sentence execution and institutional pathways in a consistent way (Angelis et al., 2025). As the backbone of the solution’s data portal, this shared structure helps bring such records into the same frame, making analysis more coherent and easier to scrutinise.
The RRAO captures the complexity of the criminal justice lifecycle in semantic classes and logic, linking data on correctional and judicial institutions, legal outcomes (including court decisions), human subjects and historic profiles, and sentence-execution information. This semantic enrichment of data collection and treatment is a first step toward bias mitigation. Based on a Human-Centered Ontology Engineering (HCOME) methodology, the model blends human expertise with language-model assistance for scaffolding. By incorporating variables such as occupation and education level (socioeconomic factors linked to recidivism) into a transparent knowledge graph², the framework distinguishes between a propensity for crime and the systemic effects of poverty (FAIR-PReSONS, n.d.). Keeping this mapping at the forefront reduces the risk of a “black box” analysis built on unknown or mis-specified variables such as race or gender.
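To make the idea of a knowledge graph concrete, the sketch below builds a few triples in this style using the rdflib library; the namespace, class and property names are invented placeholders, not the actual RRAO vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace; the real RRAO vocabulary is defined by the project.
RRAO = Namespace("https://example.org/rrao#")

g = Graph()
g.bind("rrao", RRAO)

person = RRAO.person_001
sentence = RRAO.sentence_001

# A person, an imposed sanction, and socioeconomic context, linked explicitly
g.add((person, RDF.type, RRAO.Person))
g.add((person, RRAO.hasEducationLevel, Literal("secondary")))
g.add((person, RRAO.hasOccupation, Literal("construction worker")))
g.add((sentence, RDF.type, RRAO.CustodialSanction))
g.add((sentence, RRAO.imposedBy, RRAO.court_A))
g.add((person, RRAO.subjectOf, sentence))

print(g.serialize(format="turtle"))
```

Because every node and edge is named and typed, an analyst can trace exactly which variables feed an assessment, which is the transparency property the ontology is meant to secure.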
The system also uses an adversarial debiasing approach to detect whether protected characteristics can still be inferred from the prediction, or whether the system’s outcomes fall unevenly across groups; the reasoning is then adjusted to reduce that dependency. This semantic structure is combined with a hybrid modelling approach that joins the capacity of knowledge graphs to represent complex relationships in the data with the pattern-learning strengths of neural architectures. This “hybrid AI” approach addresses the characteristic limitations of symbolic AI (rigidity) and neural AI (lack of explainability).
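As a hedged sketch of the adversarial idea (a generic pattern in the adversarial-debiasing literature, not the project’s code; dimensions and data are invented): an adversary tries to recover the protected attribute from the model’s score, and the predictor is penalised to the extent that it succeeds.

```python
import torch
import torch.nn as nn

# Toy dimensions; real inputs would be the enriched case features.
predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
LAMBDA = 1.0  # strength of the fairness penalty (a tuning choice)

def training_step(x, y, protected):
    # 1) Train the adversary to infer the protected attribute from the score.
    score = predictor(x).detach()
    opt_a.zero_grad()
    adv_loss = bce(adversary(score), protected)
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor for accuracy MINUS the adversary's success,
    #    pushing the score to carry less information about the attribute.
    opt_p.zero_grad()
    score = predictor(x)
    loss = bce(score, y) - LAMBDA * bce(adversary(score), protected)
    loss.backward()
    opt_p.step()
    return loss.item()

# Invented batch: 64 cases, 16 features, binary outcome and attribute.
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64, 1)).float()
a = torch.randint(0, 2, (64, 1)).float()
print(training_step(x, y, a))
```

If the adversary cannot beat chance, the score carries little recoverable information about the protected attribute, which is precisely the dependency the debiasing step is meant to reduce.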
Result: A Judicial Co-pilot for Decision Support
In practical terms, the model performs intelligent analysis, variable weighting and visualisation, keeping the system squarely in the category of decision support rather than autonomous decision-making. It does not determine outcomes; rather, it handles data-intensive tasks, helping to identify blind spots and presenting evidence that can be weighed alongside legal considerations and the particulars of each case.
The result is a “co-pilot” user interface, intended primarily for judges to use alongside their ethical and legal reasoning. Its utility can also extend to other justice professionals involved in decisions where the risk of recidivism is pertinent, such as prosecutors, lawyers, court researchers and probation professionals.
FAIR-PReSONS represents a pioneering effort to modernise recidivism prediction. By moving away from proprietary “black boxes” toward an open-source, semantically enriched socio-technical framework, the initiative addresses algorithmic discrimination while adhering to the rule of law. The methodologies piloted here provide a blueprint for high-risk AI systems under the EU AI Act, ensuring that accuracy, fairness and transparency are achieved through rigorous design and ethical governance.
¹ Here “sanctions” refers broadly to custodial and non-custodial penalties imposed by courts.
² A knowledge graph is a structured network of entities and relationships, represented as nodes and edges, in which semantic meaning is explicit.
References
Angelis, S., Pinho, J., Sykiotou, A., Markov, D., Chatzistamatis, S., Spirou, S., Tsekouras, G., & Kotis, K. I. (2025, July). RRAO: An ontology for the representation of reoffending risk assessment knowledge. In 2025 16th International Conference on Information, Intelligence, Systems & Applications (IISA). https://doi.org/10.1109/IISA66859.2025.11311249
Arowosegbe, J. O. (2023). Data bias, intelligent systems and criminal justice outcomes. International Journal of Law and Information Technology, 31(1). https://academic.oup.com/ijlit/article-abstract/31/1/22/7224628
Burman, M., Armstrong, S., Batchelor, S., McNeill, F., & Nicholson, J. (2007). Research and practice in risk assessment and risk management of children and young people engaging in offending behaviour. Scottish Centre for Crime and Justice Research. https://www.sccjr.ac.uk/wp-content/uploads/2009/01/Research_and_Practice_in_Risk_Assessment_and_Risk_Management.pdf
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Council of Europe, Committee of Ministers. (2024, October 9). Recommendation CM/Rec(2024)5 of the Committee of Ministers to member states regarding the ethical and organisational aspects of the use of artificial intelligence and related digital technologies by prison and probation services. https://search.coe.int/cm?i=0900001680b1d0e4
Dancy, T., & Zalnieriute, M. (2025). AI and transparency in judicial decision-making. Oxford Journal of Legal Studies. Advance online publication. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5331491
Democritus University of Thrace. (2025, September 22). Respecting and strengthening human autonomy when using artificial intelligence in the administration of justice. https://fair-presons.aegean.gr/respecting-and-strengthening-human-autonomy-when-using-artificial-intelligence-in-the-administration-of-justice/
DeMichele, M., Comfort, M., Barrick, K., & Baumgartner, P. (2021). The intuitive-override model: Nudging judges toward pretrial risk assessment instruments. Federal Probation, 85(2), 22. https://www.uscourts.gov/sites/default/files/85_2_4_0.pdf
Entcheva, K., & Mazilescu, I. (2024). Artificial intelligence and digitalisation of judicial cooperation: The main provisions in recent EU legislation. eucrim: The European Criminal Law Associations’ Forum, 19(3), 202–205. https://doi.org/10.30709/eucrim-2024-018
Esthappan, S. (2024). Assessing the risks of risk assessments: Institutional tensions and data-driven judicial decision-making in US pretrial hearings. Social Problems, 60, 1–15. https://doi.org/10.1093/socpro/spae060
European Commission for the Efficiency of Justice (CEPEJ). (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
European Commission. (2025). Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
European Union Agency for Fundamental Rights. (2020). Getting the future right – Artificial intelligence and fundamental rights. https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights
FAIR-PReSONS. (n.d.). Future-proofing the legal profession: Fostering AI understanding, competence, and skills across the criminal justice sector. Retrieved January 14, 2026, from https://fair-presons.aegean.gr/future-proofing-the-legal-profession-fostering-ai-understanding-competence-and-skills-across-the-criminal-justice-sector/
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
National Institute of Standards and Technology. (2024, July 26). Artificial intelligence risk management framework: Generative artificial intelligence profile (NIST AI 600-1). https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
State v. Loomis, 881 N.W.2d 749 (Wis. 2016). https://law.justia.com/cases/wisconsin/supreme-court/2016/2015ap000157-cr.html
U.S. Department of Justice. (2024). Artificial intelligence and criminal justice: Final report. https://www.justice.gov/olp/media/1381796/dl
Yukhnenko, D., Farouki, L., & Fazel, S. (2023). Criminal recidivism rates globally: A 6-year systematic review update. Journal of Criminal Justice, 88, Article 102115. https://www.sciencedirect.com/science/article/pii/S0047235223000867?via%3Dihub
Article Provided by ITLM
External Link: https://justice-trends.press/socio-technical-ai-and-the-reform-of-recidivism-prediction/