
Towards a Fair and Explainable Risk Assessment System for Recidivism Prediction

Artificial intelligence is increasingly considered a valuable tool to support decision-making in criminal justice systems. Among its most challenging and socially impactful applications is the prediction of recidivism, the likelihood that an individual will re-offend. The FAIR-PReSONS project envisions an advanced AI-based risk assessment system designed to offer predictive insights that are not only technically robust but also fair, transparent, and sensitive to legal and societal contexts across Europe.

A key focus of the initiative is establishing a flexible and accurate model of recidivism that reflects different legal definitions and judicial frameworks. Countries differ in how they understand and record re-offending: some prioritize judicial recidivism, such as a new conviction, while others emphasize penitentiary recidivism, such as reentry into prison. These variations influence both the data collected and the modeling strategies needed, and the project plans to adapt its approach accordingly.
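Because each country records re-offending differently, the outcome label itself has to be harmonized before any cross-national modeling. A minimal sketch of what such a mapping might look like in Python, with entirely hypothetical country-to-definition assignments (the project's actual mappings are not specified here):

```python
from dataclasses import dataclass

# Hypothetical harmonization table: which recidivism definition each
# jurisdiction uses. "judicial" = a new conviction within the follow-up
# window; "penitentiary" = re-entry into prison within the window.
# These assignments are illustrative only, not the project's real mapping.
RECIDIVISM_DEFINITION = {
    "GR": "judicial",
    "BG": "judicial",
    "PT": "penitentiary",
}

@dataclass
class FollowUpRecord:
    country: str
    new_conviction: bool   # any new conviction during follow-up
    reimprisoned: bool     # any new prison entry during follow-up

def harmonized_label(record: FollowUpRecord) -> bool:
    """Map a country-specific follow-up record to a single recidivism label."""
    definition = RECIDIVISM_DEFINITION[record.country]
    if definition == "judicial":
        return record.new_conviction
    return record.reimprisoned

# The same underlying case can be labeled differently across jurisdictions:
# convicted but not re-imprisoned counts as recidivism under the judicial
# definition, but not under the penitentiary one.
case_gr = FollowUpRecord(country="GR", new_conviction=True, reimprisoned=False)
case_pt = FollowUpRecord(country="PT", new_conviction=True, reimprisoned=False)
print(harmonized_label(case_gr))  # True
print(harmonized_label(case_pt))  # False
```

The example makes the practical point concrete: identical follow-up histories can yield different outcome labels, which is why the definitional layer must sit in front of the modeling pipeline.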

At the heart of the system lies a comparative methodology. Based on datasets acquired from Greece, Bulgaria, and Portugal, the project will analyze how certain static variables—such as age, gender, offense type, and sentence history—correlate with future re-offending. While some existing risk assessment tools incorporate dynamic factors like behavioral patterns or social influences, the FAIR-PReSONS approach emphasizes comparability and cross-national validity. The use of well-defined, consistently available variables across jurisdictions supports the development of a unified yet adaptable model, while also setting the foundation for incorporating more complex indicators in future iterations.
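As a simple illustration of working with static variables, the sketch below computes re-offending rates per level of one such variable from a toy, invented dataset; the records, groupings, and variable names are placeholders rather than project data:

```python
from collections import defaultdict

# Toy, invented records: (age_group, offense_type, reoffended).
# None of these values come from the project's datasets.
records = [
    ("18-25", "property", True),
    ("18-25", "property", True),
    ("18-25", "violent",  False),
    ("26-40", "property", False),
    ("26-40", "violent",  True),
    ("41+",   "property", False),
    ("41+",   "violent",  False),
]

def reoffense_rate_by(variable_index: int) -> dict:
    """Re-offending rate for each level of one static variable."""
    counts = defaultdict(lambda: [0, 0])  # level -> [reoffended, total]
    for rec in records:
        level, outcome = rec[variable_index], rec[2]
        counts[level][0] += int(outcome)
        counts[level][1] += 1
    return {level: n / total for level, (n, total) in counts.items()}

rates = reoffense_rate_by(0)
print(rates)  # re-offending rate per age group
```

Running the same aggregation on harmonized datasets from each country is the kind of comparison that consistently available static variables make possible.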

The upcoming system architecture is being designed with fairness and explainability at its core. A dedicated inference engine will be developed to select models tailored to national contexts and return risk predictions alongside understandable, human-readable explanations. This commitment to transparency will enable end-users—such as legal professionals and decision-makers—to interpret the outputs and ensure accountability in practice. Bias mitigation mechanisms will also be embedded into the model training and selection phases, aligning the system with broader principles of justice and equality.
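The inference flow described above can be sketched minimally as follows. The per-country weights and feature names are invented for illustration, and the "explanation" is a deliberately simple feature-contribution summary, not the project's actual explainability method:

```python
# Hypothetical per-country models: each is a set of additive feature weights.
MODELS = {
    "GR": {"prior_convictions": 0.30, "age_under_25": 0.20},
    "PT": {"prior_convictions": 0.25, "age_under_25": 0.15},
}

def predict_with_explanation(country: str, features: dict):
    """Select the model for a national context and return a risk score
    together with a human-readable explanation of its largest driver."""
    weights = MODELS[country]
    contributions = {f: weights[f] * features[f] for f in weights}
    score = min(1.0, sum(contributions.values()))  # cap at 1.0
    top = max(contributions, key=contributions.get)
    explanation = (
        f"Risk score {score:.2f} from model '{country}'; "
        f"largest contribution: {top} ({contributions[top]:.2f})."
    )
    return score, explanation

score, why = predict_with_explanation(
    "GR", {"prior_convictions": 2, "age_under_25": 1}
)
print(why)
```

Returning the explanation alongside the score, rather than as an afterthought, is the design choice that lets end-users audit each prediction in practice.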

In the months ahead, the project will focus on refining its model specifications, conducting literature-informed variable weighting, and finalizing the technical framework to support secure and explainable predictions. This includes the development of a model registry, an API infrastructure, and a user interface that facilitates responsible interaction with the system.
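A model registry of the kind mentioned here might, in its simplest form, key versioned models by country and recidivism definition so an API layer can always resolve the latest approved model. The interface below is a hypothetical sketch, not the project's design:

```python
# Hypothetical registry: versioned models keyed by (country, definition).
class ModelRegistry:
    def __init__(self):
        self._models = {}  # (country, definition) -> [(version, model), ...]

    def register(self, country: str, definition: str, version: int, model):
        self._models.setdefault((country, definition), []).append(
            (version, model)
        )

    def latest(self, country: str, definition: str):
        """Return the highest-versioned model for this context."""
        versions = self._models[(country, definition)]
        return max(versions)[0:2][1]

registry = ModelRegistry()
registry.register("GR", "judicial", 1, "gr-judicial-v1")
registry.register("GR", "judicial", 2, "gr-judicial-v2")
print(registry.latest("GR", "judicial"))  # gr-judicial-v2
```

Versioning per context keeps retrained or bias-corrected models traceable, which supports the accountability goals described above.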

By grounding its work in comparative legal realities and committing to ethical AI principles, the FAIR-PReSONS project aims to offer an innovative tool that contributes meaningfully to evidence-based judicial decision-making across Europe.

Article provided by ITML


