Towards a Fair Recidivism Prediction System
The FAIR-PReSONS platform has been designed as a decision-support system that places user experience (UX), transparency, and legal defensibility at the core of AI-assisted recidivism assessment. Rather than replicating opaque “risk score” tools, the system focuses on supporting judicial and correctional professionals with clear, contextual, and explainable outputs that can be critically assessed and responsibly used. The interface reflects the project’s central objective: to make fairness-aware AI usable in real judicial workflows, without obscuring uncertainty or replacing human judgment.
From the very first interaction, the platform guides users through a structured and jurisdiction-aware workflow. The initial screen allows the selection of the relevant national context (Greece, Bulgaria, or Portugal), ensuring that assessments are grounded in the appropriate legal, correctional, and data patterns of each country. This design choice directly responds to the project’s aim to avoid one-size-fits-all risk models and to respect national justice systems. The form-based input flow is intentionally simple, using familiar legal and correctional categories (such as sentence length, penal situation, or crime category) to minimize cognitive load and reduce the risk of misinterpretation during data entry.
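The jurisdiction-aware input flow described above can be illustrated with a minimal sketch. The field names, category lists, and validation logic below are assumptions made for illustration, not the platform's actual schema; only the three supported countries are taken from the project description.

```python
# Illustrative sketch of a jurisdiction-aware case record. Field names and the
# crime-category list are assumed, not the platform's real schema.
from dataclasses import dataclass

SUPPORTED_JURISDICTIONS = {"Greece", "Bulgaria", "Portugal"}
CRIME_CATEGORIES = {"property", "violent", "drug-related", "other"}  # assumed

@dataclass
class CaseInput:
    jurisdiction: str
    sentence_length_years: float
    crime_category: str

    def __post_init__(self):
        # Reject inputs outside the supported national contexts up front,
        # before any assessment is attempted.
        if self.jurisdiction not in SUPPORTED_JURISDICTIONS:
            raise ValueError(f"Unsupported jurisdiction: {self.jurisdiction}")
        if self.crime_category not in CRIME_CATEGORIES:
            raise ValueError(f"Unknown crime category: {self.crime_category}")

case = CaseInput("Portugal", 4.5, "property")
```

Constraining inputs to familiar legal categories at the data-entry layer is one way to reduce misinterpretation before any model is invoked.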
Once inputs are provided, the results screen translates complex model outputs into layered, readable insights. Risk is presented both as a qualitative classification (e.g., low, medium, high) and as a probabilistic estimate over different time horizons. This dual representation allows users to quickly grasp overall risk while still engaging with quantitative evidence when needed. Importantly, the UX avoids deterministic language: results are framed as likelihoods and scenarios, reinforcing the tool's role as advisory rather than prescriptive.
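The layered presentation of risk can be sketched as follows. The horizon keys, threshold values, and band labels are illustrative assumptions; the platform's actual cut-offs are not stated in this article.

```python
# Sketch: pairing probabilistic estimates over time horizons with qualitative
# bands. Thresholds (0.33 / 0.66) are assumed for illustration only.

def risk_band(probability: float) -> str:
    """Classify a recidivism probability into an assumed qualitative band."""
    if probability < 0.33:
        return "low"
    if probability < 0.66:
        return "medium"
    return "high"

def layered_view(estimates: dict[str, float]) -> dict[str, dict]:
    """Pair each horizon's probability with its qualitative classification."""
    return {
        horizon: {"probability": p, "band": risk_band(p)}
        for horizon, p in estimates.items()
    }

# Example: estimates for 1-, 3-, and 5-year horizons (illustrative values).
view = layered_view({"1y": 0.18, "3y": 0.41, "5y": 0.58})
```

Keeping both representations in one structure lets the interface show the band for quick orientation while leaving the underlying probability available for scrutiny.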
A defining feature of the FAIR-PReSONS interface is its built-in explainability. Visual explanation components, such as contribution diagrams, show how specific factors increase or decrease estimated risk. These explanations are not generic add-ons, but a direct reflection of the fairness-aware and explainable modeling approach developed in WP4, where interpretability and auditability were treated as core system requirements rather than optional features. By surfacing feature contributions in a structured way, the platform enables judges, probation officers, and legal researchers to scrutinize outcomes, challenge assumptions, and document reasoning.
The UX also embodies the project’s commitment to fairness and legal compliance. Sensitive attributes are handled with care, and the interface reflects the underlying bias-mitigation strategies by clearly communicating what the system considers relevant, and why. Instead of hiding complexity, the design makes trade-offs visible, aligning with European legal expectations around transparency, non-discrimination, and the right to explanation. This is particularly important in high-risk AI contexts, where trust depends not only on accuracy but on accountability and contestability.
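One common way such trade-offs are made visible is by comparing classification rates across groups defined by a sensitive attribute. The sketch below uses a demographic-parity-style gap; the data and the tolerance value are illustrative assumptions, not figures from the project.

```python
# Sketch: surfacing a fairness metric rather than hiding it. Group data and
# the 0.1 tolerance are invented for illustration.

def high_risk_rate(predictions: list[str]) -> float:
    """Share of cases classified 'high' within a group."""
    return predictions.count("high") / len(predictions)

def parity_gap(group_a: list[str], group_b: list[str]) -> float:
    """Demographic-parity difference: gap in high-risk rates between groups."""
    return abs(high_risk_rate(group_a) - high_risk_rate(group_b))

gap = parity_gap(["high", "low", "low", "medium"],
                 ["high", "high", "low", "medium"])
# A dashboard could flag the disparity when it exceeds an agreed tolerance.
flagged = gap > 0.1
```

Exposing such a metric in the interface, rather than only in an internal audit, is one way transparency obligations can be made operational.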
In terms of use cases, the platform is intended to support pre-release assessment, probation planning, and policy analysis, rather than automated decision-making. Its modular design allows it to be used in exploratory “what-if” scenarios, training contexts, and comparative assessments across cohorts, supporting evidence-based reflection rather than rigid outcomes. This aligns with the broader FAIR-PReSONS objective of complementing professional expertise with responsible AI, not substituting it.
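An exploratory "what-if" comparison of the kind mentioned above can be sketched as re-scoring the same case while varying one input and holding the rest fixed. The `score_case` function below is a toy stand-in for the platform's model, invented purely so the comparison is runnable.

```python
# Sketch of a "what-if" scenario: counterfactual variants of one case along a
# single attribute. `score_case` is a toy placeholder, not the real model.

def score_case(case: dict[str, float]) -> float:
    """Toy risk score used only to make the comparison executable."""
    return min(1.0, 0.05 * case["prior_convictions"]
                    + 0.02 * case["sentence_length_years"])

def what_if(case: dict[str, float], attribute: str, values: list[float]) -> dict:
    """Score counterfactual variants of one case along one attribute."""
    return {v: score_case({**case, attribute: v}) for v in values}

base = {"prior_convictions": 3.0, "sentence_length_years": 4.0}
scenarios = what_if(base, "sentence_length_years", [2.0, 4.0, 8.0])
```

Because only one attribute changes between scenarios, differences in the resulting scores can be attributed to that attribute alone, which supports training and comparative discussion rather than fixed conclusions.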
Overall, the FAIR-PReSONS system demonstrates how careful UX design can operationalize ethical and legal principles in AI-driven justice tools. By combining jurisdiction-sensitive inputs, layered risk communication, and built-in explainability, the platform translates advanced fairness-aware machine learning into a usable, transparent, and trustworthy decision-support environment. More information about the project and its ongoing development is available on our website.
Article provided by ITLM