RRAO: An Ontology for the Representation of Reoffending Risk Assessment Knowledge
Reoffending risk assessment plays a pivotal role in judicial procedures, influencing decisions on parole, sentencing, and rehabilitation. Accurate risk assessment supports both public safety and the reintegration of offenders into society. AI prediction systems are increasingly used to assess the risk of reoffending, offering data-driven insights to support judicial decision-making. However, some of these tools have drawn growing criticism for potential biases and a lack of transparency, raising concerns about their fairness and social impact [1], [2]. Building AI predictive systems for criminal justice that deliver accurate predictions while also upholding fairness and accountability requires robust, reliable representations of the underlying knowledge, aligned with the principles of fairness, transparency, and equity. Existing predictive models and datasets often embed systemic biases, leading to unfair outcomes [3], [4]. These limitations highlight the need for approaches that represent knowledge about reoffending in a comprehensive and unbiased manner. Ontologies play a key role here: their development and incorporation enhance AI explainability, enabling stakeholders to better understand and trust the decisions made by predictive systems, and they support the semantic annotation and integration of heterogeneous datasets related to reoffending and recidivism.
Our recent research paper presents RRAO (the Reoffending Risk Assessment Ontology), engineered to provide a comprehensive, formal, and reusable representation of structured knowledge about reoffending and recidivism. To the best of our knowledge, no existing ontology focuses explicitly on this domain. By addressing this gap, RRAO aims to serve as a valuable resource for researchers and practitioners, offering a reusable, extensible, and fair knowledge representation of the domain. The ontology was engineered in the context of the EU-funded FAIRPReSONS e-Justice project, following the XHCOME methodology. While RRAO itself is not a prediction tool, it is designed to support the project’s primary goal: the creation of a bias-free AI system for the fair prediction of reoffending risk, emphasizing a gender-equality perspective and conforming to EU legislation on non-discriminatory AI.
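As a rough illustration of what a formal ontological representation of this domain looks like, the sketch below models a few OWL-style axioms as plain RDF triples in Python. The namespace and the class and property names (`Offender`, `RiskAssessment`, `hasRiskAssessment`) are hypothetical examples chosen for this sketch; this article does not list RRAO's actual vocabulary.

```python
# Illustrative sketch only: the class and property names below are
# hypothetical; the article does not list RRAO's actual vocabulary.
# An OWL ontology is, at bottom, a set of subject-predicate-object triples.

RRAO = "http://example.org/rrao#"  # hypothetical namespace for the sketch
OWL = "http://www.w3.org/2002/07/owl#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

# A handful of TBox-style axioms: two classes and one property linking them.
triples = [
    (RRAO + "Offender",          RDF_TYPE,        OWL + "Class"),
    (RRAO + "RiskAssessment",    RDF_TYPE,        OWL + "Class"),
    (RRAO + "hasRiskAssessment", RDF_TYPE,        OWL + "ObjectProperty"),
    (RRAO + "hasRiskAssessment", RDFS + "domain", RRAO + "Offender"),
    (RRAO + "hasRiskAssessment", RDFS + "range",  RRAO + "RiskAssessment"),
]

def owl_classes(graph):
    """Return all subjects declared as owl:Class in a triple set."""
    return {s for s, p, o in graph if p == RDF_TYPE and o == OWL + "Class"}

print(sorted(owl_classes(triples)))
```

In practice, such axioms would be authored in an OWL serialization such as Turtle and managed with tools like Protégé; the plain-triple view above simply makes the underlying structure explicit.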
Article provided by UAEGEAN, i-Lab
References
[1] G. van Dijck, “Predicting recidivism risk meets AI Act,” European Journal on Criminal Policy and Research, pp. 407–423, Sept. 2022.
[2] C. McKay, “Predicting risk in criminal procedure: actuarial tools, algorithms, AI and judicial decision-making,” Current Issues in Criminal Justice, vol. 32, no. 1, 2020. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/10345329.2019.1658694
[3] C. Rudin, C. Wang, and B. Coker, “The age of secrecy and unfairness in recidivism prediction,” Harvard Data Science Review, vol. 2, no. 1, 2020.
[4] “The dangers of risk prediction in the criminal justice system,” MIT Case Studies in Social and Ethical Responsibilities of Computing, Feb. 2021.