The United Nations Interregional Crime and Justice Research Institute assists organizations in their efforts to formulate and implement improved policies in the fields of crime prevention and justice administration. UNICRI has recently published the “Artificial Intelligence Collection.” This report follows two related reports from the UN agency, “Artificial Intelligence and Robotics for Law Enforcement” (2019) and “Artificial Intelligence: An Overview of State Initiatives” (2019).
The UNICRI report on AI states that the “potential of the Artificial Intelligence for law enforcement, legal professionals, the court system and even the penal system to augment human capabilities is enormous. However, we need to truly test the limits of our creativity and innovation to overcome the challenges that come with these technologies, as well as to develop entirely new approaches, standards and metrics that will be necessitated by them.”
The Collection includes articles from innovative minds in academia to stimulate discussion and promote solutions to the challenges “we face in this emerging domain on how to shape the design of the policies and legal frameworks of the future and provide guidance to those who will build the AI-based tools and techniques.”
One paper explains that “AI’s dual nature makes it both a threat and a means to protect human rights and information technology systems. Amongst other issues, issues pertaining to the opacity and inclusion of potential biases in algorithms processes, as well as the inherent security vulnerabilities of such applications, unveil a tension between such technological pitfalls and the aptness of current regulatory frameworks.”
AI techniques in the criminal justice field raise far-reaching questions about fairness and bias. Properly applied, these techniques could help those in the criminal justice field uncover the sources of bias and help ensure fairer and more just outcomes. But a lack of attention during deployment and throughout the life cycle of AI systems could lead to embedded bias, greater opacity, and mission creep that allows systems without clear purpose specification to move far beyond their initial boundaries. These issues and others are explored in this important publication from UNICRI.
Marc Rotenberg, Director
Center for AI and Digital Policy at Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.