Technologies that make use of forms of artificial intelligence (AI) are rapidly changing the face of law enforcement around the world. These technologies range from robotics and enhanced forms of data collection to so-called predictive policing, which uses data-analysis tools to estimate the likelihood that individuals will offend and to anticipate where crime levels will be high. In the policing of organized crime, tools that assist in mapping criminal networks and predicting new patterns of behaviour promise to be transformative.

Yet concerns about the ethical issues and risks associated with the use of such technologies are being widely voiced. Technology such as facial recognition can exponentially increase the capacity of state surveillance, helping governments target specific groups, intensify racial and ethnic profiling, and lower the threshold for police intervention and the use of force; this has become a very real issue. The exploitation of these technologies by states with little concern for human rights is a new and fast-emerging threat that the international community must confront.

The UN is incorporating the topic of AI and other emerging technologies into discussions across the board, including talks on policing. However, the fast-changing landscape of these technologies poses a challenge for a bureaucratic system that is not famed for its swiftness of response or adaptability. As a result, policymakers, particularly those mediating between states, are often placed in a reactive position.

In an attempt to engage states in debate on the norms of using AI in policing work, the UN Interregional Crime and Justice Research Institute (UNICRI) Centre for Artificial Intelligence and Robotics held a meeting with INTERPOL in July 2018 to explore the opportunities and challenges of using AI and robotics in law enforcement, and showcase technologies ranging from autonomous patrol vehicles to behaviour-detection tools.

At the meeting, the Global Initiative spoke to Irakli Beridze, head of the Centre for Artificial Intelligence and Robotics and a leading UNICRI representative, about raising awareness on AI and the opportunities and risks posed by its use in law enforcement.

GI: The question of AI and robotics is a relatively new policy space, particularly for the UN. Within the UN system, have you found it a struggle to bring together stakeholders willing to take part in a dialogue in this area?

IB: If anything, quite the opposite. As an emerging field, there seems to be a shared interest in widening the discussion and engaging stakeholders from various sectors. Most countries recognize that AI tools are at an early stage of development, and they want to learn from one another – and from the private sector. I’ve found that the private sector has also been keen to join the dialogue and learn how the UN is addressing these issues. It would be difficult to have purposeful discussions without practitioners and private-sector participants who have put AI and robotics on their agendas. In my experience, private-sector and government representatives have been able to find a shared way to discuss the technology.

GI: Tell us about UNICRI’s activities in the area of AI.

IB: The discussion around AI and criminal activity is still in its early stages, as the technologies are emerging as we speak. Since we opened the centre in 2017, UNICRI’s focus has been on criminal justice, crime prevention and security in general. Previously, the UN had not had much involvement in AI discussions, so we have begun raising general awareness, beginning with an event on AI at the UN General Assembly in 2015, and again the following year. We also organized an event with the Organisation for the Prohibition of Chemical Weapons to address how emerging AI technology will affect the implementation of international non-proliferation treaties. We’ve also held training sessions for the media and the security industries.

GI: You recently co-hosted a meeting with INTERPOL’s Innovation Centre on the opportunities and risks posed by AI and robotics for law enforcement. What was the objective?

IB: This was the first global meeting to discuss how AI and robotics can be used by law enforcement and to address the potential challenges in this area. Since these discussions are in the early stages, the conversations ranged from the technical to the ethical.

Countries were eager to learn from the private sector about its new technologies. Attendees learnt about applications such as virtual autopsy tools and visual-recognition systems that can be used to trace stolen vehicles. We also discussed how AI applications can be used by criminals and how to bring this information to law enforcement and criminal-justice practitioners. For instance, there are potential risks for the prosecution of both organized and non-organized criminal cases. One example is impersonation, where video footage can be faked and used for fraudulent purposes. In the future, this could bring into question the admissibility of video footage as evidence in criminal cases.

Our attendees also wanted to know how the private sector was handling the ethical and legal considerations in its work in this field. In addition to examining the potential ways in which criminals can harness AI technology for their own ends, and how law enforcement can respond to this form of crime, one of the main aims of the meeting was to look at how we might frame these kinds of challenges within ethical boundaries.

GI: And what are the main contours of ethics within the AI debate?

IB: Everything we do must fit within – or, at the very least, not breach – the framework of international human-rights law and international humanitarian law. There are a number of other issues too, such as security and the right to privacy, which individual countries may understand differently. At the INTERPOL meeting, many of the law-enforcement officials were interested to learn about the debates around ethics and AI, and how the issues arising from these debates might be applied in practice. We will carry this discussion forward and host a meeting on ethics and AI for law enforcement in January 2019 in Singapore.

GI: What should the role of the UN be in terms of addressing AI-related issues?

IB: As time progresses, AI is likely to play a major part in most spheres of our lives. The role of the UN should be threefold. First, it should raise awareness of AI. Discussions of AI are dispersed and not linked together; I’d like to see more AI debates brought to the developing world and countries in transition, including more discussion of the social and economic impacts of AI. Second, we should maximize the benefits of AI-driven technologies so that they contribute to the Sustainable Development Goals and other UN-led projects. For instance, the 2018 AI for Good Global Summit was attended by 32 UN agencies to discuss how applications of AI can contribute to the SDGs. Third, the UN must identify the risks involved, whether minimal or substantial, and start comprehensive discussions of those risks.