Summer Walker

AI technology as a tool for law enforcement needs to be controlled through a regulated ethical framework governing its deployment, so that the use of such technology in policing does not infringe human rights. The UN should be the driving force in guiding the ethical debate.

Artificial intelligence, through technologies ranging from enhanced data collection to robotics, is quickly changing the face of modern law enforcement. In some countries, new law-enforcement techniques are being rolled out by agencies faster than many legislative bodies can keep track of, let alone regulate. In others, the state is driving comprehensive approaches to AI in policing. Policymakers, and society in general, are often in a reactive position, facing a steep learning curve when it comes to emerging technology such as artificial intelligence. Yet, in many ways, these new technologies have the ability to dramatically reshape the relationship between state security forces and society, particularly in the domain of social and political control.

The UN plays two key roles in this area. First, largely through its agencies, it transfers law-enforcement best practices through technical cooperation, capacity building and research. Second, it provides a forum in which states engage in collective debate to establish norms on global phenomena.

The UN system has already begun to examine the implications of AI in law enforcement. The UN Interregional Crime and Justice Research Institute’s (UNICRI) Centre for Artificial Intelligence and Robotics and INTERPOL held a meeting in July 2018 exploring the opportunities and challenges of using AI and robotics in law enforcement, with technologies ranging from autonomous patrol vehicles to behaviour-detection tools. In a guide on safer cities, the UN Office on Drugs and Crime (UNODC) has encouraged states to consider the ‘responsible use of artificial intelligence and data analytics in support of pro-active and problem-oriented policing’. At the Fourteenth Crime Congress, member states will discuss new technologies as both a means of committing crime and a tool against it, at a workshop on current crime trends, recent developments and emerging solutions. As the UN system engages on this, the general enthusiasm for these new technologies should be tempered by a sober debate about potential societal impacts that could be difficult to reverse.

AI in policing: State surveillance and transparency

AI in the context of law enforcement poses several risks: the technology could further entrench existing biases, help governments target specific population groups, expand state surveillance and lower the threshold for the use of force. For instance, predictive policing uses machine learning both to predict the potential criminal behaviour of individuals and to identify high-crime areas. Although this kind of technology could help identify patterns of organized crime, critics of these software systems say that they have built-in bias, particularly when historical data is used to identify ‘bad’ neighbourhoods; that there is no transparency around how the algorithms are built and how they operate; and that there is no accountability when decisions are made using predictive models.
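
The historical-data critique can be made concrete with a minimal, purely synthetic sketch (in Python; the data, the area counts and the scoring rule are all hypothetical, not any vendor’s actual system). Because recorded incidents capture only the crime that police are present to observe, a model trained on them tends to reproduce historical patrol patterns rather than underlying crime:

```python
import numpy as np

# Purely synthetic illustration -- not any real predictive-policing system.
rng = np.random.default_rng(42)
n_areas = 1000

true_crime = rng.uniform(1.0, 5.0, n_areas)  # latent offending rate (never observed directly)
patrol = rng.uniform(0.1, 1.0, n_areas)      # historical patrol intensity per area

# Recorded incidents reflect crime *as seen by police*, so they scale with patrol effort.
recorded = rng.poisson(true_crime * patrol)

# A naive 'predictive' score: prioritize areas by last year's recorded incidents.
score = recorded

print(f"corr(score, true crime rate): {np.corrcoef(score, true_crime)[0, 1]:.2f}")
print(f"corr(score, patrol history):  {np.corrcoef(score, patrol)[0, 1]:.2f}")
# The score tracks where police already patrolled about as strongly as actual
# crime, so dispatching patrols by score feeds the same areas back into the data.
```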

For example, Jeffrey Brantingham, a pioneer of predictive policing, is trying to classify gang-related crimes in Los Angeles using machine learning, the police department’s criminal data and a gang territory map. However, both the data sources used and the subjective nature of the task have been criticized. The gang map dates back to 2009, whereas the crime data is from 2014–2015. In addition, the tool transforms the narrative description of each crime into a mathematical element of the analysis. As one observer noted, ‘gang-related’ is a highly subjective term, so the data may indicate only whether officers of the Los Angeles Police Department would label a crime as gang-related, and it therefore reflects their inherent bias. Meanwhile, in other cities, such as New Orleans, predictive-policing programmes have been stopped because of their secretive nature and lack of public oversight. Additionally, both the Los Angeles and New Orleans programmes derive from Pentagon-linked programmes or companies, raising concerns about the larger constellation of security objectives behind these technologies.
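
A toy sketch of this kind of pipeline (again hypothetical: invented narratives and labels, not LAPD data or Brantingham’s actual model) makes the observer’s point concrete. When the only training signal is the officer’s own ‘gang-related’ tag, a text classifier can learn nothing more than how officers label crimes:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented narratives and labels, for illustration only.
narratives = [
    "suspect fled in vehicle after dispute outside bar",
    "known associates gathered at corner, shots fired",
    "shoplifting reported at grocery store on 5th street",
    "graffiti and intimidation reported near housing block",
]
officer_labels = [0, 1, 0, 1]  # officers' subjective 'gang-related' judgments

# Turn each narrative into word counts and fit a classifier on those judgments.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(narratives, officer_labels)

# New reports are scored by textual similarity to *previously labelled* reports,
# so any bias in past labelling is reproduced as 'prediction'.
print(model.predict_proba(["group gathered near corner store"]))
```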

In China, the use of AI technologies in law enforcement has been widely reported, as has the existence of re-education camps in the Xinjiang region, where predominantly minority Uighur Muslims are interned. Less reported on is the use of the Integrated Joint Operations Platform, a big-data system that collates citizens’ everyday behaviours, to track people in Xinjiang. This predictive system assesses whether an individual is deemed a threat to society using data from CCTV footage, wi-fi sniffers (which collect the unique identifying addresses of networked devices), security checkpoints, visitors’ management centres, personal information such as banking and family-planning records, passport and travel information, and unannounced home visits. The system then generates lists of people to detain and possibly send to the re-education camps. One interviewee told Human Rights Watch that he knew that personal details ‘for every Uighur in that district’ were stored in computers, including names, gender, ID numbers, occupation, familial relations and whether a person was trusted or not.

From this example alone, it is evident that the risks posed to vulnerable groups are manifold as AI technologies enter the security sector, often with little or no transparency about how they are deployed.

Discussions on the ethical uses of AI are growing among both states and the private sector. Basic guidelines on AI development have been drafted by the Japanese Society for Artificial Intelligence. The April 2018 EU Declaration on Cooperation on Artificial Intelligence includes the need to develop an adequate legal and ethical framework. A joint statement on AI signed by Canada and France at the June 2018 G7 meeting calls for the creation of an international study group to serve as a global point of reference for understanding AI and considering its impacts on citizens. As for the private sector, Microsoft’s president recently released a statement calling for government regulation and corporate responsibility around facial-recognition technology in light of the risks posed to the right to privacy and freedom of expression. And the Institute of Electrical and Electronics Engineers’ draft standards for what it calls ‘ethically aligned design’ offer recommendations for law enforcement on the use of automated weapons. These initiatives may be the first steps towards a more diversified global conversation about AI.

The UN, AI and the future

As the UN increases its role as a forum for states to share technical knowledge about the uses of AI in law enforcement, it should also see itself as an appropriate organization to shape the ethical framework within which such cooperation takes place. The UN brings together national and regional perspectives on the issues, and it can connect stakeholders – including civil society groups, research institutes and the private sector – with governments in the discussion. UNICRI plans to follow up its July 2018 meeting with an event that will highlight ethics. This is a welcome effort, and an opportunity to develop a dialogue within the UN’s crime agenda on the ethical concerns around automating and augmenting law-enforcement capabilities. Engaging the private sector, civil society organizations and human rights groups will be key to steering discussions on these technologies’ potential impacts on citizens’ rights. As the UN moves further into this space, safeguarding citizens’ rights will require that multilateral cooperation among states is framed through an ethical lens.