In December 2023, videos of Lee Hsien Loong, prime minister of Singapore, and Lawrence Wong, the deputy prime minister, were circulated online to promote cryptocurrency and investment products. The videos turned out to be deepfakes – AI-generated footage designed to impersonate the two leaders.

In early 2022, Thai criminals were found to be using deepfakes to impersonate police officers in extortion video calls. And in February 2024, the Hong Kong office of a multinational company lost US$25.6 million to a deepfake video conference call impersonating its chief financial officer. These are just a few of the many cases in the Asia-Pacific region where AI-generated images and audio have been used for malicious purposes, including fake kidnappings, sexual abuse material and fraudulent schemes.   

The term ‘deepfake’, a combination of ‘deep learning’ and ‘fake’, refers to hyper-realistic video or audio created to resemble real people. Deepfakes use neural networks trained on extensive data sets to replicate a person’s facial expressions, behaviour, voice and speech patterns. Because they manipulate real footage and pair it with authentic-sounding audio, deepfakes are usually difficult to detect. Much of the software used to generate them is available on the open web.
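To make the underlying technique concrete: the classic face-swap design pairs one shared encoder with a separate decoder per identity. The following is a minimal illustrative sketch in PyTorch of that idea only – the layer sizes, the 64×64 face crops and the class names are assumptions chosen for brevity, not the implementation of any specific tool.

```python
# A minimal sketch of the shared-encoder, two-decoder autoencoder design
# behind early face-swap deepfakes. Shapes and layer sizes are illustrative
# assumptions, not a production architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 RGB face from the shared latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder learns facial structure common to both identities;
# a separate decoder per identity learns that person's appearance.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# After training each decoder to reconstruct its own identity, the swap is:
# encode a face of person A, then decode it with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))   # renders A's pose with B's face
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

Because the encoder must represent both identities in one latent space, it learns pose and expression generically, which is what lets a decoder trained on one person render the other person's movements.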

Deepfakes came to the fore in 2017, when manipulated videos of public figures, including Barack Obama and Mark Zuckerberg, began appearing online. While some of these were created for entertainment, other AI-generated fabrications, such as the fake image of an explosion near the Pentagon in 2023, have triggered chaos and fear among the public, and even caused the stock market to dip briefly. As AI tools have become more publicly accessible, the technology has been exploited for criminal purposes, including identity theft, fraud, violations of data privacy and intellectual property rights, and threats to national security.

The risk in South East Asia  

In South East Asia, there have been cases of tech-savvy criminals exploiting AI technology to impersonate public figures, spread disinformation, and defraud and extort people. By exploiting public trust, these criminals pressure victims into urgently complying with their demands, defrauding them of money or damaging their reputations.

The Asia-Pacific region saw a 1 530% increase in deepfake cases between 2022 and 2023, the second highest in the world after North America. Vietnam recorded the region’s highest rate of deepfake fraud (25.3%), followed by Japan (23.4%), while the Philippines saw the steepest growth in deepfake cases (4 500%). AI and deepfakes are also being used in cyber-scam operations across the region, where thousands of people are reported to have been lured into working for organized criminal networks and forced to defraud others through online scams. As these criminal networks grow, deepfake technology has become an undoubtedly lucrative tool.

Biometric spoofing is another threat: a ‘deepfaked’ face can fool device scanners, allowing criminal actors to gain access to victims’ personal and financial information. AI-generated child sexual abuse material is yet another. There have been reported cases of sex offenders using deepfake technology to generate content involving minors. In September 2023, a South Korean court sentenced a man for using AI to generate realistic sexually abusive images of children, the first case of its kind in the country.

Response  

Governments are grappling with how to mitigate the harms of AI. Deepfake technology itself is not illegal – and deepfakes are by no means all malicious – but depending on the content generated, some violate laws such as those on data protection or specific offences covering non-consensual content. Several governments have embarked on more concrete regulation, with the EU leading the way in standardizing how companies can use AI so as to safeguard health, safety, human rights, democracy and the rule of law. The US is preparing to draft legislation on AI, bringing together the CEOs of big-tech firms for discussions. At the UN, member states are negotiating a draft convention on cybercrime, which is highly politicized and controversial due to its human rights implications; however, deepfake and generative AI technology is not specifically mentioned in the latest draft of the convention.

Governments in the Asia-Pacific region are also preparing to regulate these technologies, but attempts remain disjointed, as countries have different priorities. The Chinese government, for example, has banned the creation of deepfakes without user consent and requires clear identification of AI-generated content. It also ordered the face-swapping apps Zao and Avatarify to be removed from app stores in 2019 and 2021, respectively, for violating privacy and portrait rights. South Korea has criminalized the distribution of deepfakes that may ‘cause harm to public interest’. Australia aims to implement a number of guidelines, such as urging tech firms to label and watermark AI-generated content. And Thailand, the Philippines, Malaysia and Singapore have personal data protection laws that can be applied against such exploitation. However, governments should aim for a more comprehensive and cohesive approach, including a focus on prevention and awareness.

Meanwhile, sceptics should note that some global technology companies are doing their bit to prevent and combat the use of deepfakes. For example, Intel launched the first real-time deepfake detector, which analyzes video to determine whether the subject is real or AI-generated. Other companies, such as Microsoft, Optic, Sentinel, Reality Defender and Attestiv, are working on similar media detection tools and platforms. However, private sector self-regulation has proven ineffective in many areas, particularly among social media platforms, where this type of content circulates under varying and inconsistent rules and standards.
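For readers curious about what such detection tools do in principle, the sketch below shows the common baseline approach of classifying sampled video frames with a binary classifier. It is illustrative only: commercial products like Intel’s rely on far richer signals (such as physiological cues), and the model choice, fine-tuning requirement and threshold here are assumptions.

```python
# Illustrative only: a naive frame-level deepfake detector.
# The backbone, preprocessing and scoring rule are assumptions for the
# sketch; the classifier head would need fine-tuning on labelled
# real/fake face data before its scores mean anything.
import torch
import torch.nn as nn
from torchvision import models, transforms

# A standard pretrained backbone with a 2-class head (real vs. fake).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frames(frames: torch.Tensor) -> float:
    """Average probability that sampled frames are fake.
    `frames` is an (N, 3, H, W) float tensor with values in [0, 1]."""
    with torch.no_grad():
        logits = model(preprocess(frames))
        fake_prob = logits.softmax(dim=1)[:, 1]
    return fake_prob.mean().item()

# A video is flagged if, on average, its sampled frames look synthetic.
frames = torch.rand(8, 3, 256, 256)  # stand-in for decoded video frames
print("fake score:", round(score_frames(frames), 3))
```

Frame-by-frame classification is the simplest design; it also shows why detection lags generation, since a detector trained on one generator’s artefacts can miss the next generator’s output entirely.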

Although deepfake technology has stirred up controversy and justifiable concern, it has positive applications, which is partly why it cannot be banned outright. In the healthcare sector, patients’ anonymity in medical research could be protected by creating virtual patient populations, removing the need to disclose real records. Deepfakes could also generate realistic synthetic data for exploring new methods of diagnosis and disease monitoring. Other industries that could benefit include entertainment, educational media and digital communications, gaming, materials science and business.

The challenge ahead lies in regulating the technology used to produce deepfakes while balancing commercial and technological interests, and the right to privacy with freedom of expression. Collaboration between the public and private sectors will be crucial to formulating effective responses, as the cybercriminal community stays one step ahead technologically. To begin with, increased public awareness of these forms of fraud and social engineering is fundamental to protecting society from criminal exploitation and technological harm in the long term.