AI in Organized Crime: The Rise of Deepfakes

A new report published by Europol highlights the increasing threat coming from the use of AI in organized crime. Deepfake extortions, politically motivated cyberattacks and targeted data theft: cybercrime is evolving at a faster pace than Artificial Intelligence regulation. This article explores how organized crime is implementing Generative Artificial Intelligence to expand and professionalize illicit activities worldwide, and the massive human and economic costs that come with it.


Imagine receiving a desperate call from a relative, saying that they’ve been kidnapped and begging you to pay a ransom to save their life. Minutes later, you find out that your relative is safe at home. The voice on the phone was not real, but deepfake extortion is. And along with it, a new wave of digital threats is surging with the use of AI in organized crime.

According to a recently published study, Artificial Intelligence is one of the major accelerators of crime, with a wide spectrum of applications, ranging from impersonation to targeted cyberattacks.

When I started studying transnational organized crime in 2022, I quickly realized that much of the vocabulary used to describe its dynamics sounded very familiar. In fact, it mirrored the language from my earlier background in management engineering: supply, demand, profit, and diversification.

Criminals, much like corporations, continuously evolve to seize new opportunities and maximize revenues.

Against this background, it is no wonder that with the advent of AI, criminal organizations around the world quickly adapted and incorporated the technology into their operations. The picture that emerges from the 2025 European Union Serious and Organised Crime Threat Assessment (EU-SOCTA)1, a study issued every four years by Europol, is worrisome in several respects.

AI is the perfect companion of cybercrime

Europol’s analysis reveals how the accessibility of Artificial Intelligence, in particular Generative AI, has lowered the entry barriers for digital crime: today, these tools can be used for harmful purposes with little technical expertise. In addition, an emerging criminal business model is Cybercrime-as-a-service (CaaS), in which more sophisticated applications are sold to criminal organizations. This way, organized crime groups that want to profit from cybercrime can easily purchase ready-to-use malware, instead of investing massive resources to build their own infrastructure.

One threat in particular is highlighted as the most pressing: social engineering attacks. Generative AI in organized crime can be used to manufacture deepfake videos, clone voices, and replicate writing styles, allowing online scams and data theft schemes to be tailored to each individual target.

A study published in 20202, when Large Language Models (LLMs) were not yet widely available to the public, had already identified audio/video impersonation as the most concerning application of AI in digital crime. It was ranked as the most harmful and scalable threat, potentially impacting a larger portion of society, and the most difficult to counter.

The events of recent years have proven the study right: AI-generated deepfakes have become a worldwide reality.

Digital crime knows no borders

Criminal groups in Latin America wasted no time keeping up with the technology. Peru has witnessed a surge in deepfake extortions3: AI is used to clone targets’ voices from recordings extracted from WhatsApp and social media, and the cloned audio is then sent to relatives with a ransom demand, simulating a kidnapping that never happened.

Similar cases have occurred in Mexico4, where criminal organizations operating around the US border are resorting to deepfakes to extort the families of missing migrants. Initially posing as organizations that search for disappeared people, they obtain photos from families seeking help. These photos are then used to produce AI-generated images and videos that simulate kidnappings, defrauding the very families who were looking for their relatives.

In Brazil5, stealing data for ransom has become the main focus of cybercrime groups. According to the country’s cybersecurity agency, data theft constitutes the majority of cybercrime incidents6, with almost 5,000 cases recorded in the first eight months of 2025.

But these trends are not limited to the American continent: in South Africa7, impersonation cases have been on the rise since 2021.

Asia has a long history of criminal groups specializing in online scams. Chinese crime syndicates have established entire scam cities in Myanmar, Laos, and Cambodia, where specialized workers from South Asia, Africa, and Southeast Asia are attracted with fake job postings and forced to conduct online fraud, usually targeting victims in Western countries.

Deepfake scams have massive human and economic costs

It is believed that in Myanmar alone, more than 100,000 people8 are being held captive and forced to work in scam cities. In Southeast Asia alone, more than 43 billion USD9 is lost every year to online scams, and the use of AI to write convincing scripts and create fake personas10 is making this business ever more profitable.

The targeting and automation enabled by AI make this type of attack extremely efficient. Whereas the most vulnerable targets were once users with limited digital literacy, the technology has become convincing enough to deceive even trained professionals. In February 2024, the Hong Kong office of a multinational company fell for a scam11 in which a senior executive was impersonated with a deepfake during a video meeting, costing the firm 25.6 million USD.

But deepfakes can also be used for politically driven cyberattacks12, spreading misinformation, or stealing sensitive government data.

An urgent need for AI regulation

The magnitude of this issue raises a crucial question: how can a criminal business that operates across borders and moves through the digital world be countered?

An approach based on multilateral cooperation is already the global standard for countering more traditional forms of transnational organized crime. The most important example is the United Nations Convention against Transnational Organized Crime (UNTOC)13, adopted in 2000.

In the context of AI, however, the main area of focus must be the private sector, considering that deepfake software and Generative Artificial Intelligence are generally developed legally by companies. In this field, technology is moving faster than legislation, allowing firms to operate with limited accountability.

Free market advocates have long believed that innovation and technological advancement thrive in an environment of minimal state regulation, especially for major tech companies. Yet we have already seen the nefarious effects of this lack of oversight.

Governments around the world are now moving towards stricter regulation: at the beginning of 2023, China introduced its new legislation on deepfakes14, prohibiting the production of content without user consent.

In 2024, it was the turn of the European Union with the launch of the AI Act15, a legal framework that aims to harmonise rules on AI development and use in the EU.

Even more ambitious is the United Nations Convention against Cybercrime16, planned to be signed at the end of October 2025 and intended to facilitate global cooperation in combating cybercrime. The treaty has not been spared criticism, with concerns17 that it could facilitate human rights violations and government data collection, harming citizens’ privacy.

But while governments and international institutions (rightfully) discuss the risks of abuse and misuse of regulation, criminal syndicates continue to extract massive profits from cybercrime, operating above ethics and law. In the end… business is business, right?

References

  1. Europol. (2025). The changing DNA of serious and organised crime: EU Serious and Organised Crime Threat Assessment 2025. https://www.europol.europa.eu/cms/sites/default/files/documents/EU-SOCTA-2025.pdf
  2. Caldwell, M., Andrews, J. T. A., Tanay, T., & Griffin, L. D. (2020). AI-enabled future crime. Crime Science, 9(1), 14. https://doi.org/10.1186/s40163-020-00123-8
  3. Ramírez Mendoza, S. (2023, July 16). Clonación de voz para estafar con inteligencia artificial: ¿cómo funciona esta modalidad y qué recomendaciones seguir? El Comercio. https://elcomercio.pe/lima/clonacion-de-voz-para-estafar-con-inteligencia-artificial-como-funciona-esta-modalidad-y-que-recomendaciones-seguir-inseguridad-deepfake-ciberdelincuencia-hackers-secuestros-noticia/?ref=ecr
  4. Noticias Telemundo. (2023, July 16). Clonación de voz para estafar con inteligencia artificial [Video]. YouTube. https://www.youtube.com/watch?v=7e7ML7r09uA
  5. InSight Crime. (2023, July 16). Kidnapping data for ransom is a booming business in Brazil. InSight Crime. https://insightcrime.org/news/kidnapping-data-for-ransom-is-a-booming-business-in-brazil/
  6. Centro de Tratamento e Resposta a Incidentes Cibernéticos do Governo (CTIR Gov). (2025, September 1). Visão geral – CTIR Gov Em Números. https://www.gov.br/ctir/pt-br/assuntos/ctir-gov-em-numeros/visao-geral-capa
  7. Sigsworth, R. (2023, June 29). AI and organised crime in Africa. ENACT Africa. https://enactafrica.org/enact-observer/ai-and-organised-crime-in-africa
  8. Ratcliffe, R. (2025, September 8). Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people. The Guardian. https://www.theguardian.com/global-development/2025/sep/08/myanmar-military-junta-scam-centres-trafficking-crime-syndicates-kk-park
  9. United States Institute of Peace. (2024, May). Transnational crime in Southeast Asia: A growing threat to global peace and security. https://www.usip.org/sites/default/files/2024-05/ssg_transnational-crime-southeast-asia.pdf
  10. CNN. (2025, April 2). Myanmar scam center crackdown: Thousands of victims freed, but many remain stuck. https://www.cnn.com/2025/04/02/asia/myanmar-scam-center-crackdown-intl-hnk-dst
  11. South China Morning Post. (2024, February 4). ‘Everyone looked real’: Multinational firm’s Hong Kong office loses HK$200 million after scammers stage deepfake video meeting. https://www.scmp.com/news/hong-kong/law-and-crime/article/3250851/everyone-looked-real-multinational-firms-hong-kong-office-loses-hk200-million-after-scammers-stage
  12. Author(s). (2023). Artificial intelligence and political deepfakes: Shaping citizen perceptions in the digital age. Journal of Asian Security and International Affairs. https://doi.org/10.1177/09732586241277335
  13. United Nations Office on Drugs and Crime. (n.d.). United Nations Convention against Transnational Organized Crime (UNTOC). https://www.unodc.org/unodc/en/organized-crime/intro/UNTOC.html
  14. Hemrajani, A. (2023, March 8). China’s new legislation on deepfakes: Should the rest of Asia follow suit? The Diplomat. https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/
  15. European Commission. (n.d.). AI Act: Regulatory framework for artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  16. United Nations Office on Drugs and Crime. (n.d.). United Nations Convention against Cybercrime. https://www.unodc.org/unodc/cybercrime/convention/home.html
  17. Brown, D. (2024, December 30). New UN cybercrime treaty primed for abuse. Human Rights Watch. https://www.hrw.org/news/2024/12/30/new-un-cybercrime-treaty-primed-abuse

Join the Discourse

Your Opinion matters.

Share Your thoughts in the comments!


Uncover More Insights on Digital Peace

Want a digital world worth living in?

Get monthly insights on technology, peace, global security, and the future of humanity. 
