How Cybercriminals Weaponize AI to Launch Convincing Deepfake Phishing Attacks

Updated: January 19, 2026

Reading Time: 3 minutes

Artificial intelligence is quietly becoming one of the most powerful tools for cybercriminals. In late 2025, researchers uncovered a phishing campaign attributed to the North Korean threat group Kimsuky, which used ChatGPT to generate realistic military ID cards embedded directly into phishing emails, allowing the attackers to convincingly impersonate government entities.

With AI tools getting cheaper, better, and more accessible, deepfake attacks of this kind are expected to grow rapidly. The trend has already begun: the first quarter of 2025 alone saw more deepfake phishing and fraud incidents than all of 2024.

So how exactly do criminals build these highly convincing deepfakes, and why are they more effective than traditional phishing? Let’s find out.

How Criminals Build a Realistic Deepfake in Minutes

Building a hyper-realistic deepfake is easier than ever. Open ChatGPT or Gemini today and ask it to generate a fictional ID card, and it can produce an image that is nearly indistinguishable from a real photograph. That’s exactly what criminals are doing, but it’s just the tip of the iceberg.


The most advanced deepfake impersonations rely on audio and video cloning. Thanks to the abundance of material that executives, government officials, and other public figures post across social media, criminals have all they need to create the perfect deepfake.

With just 30 to 60 seconds of clean audio, modern voice-cloning models can generate a realistic replica of someone’s voice. Video deepfakes take it a step further. While they require more training material, they allow attackers to completely recreate a person’s likeness, all the way down to facial expressions and lip movements.

The output can be distributed as a pre-recorded message or even used during a live video or voice call, where real-time interaction makes the impersonation far more convincing and difficult to challenge.

What once took criminals months and thousands of dollars to build can now be done with a $20 subscription to a mainstream AI model. All they need to do is feed the model the material and a few prompts, and they get the ultimate phishing weapon. There are even dedicated phishing-as-a-service kits that run the entire campaign for malicious actors on a fully automated basis.

The Types of Deepfake Phishing Attacks Organizations Face

Attackers weaponize deepfakes in several ways. Typically, the attack is a direct, urgent request that appears to come from a figure of authority. Common requests include approving a payment, sharing credentials, or bypassing an established process to the attackers’ benefit.

This is nothing new, but when an employee receives a voice note from what sounds like their boss, they are far more likely to act on it than if they had received a plain text message.

More calculated deepfake attacks may involve synthetic personas: entirely AI-generated identities with a complete digital presence, including email accounts, profile photos, voice, video, and a believable online history. These are particularly popular on LinkedIn, where attackers slowly build trust over weeks or months before initiating a scam.

Multi-Channel Delivery Methods

Just as important as building a deepfake is delivering it to victims. Attackers increasingly leverage multiple communication channels to reach targets in environments where such requests feel normal, trusted, and expected. A scam may start as an email and progress to a live Zoom video call, where the deepfake can push the victim into action.

Criminals typically deliver their scams on platforms where targets feel comfortable and may even expect such communication. That’s why email and LinkedIn are favored for the initial lure, while other popular communication tools may come into play later.

Artificial intelligence comes into play throughout the entire scam. It doesn’t just create the deepfakes; it also assists with the rest of the communication, from writing more convincing emails to crafting short messages that match the tone of the role or authority the attacker is impersonating.

Why Detection Tech Alone Doesn’t Save You

What makes deepfake attacks so dangerous is that they exploit the weakest link in any security program: human trust and decision-making. There is no technical safeguard against bad human choices. The only real solution is to build awareness through ongoing phishing simulation training that covers the exact deepfake scenarios and other scams employees may encounter.

Detection may help in the later stages of an attack, such as alerting on credential abuse. In other cases, like wire fraud, there may be no technical signal at all until the funds have already left the organization.

So, to effectively defend against modern deepfake attacks, prevention at the human decision point is critical.

Final Thoughts

Deepfake phishing attacks are the latest evolution of a threat that has challenged defenders for decades. By making scams more realistic and harder to detect, they place even greater pressure on what has traditionally been the weakest link in cybersecurity. With the right training and awareness, however, people can become the strongest link and the foundation of a resilient security culture across the organization.



Joey Mazars

Contributor & AI Expert