As the world watches the tense negotiations surrounding the ongoing Russia-Ukraine conflict, a disturbing audio clip has gone viral. The fabricated recording features Donald Trump Jr. appearing to support Russia’s actions against Ukraine, and it has prompted outrage and renewed concern over misinformation, a threat that is especially acute in sensitive arenas like politics.
What the Fake Audio Claims
In the recording, a voice resembling that of Donald Trump Jr. is heard making controversial remarks. The audio clip includes statements such as, “I can’t imagine anyone in their right mind picking Ukraine as an ally when Russia is the other option” and “Honestly, the U.S. should have been sending weapons to Russia.”
These claims, which endorse Russia’s position, quickly spread across social media platforms, drawing attention from political groups and the general public alike. The audio was posted alongside a video showing a Spotify page supposedly promoting Trump Jr.’s podcast.
The post was shared more than 6,600 times on X (formerly Twitter) and garnered more than 1.8 million views. When traced, it was found to have originated with pro-Russian accounts on Telegram, a messaging platform popular among political extremists.
The Threat of AI-Generated Misinformation
Experts in synthetic media and artificial intelligence are warning about the increasing prevalence of AI-generated content in politics. Hany Farid, a renowned expert on deepfakes and the Chief Science Officer at GetReal Labs, was quick to point out that the audio was almost certainly generated by AI.
His team ran the clip through sophisticated models designed to detect such fakes and confirmed that it was an artificially created piece of media.
“We’re seeing more and more examples of these types of deepfakes, and it’s only going to become harder for the average person to tell what’s real and what’s fake,” Farid said. “The danger lies in how easily these fake pieces of media can be spread, especially when they’re politically charged.”
The Spread of Fake Content
The audio was shared on multiple platforms. Most concerning, it was reposted by FactPostNews, an official rapid-response account linked to the Democratic Party. Once the error was discovered, the post was swiftly removed, and a spokesperson for the Democratic National Committee (DNC) expressed regret over the mistake.
“It was a clear violation of our policy against spreading misinformation,” the spokesperson said. “As soon as we were made aware of the inauthentic nature of the audio, we took down the post and launched an internal review.”
Despite the removal, the damage had been done: the clip had already reached thousands of users, many of whom believed it to be genuine.
Trump Jr. Denounces the Fake Audio
Donald Trump Jr. himself has publicly denounced the audio, calling it a fabrication. A spokesperson for the Trump family released a statement declaring that the recording was entirely AI-generated and had no connection to Trump Jr. whatsoever.
“The audio in question, which was amplified by the official X account of the DNC, is 100% fake,” the spokesperson said. “It’s an AI-generated deepfake, and we strongly condemn its use for spreading misinformation.”
Further investigation revealed that the Spotify page shown in the post did not contain the episode in question. A spokesperson for Spotify confirmed that the episode was not available on their platform at the time the video was posted.
The episode could, however, be found on Rumble, another platform that hosts Trump Jr.’s podcast, but it did not contain the controversial audio, further confirming that the content was fabricated.
The Impact of AI-Generated Content in Politics
This incident is far from isolated. Political figures, including President Trump, have faced backlash in the past for sharing AI-generated content. Just days before the controversy surrounding Trump Jr.’s fake audio, President Trump posted an AI-generated video about the war in Gaza.
The video depicted fabricated beach scenes in Gaza, complete with an imagined Trump Tower, and was quickly criticized for its misleading portrayal of events. “I don’t know anything about these images or videos except that someone else created them,” Trump said in a later interview. “AI is always very dangerous in that way, and we need to be careful about how it’s used.”
The spread of deepfakes and other AI-generated content is raising important questions about the future of media and political communication. As technology advances, it becomes increasingly difficult to distinguish between what’s real and what’s fake, leaving the public vulnerable to manipulation.