The BBC has officially fired a warning shot at Perplexity AI.
The UK’s national broadcaster sent a legal letter to Perplexity CEO Aravind Srinivas, accusing the startup of using its content to train AI models without permission.
According to the BBC, its news material was used without a licensing agreement, and the organization now wants either financial compensation or for its content to be deleted entirely.
This move marks the BBC’s first major legal step in the growing global fight over how AI companies source their training data.

What’s the Big Issue Here?
Many AI tools rely on large amounts of online data to train their models, including news content.
But not all of that content is used with permission.
- The BBC alleges its content was scraped and reused word-for-word
- Perplexity calls the claims “manipulative and opportunistic”
- The BBC demands compensation or the deletion of its content
- An injunction is on the table if those demands aren’t met
The BBC argues that Perplexity’s AI system reproduces its content in ways that directly compete with the broadcaster’s own news services.
In short, it’s like someone copying your homework and then using it to win awards.
Tension Grows Over Copyright in AI
This is not an isolated incident.
More news outlets are drawing the line when it comes to AI companies using their work without a deal in place.
Back in October, Dow Jones (owner of The Wall Street Journal) sued Perplexity for similar reasons, calling it a “massive amount of illegal copying.”
And the concern isn’t just about news organizations losing traffic; it’s about losing control and value.
If AI tools surface summaries or answers without linking back to the original publisher, what happens to the journalism industry?
Industry Wants Opt-In Rules, Not Free Access
Media leaders like BBC Director-General Tim Davie are calling for an opt-in system, in which AI firms must ask permission before using content.
Davie warned that without stronger protections for intellectual property (IP), the UK’s creative industry could face serious damage.
“The value is in the IP,” he said. “We need to protect it.”
Other publishers have already taken action:
| Publisher | AI Partner / Legal Action |
|---|---|
| Financial Times | Licensing deal with OpenAI |
| News Corp (WSJ) | Sued Perplexity |
| Reuters | Deal with Meta |
| Daily Mail Group | Deal with ProRata.ai |
Perplexity’s Side of the Story
Perplexity says it doesn’t train its own large language models the way OpenAI or Google do. Instead, it builds tools that let users query existing models.
It argues that this makes it fundamentally different and that the BBC misunderstands how the tech works.
Still, the BBC claims it has clear proof that its copyrighted material appears verbatim in Perplexity’s outputs.
What’s Happening in the UK?
Things are also heating up at the policy level. Initial proposals in the UK hinted that tech companies might be allowed to scrape content unless publishers opt out.
That sparked backlash from the creative sector.
Culture Secretary Lisa Nandy has since walked that back, promising that any new rules won’t harm creators. She emphasized that “people must be paid for their work.”
So while the legal landscape remains uncertain, one thing is clear: the pressure on AI firms to play fair is growing.
The Bigger Picture
We’re seeing a major shift in how content rights are handled in the age of generative AI.
As AI tools like Perplexity, ChatGPT, and others reshape how we access information, traditional publishers are demanding a seat at the table, and a share of the value.
This moment might mark the beginning of a new playbook for content licensing in AI, one that relies not just on what’s technically possible but on what’s ethically fair.