The BBC has issued a legal warning to U.S.-based artificial intelligence firm Perplexity, accusing the company of unlawfully reproducing its news content “verbatim” without permission. The broadcaster is demanding that the AI firm cease use of its material, delete any existing BBC content in its systems, and offer financial compensation for what it claims is unauthorised use.
This marks the first time the BBC has taken legal steps against an artificial intelligence company over copyright concerns, amid growing tension between content creators and AI developers over web scraping and data usage.
The BBC said in a letter addressed to Perplexity CEO Aravind Srinivas that the company’s actions amounted to copyright infringement in the UK and constituted a breach of the BBC’s terms of use. “It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences — including UK licence fee payers who fund the BBC — and undermining their trust in the BBC,” the letter stated.
The broadcaster also referred to a study it published earlier this year, which found that several AI chatbots — including Perplexity — often misrepresented or inaccurately summarised BBC news stories. The BBC said such flawed outputs failed to meet its editorial standards for impartiality and accuracy, further compounding reputational harm.
Perplexity has not publicly responded to the BBC’s claims. However, in a previous interview with Fast Company in June, Srinivas denied that the company’s bots ignored website instructions outlined in “robots.txt” files — a tool commonly used by publishers, including the BBC, to restrict automated scraping of their content.
Despite this denial, the BBC says Perplexity’s web crawlers are not complying with these restrictions. The BBC’s website explicitly disallows two of the company’s crawlers, yet the broadcaster alleges that the AI firm has continued to access and use its content regardless.
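For readers unfamiliar with the mechanism at issue, a robots.txt file is a plain-text file placed at the root of a website that asks automated crawlers, identified by their user-agent names, to stay away from specified paths. The sketch below is illustrative only; the bot names are hypothetical placeholders, not the BBC’s actual file or Perplexity’s actual user agents:

```
# Hypothetical robots.txt blocking two named crawlers
# from the entire site (names are illustrative):
User-agent: ExampleAIBot
Disallow: /

User-agent: ExampleAIBot-User
Disallow: /

# All other crawlers remain unrestricted:
User-agent: *
Disallow:
```

Crucially, robots.txt is a voluntary convention rather than a technical barrier: a crawler that ignores it can still fetch pages, which is precisely the behaviour the BBC alleges.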
The case adds to a broader debate over how AI companies source training data and deliver real-time content. Tools like Perplexity’s chatbot, which it markets as an “answer engine,” scour the web for information and synthesize responses for users — often drawing on news sites and publishers without formal licensing agreements.
The situation highlights a growing tension between media organisations and AI firms over intellectual property rights. British media outlets and creative industries have recently called on the UK government to reinforce protections for copyrighted content in light of expanding AI capabilities.
In January, Apple was also forced to suspend an AI-generated headline feature on its iPhones after it incorrectly summarised BBC News articles, prompting a separate complaint from the broadcaster.
As AI technologies continue to evolve, the BBC’s legal challenge could serve as a key test case in defining the boundaries of fair use, copyright, and digital media rights in the age of generative AI.
