
The BBC has issued a legal warning to AI startup Perplexity, accusing it of scraping BBC content to train its default AI model. In a letter to CEO Aravind Srinivas, the BBC demanded an immediate halt to the practice, deletion of stored content, and a financial compensation proposal. This marks the broadcaster’s first move against AI firms over content use, as it seeks to protect its IP amid funding pressures. Perplexity, backed by Jeff Bezos, dismissed the accusations as “manipulative and opportunistic,” arguing the BBC misunderstands AI and is attempting to preserve Google’s dominance in digital content delivery.
Content Use, Licensing Talks, and Legal Concerns
The BBC claims that Perplexity's platform has reproduced its articles verbatim and surfaced newly published content in search results, letting users bypass BBC services entirely. This, the broadcaster argues, creates direct competition while damaging its credibility and undercutting user trust, especially given the BBC's reputation for impartial journalism. An internal BBC review in December found that 17% of Perplexity's responses using BBC material contained factual errors, poor sourcing, or missing context.
With charter renewal talks looming, the BBC is exploring potential licensing deals with Big Tech firms like Amazon to monetize its vast content library. Such agreements could generate much-needed revenue as it faces budget constraints. According to the Financial Times, BBC executives are increasingly alarmed by how their publicly funded material is being repurposed by AI companies without permission or compensation.
Perplexity counters that it does not train foundational models but refines Meta’s Llama-based system. While it has entered revenue-sharing agreements with Time, Fortune, and Der Spiegel, it is currently facing lawsuits from News Corp and has received legal notices from outlets like the New York Times. The BBC, however, has now begun registering its content with the US Copyright Office, strengthening its position for seeking statutory damages.
Broader Implications for AI, Media, and IP Law
This dispute underscores rising tensions between media outlets and AI developers as generative tools grow in popularity. Perplexity's user base of more than 30 million people, concentrated in the US, reflects consumer appetite for fast, AI-powered news summarization. But that utility may come at a steep cost to original content producers. For organizations like the BBC, which rely on public trust and regulatory backing, misuse of their journalism by AI systems risks undermining both brand integrity and future revenue streams.
Perplexity’s stance highlights a larger battle over fair use, IP rights, and the role of public information in AI training. The company argues that the BBC’s actions are aimed at propping up outdated distribution models and that the broadcaster is misapplying intellectual property law. However, critics argue that AI systems offering summarization without proper attribution or licensing create a disincentive for quality journalism, especially when public trust and accuracy are at stake.
As litigation mounts, such as The New York Times' case against OpenAI and Microsoft, this conflict could push regulators to set clearer limits on AI content use. In the meantime, even publicly funded outlets such as the BBC appear ready to push back against unlicensed scraping, signaling that the practice will no longer escape scrutiny.
What Comes Next for BBC vs. Perplexity?
The BBC’s warning to Perplexity could mark the beginning of a wave of enforcement actions from media outlets seeking compensation from AI firms. While some companies are pursuing licensing and partnership agreements, others, such as the BBC, are resorting to legal threats to safeguard their intellectual property. With Perplexity reportedly valued at over $14 billion, the stakes are high, not only for the startup itself but for the broader ecosystem of AI-powered content platforms. As pressure on regulators builds, this case could help set the terms of the emerging balance between AI innovation, media sustainability, and ethical use of content.