
Deezer is rolling out on-screen labels for albums containing AI-generated songs in a bid to tackle rising music fraud. The Paris-based streaming service reports that 18% of daily uploads, around 20,000 tracks, are now entirely AI-created, up from 10% just three months ago. CEO Alexis Lanternier says the platform must “safeguard artist rights” amid growing copyright concerns linked to AI training practices. The new labels warn listeners that AI tools generated the content, a move reflecting broader industry fears over the legal and creative implications of artificial intelligence in music production and royalty systems.
To Detect AI Songs, Deezer Uses AI Against AI
To flag AI-generated songs, Deezer uses the same type of generative AI tools that fraudsters employ. CEO Lanternier explains that these generators leave behind subtle but consistent signal patterns that distinguish them from human-made tracks. Deezer trains detection algorithms by feeding them songs made by leading AI tools, allowing the system to identify telltale features. This “fighting AI with AI” approach requires constant updates, as models evolve and new techniques emerge.
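Deezer has not published the details of its detection pipeline, but the idea the article describes, training a classifier on tracks produced by known generators so it learns their residual signal patterns, can be sketched briefly. The snippet below is a minimal illustration, not Deezer’s actual system: the folder names are hypothetical, and MFCC summary statistics with a logistic-regression classifier stand in for whatever proprietary features and models the platform really uses.

```python
# Minimal sketch of "fighting AI with AI": train a classifier on audio made by
# known generators so it learns the signal patterns that betray synthetic tracks.
# Hypothetical folder layout and feature choice; Deezer's real pipeline is not public.
from pathlib import Path

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split


def featurize(path: Path) -> np.ndarray:
    """Summarize a track as mean/std of its MFCCs (a crude 'signal fingerprint')."""
    y, sr = librosa.load(path, sr=22050, mono=True, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def load_dataset(ai_dir: str, human_dir: str):
    """Label 1 = AI-generated, 0 = human-made."""
    X, y = [], []
    for label, folder in ((1, ai_dir), (0, human_dir)):
        for path in Path(folder).glob("*.mp3"):
            X.append(featurize(path))
            y.append(label)
    return np.array(X), np.array(y)


if __name__ == "__main__":
    # Hypothetical corpora: outputs of leading generators vs. licensed human tracks.
    X, y = load_dataset("data/ai_generated", "data/human_made")
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
```

Because the generators themselves keep evolving, a detector built this way would need regular retraining on fresh synthetic material, which matches Lanternier’s point that the approach requires constant updates.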
According to Lanternier, AI-generated content is often designed to game the system. Fraudsters create massive volumes of music and stream it via bots or “streaming farms,” inflating play counts to collect royalties. Deezer estimates that 70% of plays for AI-generated songs come from bots rather than real listeners. To combat this, Deezer plans to withhold royalty payments from any track flagged as part of a stream-manipulation scheme.
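The royalty-withholding step can be pictured with a simple heuristic: if a track’s plays come overwhelmingly from bot-like accounts, exclude it from the payout pool. The threshold and fields below are hypothetical illustrations of that rule, not Deezer’s actual fraud logic.

```python
# Hypothetical illustration of withholding royalties from stream-manipulated tracks:
# flag a track when an excessive share of its plays is attributed to bot accounts.
from dataclasses import dataclass

BOT_SHARE_THRESHOLD = 0.70  # assumed cutoff, echoing the ~70% bot-play estimate


@dataclass
class TrackStats:
    track_id: str
    total_plays: int
    bot_plays: int  # plays attributed to streaming farms / bot accounts

    @property
    def bot_share(self) -> float:
        return self.bot_plays / self.total_plays if self.total_plays else 0.0


def payable_tracks(tracks: list[TrackStats]) -> list[str]:
    """Return IDs of tracks that remain eligible for royalty payouts."""
    return [t.track_id for t in tracks if t.bot_share < BOT_SHARE_THRESHOLD]


if __name__ == "__main__":
    catalog = [
        TrackStats("organic_hit", total_plays=1_000_000, bot_plays=20_000),
        TrackStats("farmed_ai_track", total_plays=500_000, bot_plays=450_000),
    ]
    print(payable_tracks(catalog))  # ['organic_hit']
```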
While only 0.5% of total streams on Deezer are currently AI-generated, the trend is accelerating. The platform’s new tagging and fraud detection strategy aims to stay ahead of the curve, ensuring AI tools don’t erode trust or revenues in the music ecosystem. The initiative underscores how AI is becoming both a weapon and a shield in the evolving music tech arms race.
AI’s Expanding Role and Legal Tensions in Music
The surge in generative AI use has triggered legal and ethical debates across the music industry. Tools like Suno and Udio are at the center of lawsuits from major record labels and royalty collection groups. These platforms are accused of producing music that mimics copyrighted works without authorization, which rights holders describe as intellectual property theft on a large scale. In Germany, the rights organization GEMA is suing Suno for generating tracks nearly identical to “Mambo No. 5” and “Forever Young.” Meanwhile, reports suggest that labels are negotiating potential compensation deals with AI music startups to prevent further legal fallout.
Deezer’s CEO argues that using AI in music isn’t inherently bad, as long as human artists are involved. But when anonymous creators or bots flood platforms with mass-produced, pattern-based songs just to collect royalties, the result is a harmful distortion of the streaming economy. Fraud, he says, is becoming a business model. To stay ahead, Deezer continuously retrains its detection tools on newly generated AI content. By identifying hidden structures in the audio, the system learns to separate human creativity from algorithmic mimicry. This evolving “AI-forensics” battle shows how deeply integrated artificial intelligence has become, not just as a creative force but as a regulatory challenge in modern music.
The Stakes of Streaming Fraud in the AI Era
AI-generated music is more than a novelty: it is now a tool for financial manipulation. Deezer’s crackdown reveals how synthetic songs, amplified by bots, can exploit royalty systems at scale. In one U.S. case, a fraudster earned $10 million by auto-streaming AI tracks billions of times. As generative tools proliferate, platforms like Deezer must act fast to protect integrity, revenue, and artist livelihoods. With AI now weaponized in streaming fraud, the company’s approach of detecting AI with AI may become a blueprint for digital rights enforcement in the next phase of the music industry’s evolution.