
Google has confirmed that it uses a portion of YouTube’s massive video catalog to train its AI models, including Gemini and the new Veo 3. The company told CNBC that while not all videos are used, the practice aligns with YouTube’s long-standing strategy of improving its services with platform content. A YouTube spokesperson stated that the company has always used user uploads to enhance its products and continues to implement safeguards to protect creator likeness. However, the revelation has sparked fresh concern among creators and copyright advocates, many of whom were unaware that their videos might be feeding advanced generative AI systems.
Scale of Training and Creators’ Growing Alarm
YouTube’s vast video library provides an unrivaled dataset for training generative AI. Analysts estimate that even 1 percent of YouTube’s roughly 20 billion videos would translate to more than 2.3 billion minutes of footage, an unprecedented volume compared with most training datasets. Although YouTube points to its terms of service as a form of transparency, many creators and media companies were unaware that their content was being used in this way. CNBC spoke with several individuals who were shocked by the disclosure, particularly given Veo 3’s capacity to produce high-quality video content. Some fear that such tools could ultimately undermine their livelihoods.
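As a quick sanity check of the figures above, a minimal sketch; the ~11.5-minute average video length is an assumption chosen to match the article’s numbers, not a figure reported in it:

```python
# Back-of-the-envelope check: 1 percent of ~20 billion videos at an
# assumed ~11.5-minute average length yields ~2.3 billion minutes.
total_videos = 20_000_000_000   # ~20 billion videos on YouTube
sample_fraction = 0.01          # 1 percent used for training
avg_minutes = 11.5              # hypothetical average video length

sampled_videos = total_videos * sample_fraction
total_minutes = sampled_videos * avg_minutes
print(f"{sampled_videos:,.0f} videos ~= {total_minutes / 1e9:.1f} billion minutes")
```

Any plausible average length in the 10–12 minute range produces a figure in the same ballpark, which is the point: even a tiny slice of the catalog dwarfs typical video training corpora.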
Luke Arrigoni, CEO of digital identity firm Loti, noted that creators may be unintentionally helping train systems that compete with them directly. This concern grows amid broader anxieties about generative AI replacing original voices and eroding the value of human creativity. Critics argue that using this content without consent, credit, or compensation, despite being technically covered under YouTube’s terms of service, amounts to exploitation. They warn of a coming wave of disputes if platforms and AI developers continue to prioritize scale over fairness. YouTube, meanwhile, has pointed to initiatives with agencies like Creative Artists Agency to manage AI-generated likenesses for select top talent, though many everyday creators remain excluded.
Legal Grey Zones and Licensing Loopholes
The practice of training AI on creator content without explicit permission sits in a murky legal and ethical space. YouTube’s terms of service allow for broad reuse: by uploading, users grant the platform a worldwide, royalty-free license to use their content. But critics argue that creators did not foresee their videos being used to teach AI models that may one day replace them in commercial settings. The fear isn’t just about replication; it’s about competition. AI-generated videos could reduce demand for human-made content, and there’s currently no mechanism for creators to be compensated for their indirect contributions.
Dan Neely, CEO of Vermillio, which helps manage digital rights and likenesses, emphasized that most users simply aren’t aware of what they’ve agreed to. YouTube has partnered with entities like Creative Artists Agency to help high-profile creators identify and manage misuse, but these programs don’t scale to the broader creator ecosystem. With Veo 3 generating realistic, stylistically varied videos, the stakes are higher than ever. Creators worry that their unique style, voice, and techniques are being absorbed into models that will output polished content faster and cheaper, without attribution or licensing. As generative AI matures, expect growing calls for regulation, transparency, and fair compensation across the creator economy.
The Road Ahead: Consent and Compensation in Question
The central concerns remain consent, ownership, and fairness in the use of YouTube content to train AI systems such as Gemini and Veo 3. Though legally shielded by its terms of service, YouTube finds itself in an increasingly heated standoff with creators who feel blindsided by its AI ambitions. The boundary between innovation and exploitation grows thinner as AI-generated media becomes more sophisticated and prolific. Unless platforms like YouTube introduce a transparent opt-out mechanism or similar measures to address the dispute, tensions are likely to keep building and may escalate into lawsuits.