YouTube and CAA are partnering in the ongoing war to protect artists against the threat of AI. In a first-of-its-kind deal, talent under CAA will be able to identify and manage AI-generated content featuring their faces on YouTube at scale.
This will be possible thanks to YouTube’s early-stage likeness management technology. The partnership will also help develop and train the tech, as feedback from artists will allow YouTube to improve its detection systems and refine its controls. The ultimate goal is to make this technology widely available, giving artists more awareness of and control over how they’re being depicted on YouTube.
Testing will begin early next year and will include CAA clients ranging from award-winning actors to top athletes in the NBA and NFL. In addition to flagging AI-generated content that uses their likeness, the tool will give users easy access to removal requests through YouTube’s privacy complaint process.
“At YouTube, we believe that a responsible approach to AI starts with strong partnerships. We’re excited to collaborate with CAA, an organization that shares our commitment to empowering artists and creators,” Neal Mohan, CEO of YouTube, said in a Tuesday statement. “In the days ahead, we’ll work with CAA to ensure artists and creators experience the incredible potential of AI while also maintaining creative control over their likeness. This partnership marks a significant step toward building that future.”
“Neal Mohan and I started speaking earlier this year about the importance of creating a responsible AI ecosystem that protects artists and their IP rights, while unlocking new possibilities for creative expression,” Bryan Lourd, CEO and co-chairman of CAA, added. “At CAA, our AI conversations are centered around ethics and talent rights, and we applaud YouTube’s leadership for creating this talent-friendly solution, which fundamentally aligns with our goals. We are proud to partner with YouTube as it takes this significant step in empowering talent with greater control over their digital likeness and how and where it is used.”
This continues YouTube’s push to add more tools and safeguards to protect artists and creators from AI. In September, the company announced that it was developing synthetic-singing identification technology through Content ID that will allow partners to automatically detect and manage artificially generated vocal content. This came after an AI-generated song imitating Drake and The Weeknd gained a great deal of attention in 2023.
“Importantly, this is the first step of a larger testing effort. Over the next few months, we’ll announce new testing cohorts of top YouTube creators, creative professionals and other leading partners representing talent,” YouTube wrote in a blog post at the time teasing this newer development.
CAA also has a history of advocating for artist rights when it comes to the thorny issue of artificial intelligence. Last year, the top talent agency launched the CAA Vault, a talent-focused service that scans, captures and securely stores clients’ digital likenesses — including their faces, bodies and voices.
“[The Vault] focused on enabling our artists, our talent, to capture their digital likeness and their voice and be able to own that so our clients, who have gone through the vault, own their authorized, authenticated version of themselves,” CAA head of Strategic Development Alexandra Shannon said at TheGrill 2024. “They’re in control of it. We’ve created permissions around who can use it and how.”
Additionally, CAA has formed partnerships with technology companies such as Clear Angle Studios, Metaphysic, Deep Voodoo and Veritone in order to better protect artists. On the governmental side, CAA also works with policymakers, lawyers and other key stakeholders to help shape AI policies. Recently, CAA publicly supported the NO FAKES Act, a bipartisan proposal that would protect the vocal and visual likeness of all individuals from unauthorized AI recreations.