Birdella INSIGHTS
The Law Wasn’t Built For This
21 April 2026
YouTube made a significant announcement this week: its proprietary deepfake detection tool is now open to any public figure at risk of having their likeness misused. Actors, athletes, musicians, politicians, and creators can opt in — whether or not they have a YouTube channel — and use the system to identify and request removal of AI-generated replicas of themselves on the platform.
The tool has been in development for over three years and was quietly piloted with CAA before being expanded to politicians and, now, the wider public. It works on the same conceptual model as Content ID — YouTube’s long-established copyright management infrastructure — but instead of scanning for copyrighted creative works, it scans for a person’s face, voice, and likeness.
That distinction is more important than it might first appear.
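To make the conceptual model concrete: a Content ID-style likeness system enrols reference samples of a person's face or voice, converts them into numerical embeddings, and then compares each frame of uploaded footage against those references. The sketch below is a minimal, hypothetical illustration of that matching step using cosine similarity on toy vectors — the embedding model, the threshold value, and the function names are all assumptions for illustration, not YouTube's actual implementation.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_likeness(frame_embedding, reference_embeddings, threshold=0.92):
    # Flag the frame if it is close to ANY enrolled reference embedding.
    # The threshold is a hypothetical tuning parameter, not a known value.
    return any(
        cosine_similarity(frame_embedding, ref) >= threshold
        for ref in reference_embeddings
    )

# Toy 3-dimensional vectors standing in for real face-encoder output,
# which would typically be hundreds of dimensions.
enrolled = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]
suspect = [0.88, 0.12, 0.41]    # near the first reference
unrelated = [-0.5, 0.3, -0.7]   # points in a different direction

print(matches_likeness(suspect, enrolled))    # True
print(matches_likeness(unrelated, enrolled))  # False
```

The structural point the sketch captures is that nothing here depends on a copyrighted work: the system matches against a biometric signature, which is exactly why the Content ID comparison is conceptual rather than legal.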
The IP Framework Wasn’t Designed for This
Intellectual property law has always been organised around creative output. Copyright protects the film, the song, the photograph. Trademark protects the brand identity. Patent protects the invention. These categories developed over decades, through case law and legislation, in response to specific technological and commercial contexts.
A person’s likeness — their face, their voice, their characteristic mannerisms and appearance — doesn’t fit cleanly into any of those categories. It falls instead into the narrower, patchwork territory of personality rights and right of publicity: laws that vary enormously by jurisdiction, have limited international harmonisation, and were never designed to address the kind of scale that generative AI now makes possible.
The past six months have illustrated this gap starkly. When OpenAI’s Sora platform was flooded with AI-generated videos featuring recognisable actors and public figures, the legal response was muddled. When AI platform Seedance 2.0 produced viral videos showing Brad Pitt and Tom Cruise fighting — spreading globally within a day — MPA president Charles Rivkin called it “unauthorized use of U.S. copyrighted works on a massive scale.” But it wasn’t, in any conventional sense, a copyright infringement. No copyrighted work was reproduced. What was reproduced was a likeness.
Likeness as a Managed Asset
What YouTube is building — and what makes this announcement genuinely significant for anyone thinking about AI and IP strategy — is not merely a protection mechanism. It’s infrastructure for treating likeness as a form of property.
The comparison to Content ID is deliberate. Content ID doesn’t only enable removal of infringing content; it enables monetisation. Rights holders can choose to share revenue with uploaders rather than take down their content. YouTube’s chief business officer Mary Ellen Coe has confirmed that likeness monetisation is in the longer-term thinking for the new tool.
The talent industry is already thinking ahead of the platform. CAA has built what it calls the “CAA Vault” — a repository of client likenesses held for future monetisation opportunities. That’s a fundamentally different framing from protection. It’s asset management. It treats a person’s face and persona the way a publisher treats a catalogue of copyrighted works: something to be licensed, valued, and commercially leveraged.
Jason Newman, a partner at talent management firm Untitled Entertainment, put it plainly: “Their real estate is their face. Their real estate is their body. Their real estate is who they are, what they do, how they say it.”
That sentence encapsulates a shift that is happening in practice before it has happened in law.
The Broader Stakes
The entertainment industry provides the visible cases because its members are famous and the harms are dramatic. But the underlying dynamic is not confined to Hollywood.
Any professional whose public presence, personal brand, or visual identity carries commercial value is exposed to the same structural gap. As video generation models improve and the cost of producing credible deepfakes continues to fall, the ability to fabricate a realistic likeness of any person with a meaningful digital footprint becomes increasingly accessible.
Legislative responses are developing, but unevenly. The EU AI Act includes provisions relating to synthetic media labelling. Several US states have right of publicity laws with meaningful teeth. The UK is beginning to engage with personality rights questions in AI contexts. But the pace of policy development has consistently lagged the pace of technological capability.
In that gap, platforms are establishing norms. The systems YouTube is building now — detection, flagging, removal, and eventually monetisation — may do more to shape the eventual legal framework than the legislative process itself. That is how it worked in the streaming and social media eras: the deals and technical systems that platforms created in the absence of clear law ended up informing what the law became.
What Organisations Should Be Asking
For organisations navigating the current AI landscape, YouTube’s announcement is a prompt to think more carefully about likeness as an IP category — both in terms of risk and in terms of opportunity.
The questions worth sitting with are these: Do your contracts and talent agreements reflect the possibility that a person’s likeness may be reproduced, manipulated, or commercially exploited by AI systems? Do your IP policies address synthetic media? If you work with public figures, creators, or anyone whose identity has commercial value, do you have visibility into how their likeness is being used — and by whom?
YouTube’s Coe describes the tool as “fire insurance.” It’s a useful metaphor. But as the infrastructure for treating likeness as a managed IP asset continues to develop, the more strategic question is not just how to protect against harm — it’s how to position your organisation as the legal and commercial frameworks catch up with the technology.
That transition is underway. The organisations that engage with it now, rather than waiting for legislative clarity, will be better placed to shape what it looks like.
The Birdella Group provides Strategic Thought Leadership & Tech-Led Intelligence in AI & IP.
Our Thelonious platform delivers AI-powered, human-reviewed intelligence on the legal and policy developments shaping the AI era.