Matthew McConaughey Draws a Line to Protect His Voice and Image From AI
The move highlights growing efforts by public figures to control how artificial intelligence uses their likeness, affecting artists, advertisers, and consumers exposed to AI-generated media.
Actor Matthew McConaughey has taken formal legal steps to protect his image and voice from unauthorized use by artificial intelligence systems, according to U.S. trademark records and a statement from his representatives.
The filings, submitted through the commercial arm of the Just Keep Livin Foundation, appear in the public database of the United States Patent and Trademark Office and include recorded audio and visual materials associated with the Oscar-winning performer.
The filings were confirmed on Wednesday and mark one of the most proactive uses of federal trademark law by a major Hollywood actor in response to generative AI.
The development comes as AI tools capable of producing realistic voice clones and digital likenesses are increasingly used in advertising, entertainment, and online content.
While several U.S. states have enacted laws targeting malicious deepfakes, legal protections against non-consensual but non-criminal uses remain limited.
Trademark law, which focuses on preventing consumer confusion and unauthorized commercial exploitation, has emerged as one of the few immediately available legal mechanisms for public figures seeking control over how their identities are used in AI-driven media.
What McConaughey Actually Filed and What It Protects
According to publicly accessible USPTO records, the filings consist of sound and image-based trademarks rather than patents. Trademarks are designed to protect symbols, names, sounds, or images that identify the source of goods or services in commerce.
In McConaughey’s case, the filings cover specific audio recordings of his voice and visual representations of his likeness that could be used to signal endorsement or origin in commercial contexts.
Unlike copyrights, which protect creative works, or patents, which protect inventions, trademarks are aimed at preventing consumer confusion about the source or endorsement of goods and services.
If approved, these registrations would allow McConaughey or his representatives to challenge unauthorized commercial uses of AI-generated content that could mislead consumers into believing he endorsed a product, service, or platform.
The filings were made through an entity connected to the Just Keep Livin Foundation, the nonprofit McConaughey co-founded in 2008 with his wife, Camila Alves McConaughey.
While the foundation itself focuses on youth health and wellness programs, its commercial arm has been used for brand management and licensing activities tied to McConaughey’s professional identity.
This strategy does not grant exclusive ownership over McConaughey’s face or voice in all contexts. Editorial use, parody, and non-commercial expression are generally protected under U.S. law.
However, it can strengthen enforcement options when AI-generated likenesses are used in paid advertising or other revenue-generating activities.
How Trademark Law Applies to AI-Generated Likenesses
Under U.S. law, trademark infringement occurs when a protected mark is used in a way that is likely to cause consumer confusion about the source or endorsement of goods or services.
Courts have previously recognized sound marks, including distinctive audio cues and voices, as eligible for protection if they function as identifiers.
AI complicates this framework by enabling the creation of synthetic voices or images that closely resemble real individuals without copying a specific copyrighted recording.
By applying to register authenticated samples of his voice and image, McConaughey establishes a clear reference point for what constitutes his protected identity in commerce.
Legal experts note that this approach may be particularly relevant in advertising, where AI-generated celebrity likenesses could be used to promote products without consent.
In such cases, trademark claims can complement state-level right-of-publicity laws, which vary widely across jurisdictions.
At the federal level, there is no comprehensive statute specifically governing AI-generated likenesses.
As a result, creators and performers often rely on a patchwork of trademark, copyright, and state publicity rights to protect their interests.
Statements and Reaction From McConaughey’s Representatives
Attorney Kevin Yorn, who represents McConaughey, said the filings are intended to ensure clients receive protections comparable to those available to their businesses.
He added that the strategy is also designed to allow artists to participate in the economic value generated by new technologies rather than simply reacting to misuse after it occurs.
The comments reflect a broader shift in how entertainers and their legal teams approach AI. Rather than opposing the technology outright, many are seeking frameworks that allow licensed, consensual use while preventing unauthorized exploitation.
McConaughey himself has previously expressed openness to technological innovation. He has taken an ownership stake in ElevenLabs, a startup specializing in AI voice synthesis, and the company has produced an AI-generated audio model of his voice with his explicit permission.
That arrangement underscores the distinction between licensed AI use and unapproved replication.
Industry reaction has been mixed but attentive. Performers’ unions, including SAG-AFTRA, have raised AI-related concerns in recent contract negotiations, particularly around digital replicas and voice cloning.
While no union statements were directly tied to McConaughey’s filings, the move aligns with ongoing industry efforts to clarify consent standards.
How AI Likeness Disputes Affect Artists and Consumers
Concerns over AI-generated likenesses have moved from theory to real-world disputes.
In 2023, actress Scarlett Johansson brought legal action against the developer of the Lisa AI app after an advertisement featured an AI-generated avatar that closely resembled her without consent.
The case drew broad attention because it showed how easily AI tools could be used to imply celebrity involvement or endorsement in commercial content, even when no permission had been given.
Similar issues have since been reported by other performers who discovered synthetic versions of their voices appearing in online videos, audiobooks, or advertisements.
Some cases involve clear deception or commercial misuse, while others fall into legal gray areas where existing laws offer limited or inconsistent remedies.
In response, several U.S. states have updated right-of-publicity statutes to address digital replicas, though protections vary widely by jurisdiction. At the federal level, lawmakers have discussed proposals, but no comprehensive nationwide framework governing AI-generated likenesses has been enacted.
For consumers, the spread of AI-generated voices and images raises questions about authenticity and trust, particularly in advertising.
Celebrity endorsements can influence purchasing decisions, and synthetic media can blur the line between genuine promotion and fabricated association.
Trademark-based protections are designed to reduce misleading commercial content by giving rights holders clearer grounds to challenge unauthorized uses.
While these tools cannot eliminate all deceptive AI media, they may discourage advertisers and platforms from using unlicensed likenesses and reinforce the importance of transparency as AI-generated content becomes more common across digital platforms.
Data and Regulatory Context Shaping the Issue
According to USPTO data, applications for non-traditional trademarks, including sound marks, have increased steadily over the past decade as branding strategies expand beyond logos and names.
This trend reflects broader recognition that voices and audiovisual cues can function as commercial identifiers.
At the state level, Tennessee enacted the Ensuring Likeness, Voice, and Image Security (ELVIS) Act in 2024, expanding protections for musicians against unauthorized AI voice cloning.
The law was designed to address gaps in existing right-of-publicity statutes and has been cited as one of the most targeted state responses to AI impersonation.
Federally, the U.S. Copyright Office has launched studies examining how AI-generated content interacts with existing copyright law, while members of Congress have introduced bills aimed at transparency and disclosure.
None, however, create a unified framework for likeness protection.
Practical Limits of the Trademark Approach
While trademark filings can strengthen enforcement, they are not a universal solution. Trademarks apply primarily to commercial uses and require proof of likelihood of confusion.
Non-commercial speech, satire, and certain artistic uses are generally protected by the First Amendment.
Additionally, trademark enforcement often requires monitoring and legal action, which can be costly and time-consuming. Smaller creators may lack the resources to pursue similar strategies, raising questions about unequal access to protection.
Nevertheless, for high-profile figures with established commercial brands, trademarks can serve as a deterrent and a signal to advertisers and platforms that unauthorized use will be challenged.
Key Questions Answered
Is Matthew McConaughey patenting his image or voice?
No. The filings are trademark registrations, not patents. Trademarks are used to protect identifiers in commerce, such as names, images, or sounds, rather than granting ownership over a person’s likeness or identity.
Can AI legally copy a celebrity’s voice or image?
In some cases, yes. U.S. law allows certain non-commercial, editorial, or expressive uses, but unauthorized commercial use that implies endorsement can trigger trademark or publicity-rights claims.
Why are trademarks being used instead of AI-specific laws?
There is no comprehensive federal law regulating AI-generated likenesses. Trademarks offer an existing legal pathway to challenge misleading commercial uses while broader AI regulation remains under discussion.
Have other celebrities challenged AI likeness use?
Yes. Several performers have taken legal action or raised disputes over unauthorized AI-generated voices or images, particularly in advertising. Most cases rely on existing trademark, publicity, or consumer-protection laws.
Does this affect everyday people using AI tools?
Generally no. Personal or non-commercial use typically falls outside trademark enforcement unless the content is sold, monetized, or suggests a real person’s endorsement.
What the Trademark Review Will Involve
The applications will now be examined by the U.S. Patent and Trademark Office under its normal review process.
That includes assessing whether the voice and image materials genuinely function as identifiers tied to commercial activity and checking for conflicts with existing trademarks.
Examiners can ask for additional clarification, and once the review reaches the publication stage, other parties are allowed to file formal objections during a public opposition window.
If the applications are approved, the resulting registrations can be used to challenge unauthorized commercial uses that suggest endorsement or affiliation. The filings do not change any laws and are handled entirely within existing trademark rules.
Taken together, the process reflects how established intellectual property systems are being stretched to address new realities created by generative AI.
As voices and images become easier to replicate and distribute, questions about consent and authenticity are no longer limited to celebrities but affect advertisers, platforms, and consumers who rely on clear signals of what is real.
With no single federal law governing AI-generated likenesses, decisions made through trademark reviews and related disputes are increasingly shaping how identity and trust are protected in the digital marketplace.