What States Are Deepfakes Illegal In? Is California One?
Key Takeaways
- Many U.S. states have made certain deepfakes illegal, but the rules vary widely by state and by use.
- Deepfakes tied to elections and non-consensual sexual content are the most commonly regulated.
- California has some of the strongest disclosure and civil-liability rules for synthetic media.
- There is still no single federal law that makes all deepfakes illegal.
- Proposed federal AI rules, including Trump’s “One Rule for AI,” signal a shift toward broader accountability, but state law still controls most injury claims today.
What Does It Mean for a Deepfake to Be “Illegal”?
Not all deepfakes are illegal. In the U.S., whether a deepfake crosses the legal line depends on how it is used, who is harmed, and which state’s law applies. Most laws do not ban AI-generated media outright. Instead, they target specific harms, such as deception, coercion, sexual exploitation, or election interference.
That distinction matters, especially in civil cases involving reputational damage, emotional distress, fraud, or personal injury.
Which States Have Laws Targeting Election Deepfakes?
Several states have enacted laws aimed at preventing deepfakes from being used to manipulate elections or mislead voters.
These laws generally focus on timing, intent, and disclosure, rather than banning synthetic media outright.
Examples include:
- States that prohibit distributing materially deceptive synthetic media close to an election when intended to influence voters.
- States that require clear disclosures or disclaimers when AI-generated images, video, or audio are used in political content.
- States that criminalize the use of forged digital likenesses to defraud or intimidate during campaigns.
California falls squarely into this category, emphasizing transparency and disclosure rather than blanket bans.
How Does California Regulate Deepfakes?
California has taken a more nuanced approach than many states. Rather than criminalizing all deepfakes, California law focuses on:
- Requiring disclosures when synthetic media is used in political advertising
- Allowing civil remedies when AI-generated content causes harm
- Addressing impersonation and deceptive practices under existing fraud and unfair competition laws
This matters because many deepfake-related harms do not arise in elections at all. They arise in:
- Employment disputes
- Online impersonation
- Extortion or fraud
- Reputational attacks
- Emotional distress and harassment
California’s legal framework allows injured individuals to pursue civil claims, even when no criminal statute squarely applies.
Are Non-Consensual Sexual Deepfakes Illegal?
Yes. A growing number of states have passed laws making it illegal to create or distribute AI-generated sexual images or videos without consent.
These laws recognize that deepfakes can cause:
- Severe emotional distress
- Reputational harm
- Workplace consequences
- Long-term psychological injury
California already allows civil claims for non-consensual intimate imagery, and courts are increasingly willing to treat realistic AI-generated content the same way as manipulated real images.
For victims, this opens the door to personal injury–style claims, not just takedown requests.
Why Is There No Single Federal Deepfake Law?
Despite growing concern, there is still no comprehensive federal law that makes deepfakes illegal across the board.
Instead, the legal landscape is fragmented:
- States regulate specific harms
- Federal law addresses related issues indirectly, such as fraud, harassment, or civil rights violations
- First Amendment concerns limit how broadly speech can be regulated
That fragmentation is exactly why state law remains so important for victims seeking accountability today.
What Is Trump’s “One Rule for AI,” and Why Does It Matter?
The proposed “One Rule for AI,” expected to be part of a broader federal AI framework, reflects a growing recognition that AI accountability cannot rely on patchwork standards forever.
While details are still emerging, the core idea is to establish a single, overarching rule that places responsibility on the entity deploying AI systems, not just the developers.
If adopted, this would represent a shift away from today’s model, where:
- Harm is often blamed on “the algorithm”
- Responsibility is diffused across platforms, vendors, and users
- Victims struggle to identify who is legally accountable
For deepfake victims, that shift could eventually mean clearer paths to liability.
How Do Deepfake Laws Intersect With Personal Injury and Civil Liability?
Deepfake cases increasingly resemble traditional injury cases, even when the harm is digital.
Claims may involve:
- Emotional distress
- Defamation
- Fraud or impersonation
- Loss of employment or income
- Civil rights violations
In California, courts already analyze these harms through established legal doctrines. The involvement of AI does not change those doctrines; it simply raises new questions about foreseeability, duty, and control.
What Should California Residents Know Right Now?
If you are harmed by a deepfake in Los Angeles, the most important questions are not abstract policy debates. They are practical ones:
- Was the content deceptive?
- Was consent required and absent?
- Did it cause real, measurable harm?
- Who deployed or benefited from the AI system?
State law still governs most of these answers today, even as federal AI rules continue to evolve.
Harm Caused by a Deepfake? Accountability Still Matters
Deepfakes are a liability issue, and as AI-generated content becomes more realistic and more accessible, the law must catch up. Whether the harm occurs through a political ad, a fake voice recording, or a fabricated image, the core legal question remains the same:
Who had the responsibility to prevent foreseeable harm, and failed to do so?
You focus on protecting yourself and your future. The law is increasingly focused on making sure someone answers for the damage done. J&Y Law Firm focuses on fighting for compensation on behalf of those who were injured. Schedule a free, confidential consultation to learn more about us and how we can help you.
Call or text (877) 735-7035 or complete a Free Case Evaluation form