California Enacts New Laws to Combat AI-Generated Deceptive Election Content

Skadden Publication / AI Insights

Stuart D. Levi, Tyler Rosen, Priya R. Matadar, Shannon N. Morgan

California has enacted three new bills in an effort to curb the spread of misinformation and deceptive election content.

The bills — AB 2655, AB 2839 and AB 2355 — aim to strengthen protections against digitally altered media in political communications and advertisements. While the first two measures do not explicitly mention artificial intelligence (AI), the proliferation of AI deepfakes, including in connection with election content, was a clear driver behind their enactment.

Indeed, in his statement upon signing the bills on September 17, 2024, Gov. Gavin Newsom noted: “These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”

Building on California’s 2019 legislation (AB 730), which prohibited the distribution of manipulated videos, images or audio of political candidates within 60 days of an election, the new laws take a more comprehensive approach.

  • AB 2655 (Defending Democracy From Deepfake Deception Act of 2024) imposes certain removal obligations on large online platforms during the 120 days leading up to an election, along with disclosure requirements that extend beyond that period.
  • AB 2839 (Elections: Deceptive Media in Advertisements) more broadly prohibits the distribution of election communications containing certain materially deceptive content.
  • AB 2355 (Political Reform Act of 1974: Political Advertisements: Artificial Intelligence) mandates disclaimers for AI-generated advertisements created by political committees.

Since 2019, when Texas and California pioneered the regulation of digitally altered media in the political arena, 17 other states — including Florida, Michigan and New York — have enacted similar laws to address the rise of AI-generated election communications.

While only AB 2839 will be in effect for the 2024 presidential election, this set of legislation signals a robust response to concerns about the impact of AI-generated misinformation in future elections and could serve as a model for other states endeavoring to address similar concerns.

An Obligation To Identify and Remove Materially Deceptive Content

The Defending Democracy From Deepfake Deception Act of 2024 (Defending Democracy Act) obligates large online platforms to implement “state-of-the-art” procedures to identify and remove materially deceptive content, as well as to provide disclaimers regarding the inauthenticity of such content, during election periods.

A “large online platform” is defined as “a public-facing internet website, web application or digital application, including a social media platform as defined in Section 22675 of the Business and Professions Code, video sharing platform, advertising network, or search engine that had at least 1,000,000 California users during the preceding 12 months.”

Section 22675 of the Business and Professions Code generally defines a social media platform as any public or semipublic internet-based service or application with users in California that enables social interaction through the creation of profiles, populates a list of a user’s social connections, and allows content sharing, excluding services and applications that only offer email or direct messaging.

Under the Defending Democracy Act, large online platforms must provide an easily accessible way for California residents to report materially deceptive content (i.e., media that is digitally created or modified such that it would falsely appear to a reasonable person to be an authentic record of the content depicted).

Subject to certain limited exceptions, the provider must remove any reported materially deceptive content from its platform if such content is posted within 120 days of an election in California and contains false portrayals of:

  • candidates for elective office, if the false portrayal is reasonably likely to harm their reputation or electoral prospects,
  • election officials, if the false portrayal is in connection with the performance of their election-related duties and is reasonably likely to falsely undermine confidence in election outcomes, or
  • elected officials doing or saying something that influences an election in California, if the false portrayal is reasonably likely to falsely undermine confidence in election outcomes.

For any reported materially deceptive content that meets the above criteria but was posted outside the 120-day period, or that appears in an advertisement or election communication not subject to the foregoing, the platform must include a disclaimer stating, “This [image/audio/video] has been manipulated and is not authentic.”

Further, large online platforms are not obligated to remove digitally altered content posted by a candidate for elective office who, during the election period, portrays themself as doing or saying something that they did not do or say, provided the content includes the following disclosure: “This [image/audio/video] has been manipulated.”
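For platforms mapping these requirements onto content-moderation workflows, the remove-versus-label logic described above can be restated as a simple decision procedure. The Python sketch below is illustrative only: the function name, field names and category labels are hypothetical, and the statute’s qualitative standards (e.g., whether a portrayal is “reasonably likely” to harm a candidate’s electoral prospects) require case-by-case legal judgment that code cannot capture.

```python
from datetime import date, timedelta

# Illustrative sketch only. The "category", "post_date", and disclosure fields
# are hypothetical simplifications of the Act's qualitative standards.
COVERED_CATEGORIES = {"candidate", "election_official", "elected_official"}

def required_action(report: dict, election_day: date) -> str:
    """Return the platform's obligation for one reported item of content."""
    if report["category"] not in COVERED_CATEGORIES:
        return "no obligation under this Act"

    # Exception: a candidate's self-portrayal that carries the required disclosure.
    if report.get("posted_by_depicted_candidate") and report.get("has_disclosure"):
        return "no removal required (candidate self-portrayal with disclosure)"

    # Reported content posted within 120 days of the election must be removed;
    # qualifying content outside that window must instead carry a disclaimer.
    if election_day - timedelta(days=120) <= report["post_date"] <= election_day:
        return "remove"
    return ('label: "This [image/audio/video] has been manipulated '
            'and is not authentic."')

# Example: covered content posted 35 days before a November 5 election.
print(required_action(
    {"category": "candidate", "post_date": date(2024, 10, 1),
     "posted_by_depicted_candidate": False, "has_disclosure": False},
    election_day=date(2024, 11, 5),
))  # -> remove
```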

Platform liability under the Defending Democracy Act requires either:

  • knowledge of the materially deceptive content’s inauthenticity, or
  • action with reckless disregard for the truth.

If a platform fails to remove or label the content in accordance with the law, state officials can pursue injunctive relief.

Notably, the Defending Democracy Act includes several exemptions, including for regularly published online newspapers, magazines and broadcasting stations, as well as for satire or parody.

This law will go into effect on January 1, 2025.

Prohibition on Materially Deceptive Election Communications

AB 2839 prohibits the distribution of campaign advertisements and other election communications containing materially deceptive content during the election period. This includes false portrayals of:

  • candidates for federal, state or local elected office in California, including presidential and vice presidential candidates, if the false portrayal is reasonably likely to harm their reputation or electoral prospects,
  • election officials, if the false portrayal is reasonably likely to falsely undermine confidence in election outcomes,
  • elected officials, if the false portrayal is in connection with the performance of their election-related duties and is reasonably likely to harm their reputation or electoral prospects, or is reasonably likely to falsely undermine confidence in election outcomes, or
  • voting machines, ballots, voting sites or other election-related property or equipment, if the false portrayal is reasonably likely to falsely undermine confidence in election outcomes.

Consistent with the standards set forth in the Defending Democracy Act, liability under AB 2839 requires knowledge of the materially deceptive content’s inauthenticity or action with reckless disregard for the truth.

Recipients of materially deceptive content, as well as candidates and election officials, may seek injunctive relief or bring an action for general or special damages against any person that distributed or republished the content, except for broadcasting stations and internet websites that did not also create the content.

AB 2839 includes several exemptions similar to those provided under the Defending Democracy Act, with two notable differences: AB 2839 does not impose liability on interactive computer services, and, while the Defending Democracy Act exempts satire and parody outright, AB 2839 requires that such content nonetheless contain a disclaimer stating, “This [image/audio/video] has been manipulated for purposes of satire or parody.”

This law went into effect on September 17, 2024.

Disclaimer Requirement for AI-Generated Political Ads

AB 2355 explicitly applies to AI-generated advertisements. The law defines “artificial intelligence” as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

AB 2355 mandates that any political advertisement created, originally published or originally distributed by a political committee must include a clear and conspicuous disclaimer if any content in the advertisement was generated or substantially altered using AI. The required disclosure must state, “This ad was generated or substantially altered using artificial intelligence.”

If a committee does not comply with the law, the Fair Political Practices Commission may seek injunctive relief or pursue any administrative or civil remedies available under certain sections of Chapter 3 or Chapter 11 of the California Government Code.

This law will go into effect on January 1, 2025.

A Growing Patchwork of State Laws

Although California law now addresses deepfakes and AI in elections, federal law and the law of most states currently do not do so directly. For example, on September 19, 2024, the Federal Election Commission opted not to proceed with a rulemaking on AI in campaign ads, but instead issued an interpretive rule affirming that the existing statutory prohibition on the fraudulent misrepresentation of campaign authority applies irrespective of the technology used.

However, unlike California’s AI laws, that federal prohibition generally only applies to communications made by federal candidates or their agents.

As more states pass legislation addressing AI-generated election communications, social media and video sharing platforms, search engines and other companies that may see such communications distributed on their platforms will need to monitor and comply with the growing patchwork of state laws.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
