US Federal Regulation of AI Is Likely To Be Lighter, but States May Fill the Void

Skadden’s 2025 Insights

Ken D. Kumayama Stuart D. Levi William E. Ridgway

Key Points

  • President-elect Trump appointed David Sacks, a venture capitalist, as the White House AI and crypto czar.
  • The Trump administration is likely to adopt a light regulatory approach to AI development and deployment, and may repeal some or all of President Biden’s executive order on AI, as was promised in the Republican Party platform.
  • We are unlikely to see omnibus federal AI legislation, leaving a void that states are likely to continue filling with their own regulations.
  • Despite being critical of the CHIPS and Science Act during the campaign, the Trump administration is not expected to seek to repeal or materially change that law.


The development and deployment of artificial intelligence (AI) systems stand to be the most significant technological advancement in the coming years. Yet while AI adoption is top of mind for most company executives and boards, AI regulation received scant attention during the presidential campaign.

Overall, we expect a light regulatory touch by the Trump administration with respect to AI. However, as discussed below, individual states may continue to step into the void and enact their own AI legislation.

Appointment of AI Czar

President-elect Donald Trump appointed David Sacks, a venture capitalist and an early executive at PayPal, as the White House AI and crypto czar. Many expect that, given Sacks’ venture capital background, he will bring a pro-innovation, pro-startup approach to the AI sector, including with respect to regulation. This may mesh well with President-elect Trump’s agenda: in his announcement appointing Sacks, the president-elect said that Sacks will move the government away from “big tech bias and censorship.”

In that announcement, the president-elect also said that Sacks would “work hard on a legal framework so the Crypto industry has the clarity it has been asking for,” but he made no corresponding statement regarding AI regulation. It is too early to tell whether this was more of a nod to the crypto industry or a careful signal that there would not be a push for a legal framework for AI. (See “Cryptocurrencies Stand To Gain From New Regulators and a Receptive Congress.”)

The Future of the Biden Executive Order

In October 2023, President Joe Biden issued a broad executive order on AI (AI Order), which the administration touted as a vehicle to establish AI safety and security standards while protecting privacy, advancing civil rights and promoting innovation. However, most of the AI Order was a series of directives to various federal agencies to study and prepare reports on the impact of AI, and in certain cases to issue guidance on safe AI adoption.

The AI Order also invoked the Defense Production Act (DPA) to require companies developing any AI foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training such a model, and to share the results of all red-team safety tests (i.e., tests within a controlled environment to discover flaws and vulnerabilities in an AI system).

The Republican Party platform vowed to “repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”

It remains to be seen which parts of the AI Order President-elect Trump will repeal, especially since some aspects — such as the guidelines dealing with national security — enjoyed bipartisan support. However, Republicans criticized the requirements imposed on AI developers through invocation of the DPA as too prescriptive and anti-innovation, and these requirements may therefore be a target for repeal.

A November 2020 memo issued by the Office of Management and Budget (OMB) at the end of President Trump’s first term also suggests that the incoming administration will opt for a lighter regulatory approach. The memo, “Guidance for Regulation of Artificial Intelligence Applications,” adopted an innovation-friendly stance: “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

We also expect the Trump administration to focus less on bias and discrimination issues as they relate to AI. As just one example, the OMB guidance implementing the AI Order proposed that, when assessing AI, agencies establish safeguards that take into account, among other matters, AI’s potential to contribute to algorithmic discrimination and disparate impacts, and that ensure AI advances equity, dignity and fairness.

The Trump administration may conclude that those requirements need not be included in any agency AI safeguards.

More generally, certain AI-related enforcement priorities are likely to be scaled back at the federal level, including scrutiny by the Federal Trade Commission (FTC), which, during the Biden administration, targeted the deceptive and unfair use of AI in several enforcement actions and sweeps, and used the remedy of “algorithmic disgorgement” (the enforced deletion of algorithms developed using illegally collected data) in a number of actions. (See our January 2, 2024, client alert “Proposed FTC Order Suggests Blueprint for AI Adoption.”)

AI Legislation

It is difficult to imagine that omnibus federal AI legislation will be enacted in the near term given the lack of consensus, even within each party, as to what such legislation should look like.

However, two sets of narrower AI bills currently pending in Congress enjoy bipartisan support and may be an early bellwether for how the next four years will play out:

  • The AI Advancement and Reliability Act (H.R. 9497) and the Future of Artificial Intelligence Innovation Act (S. 4178), which would authorize the establishment of the AI Safety Institute, a group within the National Institute of Standards and Technology (NIST) focused on evaluating, testing and developing guidelines for AI models.
  • The CREATE AI Act (H.R. 5077; S. 2714), which would make permanent the National Science Foundation’s National AI Research Resource pilot program. The program is currently scheduled to run through January 2026 and provides tools for AI research.

While these bills have all advanced out of committee, it remains to be seen if the Trump administration will support any of them or seek any modifications.

The Role of the States

In the absence of omnibus federal legislation or regulation of AI, we expect to see states take an even more active role in enacting state-specific AI regulations. These will likely range from laws such as Colorado’s comprehensive AI law to targeted legislation, such as the Tennessee law protecting against deepfakes.

We also may see states adopt a more aggressive approach to regulating the use of automated decision-making technology. This might include laws regarding:

  • The testing that AI developers need to conduct before releasing certain models.
  • Disclosures developers may be required to make regarding the safety of their models.
  • Obligations on those deploying AI models to let users know that they are interacting with an AI model.
  • Limits on how AI can be used.

The Role of Elon Musk

One wild card in assessing AI policy under a second Trump administration is the role that Elon Musk will play. Musk, who has emerged as a trusted adviser to President-elect Trump and has been appointed to help lead the planned advisory Department of Government Efficiency, has long expressed concerns about the unchecked power of AI and supported a California bill that would have imposed various obligations on developers of advanced AI. That bill was vetoed by Gov. Gavin Newsom.

However, Musk has also formed his own AI company, xAI, which he has said will not have any guardrails against disinformation and hate speech. He has also criticized other AI companies as having a liberal bias. The AI policy views that Musk articulates to Trump therefore may help to shape the Trump administration’s posture on AI.

(See also “Rising Investment in AI Requires Financial Sponsors To Address Unique Risks.”)

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
