Developing and Using AI Require Close Monitoring of Risks and Regulations

Skadden Insights – September 2024

Ken D. Kumayama, Stuart D. Levi, William E. Ridgway, David A. Simon, Nicola Kerr-Shaw, Susanne Werry, Jacob F. Bell

Key Points

  • As AI systems become more complex, companies are increasingly exposed to reputational, financial and legal risks from developing and deploying AI systems that do not function as intended or that yield problematic outcomes.
  • The risks of AI, and the legal and regulatory obligations, differ across industries, and depend on whether the company is the developer of an AI system or an entity that deploys it.
  • Companies must also navigate a quickly evolving regulatory environment that does not always offer consistent approaches or guidance.

Key AI Safety Risks: People, Organizations, Supply Chains and Ecosystems

In the U.S., there is no omnibus law governing artificial intelligence (AI). However, the National Institute of Standards and Technology (NIST), a Department of Commerce agency leading the U.S. government’s approach to AI risk, has published an AI Risk Management Framework suggesting that AI systems be evaluated at three levels of potential harm:

  • Harm to people (i.e., harm to an individual’s civil liberties, rights, physical or psychological safety, or economic opportunity), such as deploying an AI-based hiring tool that perpetuates discriminatory biases inherent in past data.
  • Harm to organizations (i.e., harm to an organization’s reputation and business operations), such as using an AI tool that generates erroneous financial reports that were not properly reviewed by humans before being publicly disseminated.
  • Harm to ecosystems (i.e., harm to the global financial system or supply chain), such as deploying an AI-based supply management tool that functions improperly and causes systemic supply chain issues that extend far beyond the company that deployed it.

Companies may be subject to some or all of these AI safety risks, which often overlap. As a result, management should stay informed about the development and deployment of AI systems within their companies, the AI regulatory landscape, and the benefits and risks of each use of an AI system.

It is also vital for companies to reassess AI systems that have been in use for a number of years, in light of the increased focus on risks by regulators and the general public.

The Current AI Regulatory Landscape

United States

Although at present there are no proposals to adopt overarching AI legislation in the U.S., the federal government has issued a series of reports, general guidance and frameworks emanating from an October 2023 AI Executive Order (EO). A July 2024 statement from the White House provides a useful summary of these reports and frameworks.

Particularly relevant is NIST’s suite of AI risk management tools, including its AI Risk Management Framework and the companion resources NIST has issued to help organizations apply it.

Federal agencies and regulators have also made clear that existing laws apply to AI systems. For example, the Federal Trade Commission (FTC) has brought a number of actions and made several statements regarding AI deployments based on its authority to protect against “unfair or deceptive acts or practices.”

Companies should be aware of a growing number of AI laws at the state level:

  • The Utah Artificial Intelligence Policy Act imposes disclosure requirements on entities using generative AI tools for customer interactions. The law went into effect on May 1, 2024.
  • The Colorado Artificial Intelligence Act is designed to protect against algorithmic discrimination and imposes various disclosure and risk assessment obligations on companies developing or deploying AI systems that make “consequential decisions” involving areas such as financial services, health and education. The law will go into effect on February 1, 2026.
  • On August 9, 2024, Illinois passed HB 3773, which amends the Illinois Human Rights Act by prohibiting employers from using AI if it has a discriminatory effect on employees based on protected classes (or proxies such as zip codes), and requiring that employers give notice if they use AI for certain employment-related purposes such as hiring, promotion and discipline.

European Union and United Kingdom

The European Union has taken a more direct and risk-based approach to AI regulation than the U.S. The EU’s landmark AI Act — which came into force on August 1, 2024, and will be fully effective starting on August 2, 2026 — governs all AI models marketed or used within the EU.

The law creates four tiers of AI systems based on the risk they present:

  • Unacceptable (which are prohibited)
  • High
  • Limited
  • Minimal

The risk categories carry with them various risk assessment, disclosure and governance obligations.

While these categories and the specific compliance requirements will be further clarified through guidance, companies that are, or may be, marketing or using AI models in the EU should stay informed about the EU AI Act and regularly reassess their approach to compliance.

In addition, European privacy regulators have already stepped in to use existing privacy laws to block the roll-out of generative AI products in Europe, and have launched court actions against companies that seek to develop AI models without approval from privacy regulators.

The U.K. does not yet have any laws that mirror the EU’s AI Act, but it recently announced its intention to develop AI safety legislation. The U.K.’s privacy regulator, the Information Commissioner’s Office, has launched enforcement actions against AI companies that fail to complete risk assessments before deploying AI-powered products.

Guiding Principles for AI Risk Management

There are several guiding principles to keep in mind when managing AI safety risk.

  1. Understand the company’s AI risk profile. Management should have a solid understanding of how the company develops and deploys AI. Taking stock of the company’s risk profile can help management identify the unique safety risks that AI tools may pose (a simple inventory sketch follows this list).
  2. Be informed about the company’s risk assessment approach. Questions to ask include whether an AI tool has been tested for safety, accuracy and fairness before deployment, and what role human oversight and human decision-making play in its use. Where the level of risk is high, the company may want to consider whether the benefits of developing or deploying the AI system outweigh the risks.
  3. Establish an AI governance framework. The company should adopt a framework to manage AI risk, making sure it is properly implemented and monitored. It may want to consider adopting existing, widely recognized frameworks such as the NIST AI Risk Management Framework, which is regularly updated with companion documents to help companies implement it (see NIST’s website).
  4. Regularly review and update policies and processes. Given the rapid pace of technological and regulatory developments in the AI space — and the ongoing discovery of new risks from deploying AI — the company should ensure it regularly reviews and updates its approach to identifying and managing AI-related risk. The company should start creating the necessary infrastructure (i.e., principles, processes and personnel) to perform impact assessments, maintain technical documentation, conduct annual system reviews, and report any adverse or discriminatory findings of high-risk AI systems. The company should implement regular trainings for relevant personnel as these processes are updated.
  5. Stay informed about sector-specific risks and regulations. In light of how quickly the technology and its uses are evolving, the company will want to stay abreast of risks and regulations specific to its industry.
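
To make the inventory-and-review ideas in points 1, 2 and 4 concrete, the following Python sketch shows one way a company might record its AI systems and flag entries for escalation. It is purely illustrative: the record fields, the RiskTier labels (borrowed from the EU AI Act’s four tiers for labeling only) and the escalation rule are assumptions chosen to reflect the themes above (risk tiers, testing for safety and fairness, human oversight, periodic review), not requirements drawn from NIST guidance or the EU AI Act.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    # Labels mirror the EU AI Act's four tiers; used here only for tagging.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory (illustrative fields only)."""
    name: str
    role: str                   # "developer" or "deployer"
    use_case: str
    jurisdictions: list[str]    # e.g., ["US-CO", "US-IL", "EU"]
    risk_tier: RiskTier
    tested_for_safety: bool
    tested_for_fairness: bool
    human_oversight: bool
    last_review: date
    notes: str = ""


def needs_escalation(record: AISystemRecord, review_interval_days: int = 365) -> bool:
    """Flag records that warrant escalation to the governance team (assumed rule)."""
    overdue = (date.today() - record.last_review).days > review_interval_days
    untested = not (record.tested_for_safety and record.tested_for_fairness)
    return (
        record.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)
        or untested
        or not record.human_oversight
        or overdue
    )


# Example: a hiring-screening tool deployed in Colorado, Illinois and the EU.
hiring_tool = AISystemRecord(
    name="resume-screener",
    role="deployer",
    use_case="candidate screening",
    jurisdictions=["US-CO", "US-IL", "EU"],
    risk_tier=RiskTier.HIGH,
    tested_for_safety=True,
    tested_for_fairness=False,
    human_oversight=True,
    last_review=date(2024, 3, 1),
)
print(needs_escalation(hiring_tool))  # True: fairness testing is missing
```

A register of this kind is only a starting point; the fields and thresholds a company actually needs will depend on its industry, the jurisdictions in which it operates and the governance framework it adopts.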

A version of this article was first published in the summer 2024 edition of The Informed Board.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
