AI Safety: The Role of the Board in Assessing and Managing AI Risk

Skadden Publication / The Informed Board

Ken D. Kumayama, Stuart D. Levi, William E. Ridgway, David A. Simon, Nicola Kerr-Shaw, Susanne Werry, Jacob F. Bell

Key Points

  • As AI systems become more complex, companies are increasingly exposed to reputational, financial and legal risk from developing and deploying AI systems that do not function as intended or that yield problematic outcomes. The range of potential risks is wide and can include fostering discriminatory practices, causing products to fail, and generating false, misleading or harmful content.
  • The risks of AI, and the attendant legal and regulatory obligations, differ across industries and depend on whether the company is the developer of an AI system or the entity that deploys it, a line that may be difficult to draw.
  • Boards must also navigate a quickly evolving regulatory environment that does not always offer consistent approaches or guidance. 


Key AI Safety Risks: People, Organizations, Supply Chains and Ecosystems

The National Institute of Standards and Technology (NIST), a Department of Commerce agency leading the U.S. government's AI risk management approach, suggests that AI risk be evaluated at three levels of potential harm:

  • Harm to people (i.e., harm to an individual’s civil liberties, rights, physical or psychological safety or economic opportunity), such as deploying an AI-based hiring tool that perpetuates discriminatory biases from past data.
  • Harm to organizations (i.e., harm to an organization’s reputation and business operations), such as using an AI tool that generates erroneous financial reports that were not properly reviewed by humans before being publicly disseminated.
  • Harm to ecosystems (i.e., harm to the global financial system or supply chain), such as deploying an AI-based supply management tool that functions improperly and causes systemic supply chain issues that extend far beyond the company that deployed it.

Companies may be subject to some or all of these AI safety risks, which often overlap.

Boards should be informed about the development and deployment of AI systems within their companies, the AI regulatory landscape to which their companies are subject, and the benefits and risks of each use of an AI system. Boards should also reassess AI systems that may have been in use at the company for a number of years, in light of the increased focus by regulators and the general public.

[Chart: AI Mentions]

The Current AI Regulatory Landscape

United States

To date, the U.S. has not enacted any omnibus AI legislation, and there is none on the immediate horizon. Instead, the federal government has issued a series of reports, general guidance, and frameworks emanating from an October 2023 AI Executive Order (EO). A July 2024 statement from the White House provides a useful summary of these reports and frameworks.

Of most relevance to boards is a suite of AI risk management tools published by NIST. This includes an AI Risk Management Framework, guidelines on Managing Misuse Risk for Dual-Use Foundation Models and a Risk Management Profile on Generative AI. A complete list of NIST statements and publications on AI can be found at the NIST Trustworthy and Responsible AI Resource Center.

While there is no omnibus federal AI law, federal agencies and regulators have made clear that existing laws apply equally to AI systems. For example, the Federal Trade Commission has brought a number of actions and made a number of statements regarding AI deployments based on its authority to protect against “unfair or deceptive acts or practices.”

Boards also need to be cognizant of a growing number of state-specific AI laws. For example, Utah enacted the Utah Artificial Intelligence Policy Act, which imposes disclosure requirements on entities using generative AI tools for customer interactions. The law went into effect in May 2024.

Also in May 2024, Colorado enacted the Colorado Artificial Intelligence Act, which is designed to protect against algorithmic discrimination and imposes various disclosure and risk assessment obligations on companies developing or deploying AI systems that make “consequential decisions” involving areas such as financial services, health, and education. The law will go into effect on February 1, 2026.

European Union and United Kingdom

The EU has taken a more direct and risk-based approach to AI regulation than the United States. The EU’s landmark AI Act — which came into force on August 1, 2024, and will be fully effective from August 2, 2026 — governs all AI models marketed or used within the EU. The law creates four tiers of AI systems based on the risk they present: unacceptable (which are prohibited), high, limited and minimal. The risk categories carry with them various risk assessment, disclosure and governance obligations.

While these categories and the specific compliance requirements will be further clarified through guidance, boards whose companies are, or may be, marketing or using AI models in the EU should stay informed about the EU AI Act and their organizations’ approach to compliance.

In addition, European privacy regulators have already stepped in to use existing privacy laws to block the roll-out of generative AI products in Europe, and have launched court actions against companies that seek to develop AI models without approval from privacy regulators.

While the U.K. does not yet have any laws that mirror the EU AI Act, the new Labour government recently announced its intention to develop AI safety legislation, and its privacy regulator, the Information Commissioner's Office, has launched enforcement actions against AI companies that fail to complete risk assessments before deploying AI-powered products.

[Chart: Generative AI Mentions, 2024]

Guiding Principles for AI Corporate Governance

In general, there are several guiding principles boards should keep in mind to effectively navigate AI corporate governance and manage AI safety risk.

  1. Understand the company’s AI risk profile. Boards should have a solid understanding of how AI is developed and deployed in their companies. Taking stock of a company's risk profile can help boards identify the unique safety risks that AI tools may pose.
  2. Be informed about the company’s risk assessment approach. Boards should ask management whether an AI tool has been tested for safety, accuracy and fairness before deployment, and what role human oversight and human decision-making play in its use. Where the level of risk is high, boards should ask whether an AI system is the best approach, notwithstanding the benefits it may offer.
  3. Ensure the company has an AI governance framework. The board should ensure that the company has such a framework to manage AI risk, review it periodically to confirm that it is being properly implemented and monitored, and determine what role the board should play in that process.
  4. Conduct regular reviews. Given the rapid pace of technological and regulatory developments in the AI space, and the ongoing discovery of new risks from deploying AI, the board should consider implementing regular reviews of the company’s approach to AI, including whether new risks have been identified and how they are being addressed.
  5. Stay informed about sector-specific risks and regulations. Given how quickly the technology and its uses are evolving, boards should monitor the risks and regulatory developments specific to their industry.


This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
