Rising Investment in AI Requires Financial Sponsors To Address Unique Risks

Skadden’s 2025 Insights

Christopher M. Barlow, Brett J. Fleisher, David A. Simon, Nicola Kerr-Shaw, Melissa Muse, Taylor N. Votek

Key Points

  • Accelerated M&A activity by financial sponsors is expected in the near term due to improved market conditions and deregulation under the Trump administration.
  • With the rapid development of new AI use cases, particularly relating to generative AI, many financial sponsors are seeking out AI-related investment opportunities.
  • Regulation applying to AI development and use is proliferating rapidly and becoming more complex, particularly across industries and jurisdictions. Investments in AI-focused targets could be devalued if applicable laws and regulations are not followed.
  • Financial sponsors will therefore need to engage in a sophisticated analysis of any AI-focused target’s regulatory compliance, not only to ascertain current compliance but also to ensure any plans for developing the AI — whether developing the tool itself or deploying it to new use cases or markets — will be legally compliant.
  • In addition, financial sponsors need clear policies, procedures and guardrails to mitigate risk within their own operations and within portfolio companies.


Donald Trump’s return to the White House is expected to result in increased M&A activity by financial sponsors. With anticipated interest rate cuts and waning inflation, sponsors should have access to stronger capital markets and cheaper capital. There is substantial pent-up demand for deals in the form of committed capital ready to be deployed, and past investments — whose exits were often delayed — are now ripe for sales or IPOs. (See “Resilient Economy and Promises of Lessened Regulation, Lower Taxes Raise Hopes for a Surge in M&A” and “Betting on the ‘Trump Trade’ To Make the Capital Markets Great Again.”)

The value of private equity investments in artificial intelligence (AI) more than doubled in 2023 and continued growing in 2024. We expect the upward trend to last for the foreseeable future. (See “US Federal Regulation of AI Is Likely To Be Lighter, but States May Fill the Void.”)

President-elect Trump has been vocal about protecting U.S. jobs and industry, and has expressed the view that it is critical for the U.S. to lead the race to develop the strongest AI algorithms. His administration’s support of AI development may encourage financial sponsors to invest in AI-related companies in the expectation of government incentives.

Although we anticipate continued scrutiny of inbound investments in U.S. AI technology, the desire to attract foreign capital to the U.S. should remain, which may lead to less stringent regulation of foreign inbound investment from most countries other than China.

In addition, the Trump administration is expected to return to more traditional merger reviews. Sponsors may therefore perceive the risk of governmental intervention to be lower. However, given the sensitivity of AI technology and bipartisan support for antitrust enforcement in the technology sector, AI transactions will likely continue to face heightened regulatory scrutiny. The same is true for AI technology that is deployed outside of the U.S. or developed using data from outside of the U.S.

Additionally, there may be increased scrutiny on add-on transactions by portfolio companies, which could be challenging for financial sponsors looking to grow businesses through inorganic growth. (See “Keep Your Seatbelts Fastened: The Wild Antitrust Ride May Not Be Over.”)

Key Considerations for Financial Sponsors Investing in AI

AI technologies present a distinct set of challenges that financial sponsors must navigate, both in their internal policies and in their oversight of portfolio companies, as well as in valuing potential targets. These challenges require careful consideration and management.

Global regulatory frameworks. AI technology is constantly evolving and often outpaces the implementation of regulatory frameworks. The use of AI tools must comply with a range of existing legal and regulatory requirements, including international, jurisdictional, federal and state data protection and anti-discrimination laws. Certain data protection laws may also require businesses to offer consumers the ability to opt out of the use of AI for certain consequential decision-making, including hiring, housing and financial decisions, or indeed to request human intervention in relation to a decision made by AI that affects them.

Financial sponsors should consistently monitor legal and regulatory developments as well as evolving industry best practices to ensure compliance not only by the financial sponsor itself but also by its existing and prospective portfolio companies. For example, the EU’s Digital Services Act and AI Act address instances where the use of AI may result in hallucinations, or false information. The increasing adoption of AI technologies is also relevant to the application of the EU’s Digital Markets Act (DMA). Gatekeepers of core platform services must comply with the obligations set out in the DMA when they integrate AI systems and address how AI systems determine the conduct covered by the DMA provisions.

Effective and strategic governance. While there is an ongoing need to consider global regulatory frameworks, it is also important to protect agile innovation within businesses. Limited legal resources within sponsors, portfolio companies and targets mean that legal teams are not able to quickly review every AI use case, and in-depth reviews of simpler use cases could strain those limited resources. Financial sponsors should therefore implement strategic governance frameworks within their own organizations and at the portfolio company level, ensuring appropriate legal and compliance oversight of AI. Low-risk AI could be allowed to develop quickly, realizing efficiencies, whereas higher-risk AI (including higher-risk use cases of simple AI) should go through more in-depth internal reviews, given the heightened legal and regulatory scrutiny. This type of approach should also be considered with any potential targets.

Cybersecurity, confidentiality and privacy. AI tools rely on data, which can include confidential business information as well as sensitive or personal data, to perform effectively. The use of personal data to train AI models and the processing of personal data through AI tools must comply with use and disclosure requirements under privacy laws (e.g., the California Consumer Privacy Act and the EU’s General Data Protection Regulation). U.S. regulators have made clear that businesses must adequately disclose the use and sharing of personal data to train AI models or risk regulatory investigation and possible compelled deletion of underlying algorithms. EU law, meanwhile, specifies that businesses do not have the automatic right to train AI models on personal data they control. Contractual provisions may also restrict a financial sponsor’s use of investor and client confidential information to train AI models. The web of commercial and regulatory considerations may require financial sponsors, or portfolio companies, to update privacy policies, issue notices to clients or investors, revise contracts and possibly seek consent to the use of data or provide certain opt-out rights.

Ethical use of AI. AI tools are only as good as the data they are trained on, and if that data contains biases, the resulting AI tool will reflect them. Financial sponsors should establish procedures to effectively audit their AI tools, and those of their portfolio companies, for unfair or biased results and ensure steps are taken to mitigate partiality or other potential harms (including breaches of law and regulation). This process may include establishing bias assessment protocols and mitigation measures that are consistent with applicable regulatory requirements (e.g., the Colorado Artificial Intelligence Act) and industry standards. Financial sponsors’ due diligence of a potential target should confirm that the target conducts regular bias audits and has implemented corrective measures to address any identified biases.

Quality control and integration. While AI investment can boost operational efficiencies for financial sponsors, their portfolio companies and potential targets, it also presents challenges relating to the accuracy of AI outputs and the integration of AI into existing information technology (IT) infrastructure. Financial sponsors, as well as potential target companies, should carefully assess the reliability of data sources and ensure that data input into AI tools is validated and verified, potentially by human review (which is sometimes legally mandated), before it is used in key business operations or to inform external guidance to third parties. Financial sponsors should also ensure they have the proper systems, tools, IT infrastructure and personnel to integrate and maintain their AI tools. When contemplating investment opportunities, sponsors should consider the integration process early on to ensure that the AI technology can be used effectively and to full advantage.

Conclusion

Given the likely increase in M&A activity by financial sponsors in the near term and continued focus on AI, financial sponsors should be ready to capitalize on AI-related opportunities. To do so, they need to be cognizant of the unique set of considerations associated with AI investment and develop clear policies, procedures and guardrails surrounding such investments to mitigate risk and fully realize the potential of AI.

See the full 2025 Insights publication

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
