CFPB Comments on AI Offer Insights for Consumer Finance Industry

Skadden Publication / AI Insights

Darren M. Welch / Stuart D. Levi

On August 12, 2024, the Consumer Financial Protection Bureau (CFPB or Bureau) submitted comments on the use of artificial intelligence (AI) in the financial services sector. The comments are among the Bureau’s most extensive statements to date on the risks and expectations surrounding the use of AI and on the CFPB’s approach to regulating AI going forward.

The Bureau submitted its comments in response to a Department of the Treasury request for information (RFI). The comments stress that existing laws apply fully to uses of AI and that the Bureau will continue to assess AI uses for compliance with those laws, including fair lending laws.

Specific AI uses that the Bureau identifies as presenting potential compliance risk include automated customer service processes such as chatbots, fraud detection models and loan origination.

We summarize Treasury’s RFI below, describe key aspects of the Bureau’s comments and offer takeaways for participants in the consumer financial services industry.

Treasury’s Request for Information

On June 6, 2024, Treasury released its “Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector.” In issuing the RFI, Treasury stated that it was seeking to increase its understanding of AI use in the financial services sector, including:

  • “Potential obstacles for facilitating responsible use of AI within financial institutions.”
  • “The extent of impact on consumers, investors, financial institutions, businesses, regulators, end-users, and any other entity impacted by financial institutions’ use of AI.”
  • “Recommendations for enhancements to legislative, regulatory, and supervisory frameworks applicable to AI in financial services.”

The RFI includes 19 questions that address a wide range of topics including:

  • How to define AI.
  • Uses and benefits of AI.
  • Challenges that AI presents (including the demand for consumer data and related data privacy considerations).
  • Fair lending and other consumer compliance issues.
  • Issues that small financial institutions face regarding AI.
  • AI risk management.
  • Third-party oversight.
  • Fraud and illicit finance risks.
  • Recommendations for actions that Treasury can take to promote the responsible use of AI and protect consumers and financial institutions.

A key focus of the RFI, and a core concern of the CFPB as well, is balancing the potential for AI to promote inclusiveness against the risk that AI may exacerbate bias and create fair lending risk.

The CFPB’s Response

The CFPB’s comments on the RFI (the Comment) are organized around two core points:

  1. A number of existing laws already apply to the use of AI by financial institutions.
  2. Regulation of the financial services sector, including regulation of AI, should foster competition by creating a level playing field, rather than giving special treatment to particular institutions.

Existing laws apply to AI. The Comment notes that there are no exceptions to the federal consumer financial protection laws for new technologies. To the contrary, regulators are required to apply existing rules to such new technologies. In that regard, the Comment lists a number of CFPB publications and guidance documents regarding consumer protection issues that may be implicated by the use of AI, including:

  • Chatbots. Chatbots and other automated customer service technologies built on large language models may: (i) provide inaccurate information and increase the risk of unfair, deceptive, and abusive acts and practices in violation of the Consumer Financial Protection Act (CFPA); (ii) fail to recognize when consumers invoke statutory rights under Regulation E and Regulation Z; and (iii) raise privacy and security risks, resulting in increased compliance risk for institutions. (One simplified guardrail against the second risk is sketched after this list.)
  • Discrimination. A central focus of the CFPB’s Comment is the prohibition against discrimination and the requirement to provide consumers with information regarding adverse action taken against them, as is already required pursuant to the Equal Credit Opportunity Act (ECOA). The Comment notes that courts have already held that an institution’s decision to use algorithmic, machine-learning or other types of automated decision-making tools can itself be a policy that produces bias under the disparate impact theory of liability.

    The Bureau makes clear in the Comment that it will continue to closely monitor financial institutions’ fair lending testing protocols, including those relating to “complex models.” Such testing should include regular testing for “disparate treatment and disparate impact,” and consideration of less discriminatory alternatives using manual or automated techniques.
  • Fraud screening. The Comment stresses that fraud screening tools, such as third-party vendor services that generate fraud risk scores, must be used in compliance with ECOA and the CFPA. In addition, the Comment states that because such screening is often used to assess creditworthiness (i.e., by determining who gets “offered or approved for a financial product”), institutions that compile and provide such information are likely “subject to the requirements of the Fair Credit Reporting Act.”
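To make the chatbot risk above concrete, the sketch below illustrates one simplified guardrail an institution might layer around an automated customer service tool: pattern-matching for language suggesting that a consumer is invoking Regulation E or Regulation Z dispute rights, and escalating those conversations to a human agent rather than letting the model improvise a response. The trigger phrases, function name and routing logic are hypothetical illustrations, not a control the CFPB has prescribed.

```python
# Illustrative guardrail: route chatbot messages that may invoke Regulation E
# or Regulation Z dispute rights to a human agent instead of the model.
# The trigger phrases and routing decision below are hypothetical placeholders.
import re

DISPUTE_PATTERNS = [
    r"unauthorized (charge|transaction|transfer)",
    r"billing error",
    r"dispute (this|a|the|my) (charge|transaction)",
    r"did not (make|authorize)",
]

def requires_human_escalation(message: str) -> bool:
    """Return True if the message may invoke statutory dispute rights."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in DISPUTE_PATTERNS)

msg = "I want to dispute this charge. I did not authorize it."
if requires_human_escalation(msg):
    print("Escalate to a human agent; treat as a potential Reg E/Reg Z dispute.")
else:
    print("Continue automated handling.")
```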

Regulation should foster competition through a level playing field. The second key point of the Comment is that uniform enforcement of the rules by regulators fosters innovation, because firms are incentivized to invest in innovative products and services that benefit consumers rather than in circumventing the rules. With respect to AI, the Comment states, this means ensuring that regulation does not stifle price competition or favor incumbents, that similar products and services receive consistent treatment under the law, and that regulators combat anticompetitive practices and monitor the market to ensure accountability.

Takeaways and Recommendations

Since the CFPB and many other federal financial regulators have not issued or proposed comprehensive regulations specifically addressing AI, publications such as the CFPB Comment provide key insights into the Bureau’s priorities and potential future supervisory and enforcement actions regarding AI.

One clear takeaway, particularly since the Bureau did not propose any new rules or guidance governing AI, is that the CFPB intends to rely on existing laws and regulations to regulate AI. Accordingly, financial institutions would be well advised to assess their use of AI for compliance with current laws and regulations, especially with respect to the specific laws cited in the Comment discussed above.

The Comment also makes clear that assessing potential discriminatory effects resulting from the use of AI is a top priority for the CFPB. The Comment repeatedly stresses the need for robust fair lending compliance risk management, with a focus on quantitative fair lending testing to assess disparate impact risk resulting from models built using AI.

Under the disparate impact framework established through regulations and case law, a policy or practice that adversely affects individuals on a prohibited basis, such as race or ethnicity, may result in an illegal disparate impact if there is no legitimate business justification for the practice or if there is a less discriminatory alternative (LDA) that serves the institution’s business needs.1
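For illustration only, the sketch below shows one common quantitative screen used in disparate impact testing: the adverse impact ratio, i.e., the protected group’s approval rate divided by the control group’s. The 0.8 benchmark in the example is a rule of thumb borrowed from the EEOC’s four-fifths guideline in the employment context, and the data and column names are hypothetical; neither the metric nor the threshold is a standard the CFPB has prescribed.

```python
# Illustrative adverse impact ratio (AIR) screen for model approval decisions.
# Column names, data and the 0.8 benchmark are assumptions, not a
# CFPB-prescribed methodology.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         protected: str, control: str,
                         approved_col: str = "approved") -> float:
    """Ratio of the protected group's approval rate to the control group's."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates[protected] / rates[control]

# Hypothetical scored-application data.
apps = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1, 0, 0, 1, 1, 1, 0, 1],
})

air = adverse_impact_ratio(apps, "group", protected="A", control="B")
print(f"AIR: {air:.2f}")
if air < 0.8:  # four-fifths rule of thumb; flags the result for closer review
    print("Disparity flagged: examine business justification and potential LDAs.")
```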

While the Comment stresses the importance of assessing potential LDAs, it leaves unanswered many questions about how to do so. For example, the Comment states that the CFPB “will continue to explore the use of automated debiasing methodologies” in identifying potential underwriting model LDAs, but it does not address whether the use of such advanced methodologies could elevate disparate treatment risk by using prohibited factors in model development. Nor does the Comment address the standard for determining whether an alternative practice that reduces disparities continues to serve the lender’s legitimate business interest.
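By way of a hypothetical illustration of what an automated LDA-style search might involve (the Comment, as noted, does not prescribe a methodology), the sketch below sweeps a model’s approval cutoff and reports, at each cutoff, a crude performance proxy alongside the approval-rate gap between groups. This is the kind of trade-off a reviewer would examine when asking whether a less discriminatory operating point still serves the lender’s business needs; all data and metric choices here are assumptions made for the example.

```python
# Hypothetical threshold-sweep LDA search: for each approval cutoff, compare
# a crude performance proxy (accuracy against observed repayment) with the
# approval-rate gap between groups. Illustrative data; not a
# regulator-endorsed methodology.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)          # 0 = control, 1 = protected (hypothetical labels)
score = rng.beta(3 - group, 3, n)      # model scores, skewed by group for illustration
repaid = rng.random(n) < score         # repayment outcome loosely tied to score

for cutoff in np.arange(0.3, 0.71, 0.1):
    approved = score >= cutoff
    accuracy = np.mean(approved == repaid)
    gap = approved[group == 0].mean() - approved[group == 1].mean()
    print(f"cutoff={cutoff:.1f}  accuracy={accuracy:.3f}  approval-rate gap={gap:+.3f}")
# A reviewer would look for cutoffs (or model variants) that shrink the gap
# without a material loss in performance: candidates for an LDA.
```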

In light of the Comment, financial institutions should consider assessing their fair lending testing practices, including methods for assessing potential LDAs for models developed using AI. The Comment also notes that fair lending concerns can arise not only in connection with underwriting models but also in models used in post-origination activity such as servicing and loss mitigation, and potentially in fraud detection models as well.

Accordingly, institutions should think broadly when assessing practices that may present fair lending risk and warrant fair lending testing.

How AI can be used to discriminate against individuals is also a focus of the recently enacted Colorado Artificial Intelligence Act. That act, which goes into effect in February 2026, is primarily focused on AI systems used to make a “consequential decision” involving areas such as financial services. It is designed to protect against algorithmic discrimination — namely unlawful differential treatment that disfavors an individual or group on the basis of protected characteristics.

We will continue to monitor developments in this area.

_______________

1 See, e.g., Texas Dep’t of Hous. and Community Aff. v. Inclusive Communities Project, Inc., 576 U.S. 519, 533-34 (2015); 12 C.F.R. § 1002.6(a); 12 C.F.R. Part 1002, Appendix I, para. 6(a), comment 2.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
