As noted earlier, financial services regulators are increasingly focused on how companies use artificial intelligence (AI) and machine learning (ML) in the underwriting and pricing of consumer credit products. Although algorithms give financial services firms opportunities to offer innovative products that expand access to credit, some regulators have expressed concern that the complexity of AI/ML technology, particularly so-called "black box" models, could perpetuate disparate outcomes. Companies that use AI/ML for loan underwriting and pricing therefore need a strong fair lending compliance program and must be prepared to explain how their models work.
Regulators increase pressure
In recent months, the Consumer Financial Protection Bureau (CFPB) has issued a series of public statements indicating that it is closely monitoring how companies use AI/ML in credit decision-making. For example, at an October 2021 press conference, Director Rohit Chopra said, "While number-crunching machines may seem capable of removing human bias from the equation, that's not what is happening." In November 2021, Deputy Director Zixta Martinez commented, "we also know the dangers that technology can foster, such as black box algorithms perpetuating… discrimination in mortgage underwriting."
The Equal Credit Opportunity Act (ECOA) prohibits creditors from discriminating on the basis of certain prohibited characteristics (e.g., race, religion, marital status). In addition, under Regulation B, the ECOA's implementing regulation, a lender must provide a statement of specific reasons when taking adverse action against a loan applicant (for example, when a lender declines to grant a loan). In a July 2020 blog post, the CFPB acknowledged that the use of AI/ML may present challenges in providing specific adverse action reasons, and attempted to reduce regulatory uncertainty by outlining examples of flexibility under Regulation B's adverse action requirements.
However, on May 26, 2022, the CFPB issued a compliance circular that walks back the 2020 blog post and will complicate creditors' use of AI/ML models.
Specifically, in Circular 2022-03, the CFPB states that "ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions." The circular further states that "[c]reditors who use complex algorithms, including artificial intelligence or machine learning, in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action." With the circular's publication, companies that make credit decisions based on complex models that make it difficult or impossible to accurately identify the specific reasons for an adverse action (so-called "black box" algorithms) will be subject to scrutiny by the CFPB and may face regulatory or enforcement action.
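To make the notice requirement concrete, the sketch below shows one way a lender might derive principal reasons from a scoring model. It is a minimal illustration only: the linear model, feature names, weights, baseline values, and approval threshold are all invented for the example and are not drawn from the circular or any real underwriting system. A genuinely complex model would need a more sophisticated attribution method, which is precisely the difficulty the circular highlights.

```python
# Illustrative sketch only: one way a lender might derive the "specific
# principal reasons" that Circular 2022-03 requires. The model, feature
# names, weights, baselines, and threshold below are all hypothetical.

# Hypothetical linear scoring model: score = sum(weight * value).
WEIGHTS = {
    "debt_to_income_ratio": -2.0,      # higher DTI lowers the score
    "months_since_delinquency": 0.02,  # longer clean history raises it
    "credit_utilization": -1.5,
    "income_thousands": 0.03,
}
APPROVAL_THRESHOLD = 0.0

# Baseline values (e.g., portfolio averages) against which each
# feature's contribution to the denial is measured.
BASELINE = {
    "debt_to_income_ratio": 0.30,
    "months_since_delinquency": 48,
    "credit_utilization": 0.25,
    "income_thousands": 60,
}

def principal_reasons(applicant: dict, top_n: int = 4) -> list[str]:
    """Rank features by how far they pulled the score below baseline."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINE[name])
        for name in WEIGHTS
    }
    # The most negative contributions are the principal reasons.
    worst_first = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, c in worst_first[:top_n] if c < 0]

applicant = {
    "debt_to_income_ratio": 0.55,
    "months_since_delinquency": 6,
    "credit_utilization": 0.90,
    "income_thousands": 45,
}
score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
if score < APPROVAL_THRESHOLD:
    print("Adverse action reasons:", principal_reasons(applicant))
```

For a simple linear model like this one, each feature's contribution is transparent; the CFPB's point is that many AI/ML models offer no such straightforward decomposition, yet the notice obligation applies all the same.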
CFPB removes No-Action Letter protections
The CFPB previously released three policies aimed at promoting innovation, facilitating compliance, and providing increased regulatory certainty to companies offering innovative products, including a No-Action Letter (NAL) policy. An NAL provides a form of protection for companies regulated by the CFPB: in exchange for giving the CFPB the ability to examine the company's practices, the company is shielded from certain supervisory or enforcement actions. The CFPB issued a total of six approvals to various market participants, including major banks, fintech startups, and housing counseling agencies, among them a 2017 NAL for a credit program involving underwriting algorithms.
However, on May 24, the CFPB announced that it was reorganizing the office that issued NALs after determining that the policies were "ineffective." And on June 8, 2022, the CFPB announced that it was terminating, at a company's request, an NAL covering the company's use of AI/ML models in credit decisions. The CFPB's order indicates that the company planned to make significant changes to its underwriting and pricing model, and the company ultimately asked the CFPB to terminate the NAL so that it could make those changes without waiting for the CFPB's approval. Following the termination, the company reaffirmed that it remains committed to fair lending. However, the termination means that the company will no longer benefit from the limited protection offered by participation in the NAL program.
Build an internal AI compliance program before regulators call you
The use of AI/ML and algorithms is clearly a growing focus for regulators. Companies should therefore establish and maintain strong internal compliance programs alongside the development of AI programs. In particular, business, legal, and risk management teams need to understand how an algorithm was developed, what training data was used, and how the algorithm evolves over time. Mathematical modeling tools can be used to "verify the work" of an algorithm (and can supply information for required consumer notices). Reference models can also approximate what the outputs should be; if an algorithm's actual results diverge significantly from those expectations, companies should investigate further to ensure the algorithm has not developed unintended biases, as in the monitoring sketch below.
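As one illustration of that kind of monitoring, the sketch below computes a simple adverse impact ratio across demographic groups and flags any group falling below the four-fifths threshold. The group labels, decision-log format, and 0.8 cutoff are assumptions made for the example; the four-fifths rule is a common screening heuristic, not a legal standard the CFPB has endorsed for this purpose.

```python
# Minimal monitoring sketch, assuming the lender logs each decision with
# an applicant demographic group. The four-fifths (0.8) threshold is a
# common screening heuristic, not a standard the CFPB has endorsed here.
from collections import defaultdict

def adverse_impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, approved) pairs. Returns each group's approval
    rate divided by the most-favored group's approval rate."""
    approved: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical decision log: group A approved 80/100, group B 55/100.
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 55 + [("B", False)] * 45
for group, ratio in adverse_impact_ratios(log).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio is a starting point for investigation, not a conclusion; the follow-up studies described above would determine whether the disparity reflects a legitimate business justification or an unintended bias in the model.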
Scrambling to gather information only when a regulator calls carries significant risks: how an algorithm actually works is only one piece of the puzzle; how it was developed and trained is equally vital information, and documenting it takes time and effort. For this reason, companies should strengthen their compliance programs now, so that they are better positioned to respond to regulatory inquiries and avoid potential enforcement actions.
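On the documentation point, even a lightweight, structured record of how a model was built can shorten that scramble considerably. The sketch below is a hypothetical example of such a record; the field names and values are illustrative only, not a regulatory template.

```python
# Hypothetical model documentation record; field names are illustrative,
# not a regulatory template.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    version: str
    objective: str                    # what the model predicts, and why
    training_data_sources: list[str]  # provenance of every dataset used
    excluded_features: list[str]      # e.g., prohibited-basis variables
    fairness_tests: list[str]         # tests run, with dates and results
    signoffs: list[str] = field(default_factory=list)  # who approved it

record = ModelRecord(
    model_id="underwriting-score",
    version="2024.1",
    objective="Estimate 24-month default probability for personal loans",
    training_data_sources=["bureau_extract_2023Q4", "application_db"],
    excluded_features=["race", "sex", "marital_status", "religion"],
    fairness_tests=["adverse impact ratio across groups, 2024-01-15"],
)
```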