US banks’ use of artificial intelligence to catch fraud and assess creditworthiness is focus of emerging Fed interest
12 January 2021 00:00 by Neil Roland
US regulators are starting to explore how to oversee banks’ use of artificial intelligence to prevent fraud such as identity theft and evaluate creditworthiness of potential borrowers, Federal Reserve Governor Lael Brainard said.
Machine learning can expand access to credit, especially among underserved consumers and businesses without traditional credit histories, but also might amplify discrimination by relying on historical data with embedded racial bias, she said in a speech.
As regulators, Brainard said, “we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks.”
She continued: “To that end, we are exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI.”
The Fed and other banking regulators may issue a formal “request for information” on the risk management of AI applications in financial services, Brainard said.
A request for information, a first step in the formal regulatory process, can precede a rule proposal by months or more.
As a first, more informal solicitation of public input, the Fed is holding a symposium today with academics.
The Fed also wants to hear from banks, technology companies, consumer advocates, civil rights groups, and merchants, among others, Brainard said.
— Bank security —
Banks are increasingly interested in using AI to improve security by monitoring, detecting and preventing identity theft, impostor scams and other fraud.
AI-based tools can be more useful as financial services become more digitized and move to web-based platforms.
These tools can go through massive amounts of data to identify suspicious activity with greater speed and accuracy so firms can potentially respond promptly.
Identity theft, impostor scams and other fraud cost consumers more than $1.9 billion in 2019, the Federal Trade Commission said.
— Credit evaluations —
Machine-learning models are also being used to analyze data for credit decision-making and credit-risk analysis, yielding insights that may not be available from traditional methods.
These models can assess the creditworthiness of consumers who lack traditional credit histories.
About 26 million Americans don’t have a credit record, and another 19.4 million lack enough recent credit data to generate a credit score, according to the US Consumer Financial Protection Bureau.
Black and Latino consumers disproportionately fall into these categories.
While machine learning can increase these groups’ access to credit, “it is important to be keenly alert to potential risks around bias and inequitable outcomes,” said Brainard, the Fed’s sole Democrat.
AI models built on historical data or past decisions that reflect prejudice can exacerbate racial gaps in credit access.
“It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes,” she said.