US banks’ use of artificial intelligence to catch fraud, assess creditworthiness is focus of emerging Fed interest
12 Jan 2021 12:00 am by Neil Roland
US regulators are starting to explore how to oversee banks’ use of artificial intelligence to prevent fraud such as identity theft and evaluate creditworthiness of potential borrowers, Federal Reserve Governor Lael Brainard said.
Machine learning can expand access to credit, especially among underserved consumers and businesses without traditional credit histories, but also might amplify discrimination by relying on historical data with embedded racial bias, she said in a speech.
As regulators, Brainard said, “we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks.”
She continued: “To that end, we are exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI.”
The Fed and other banking regulators may issue a formal “request for information” on the risk management of AI applications in financial services, Brainard said.
A request for information is a first step in the formal regulatory process and typically precedes any rule proposal by months or more.
In the central bank’s first, more informal solicitation of public input, the Fed is holding a symposium today with academics.
The Fed also wants to hear from banks, technology companies, consumer advocates, civil rights groups, and merchants, among others, Brainard said.
— Bank security —
Banks are becoming increasingly interested in using AI to bolster security by monitoring, detecting and preventing identity theft, impostor scams and other fraud.
AI-based tools can be more useful as financial services become more digitized and move to web-based platforms.
These tools can go through massive amounts of data to identify suspicious activity with greater speed and accuracy so firms can potentially respond promptly.
Identity theft, impostor scams and other fraud cost consumers more than $1.9 billion in 2019, the Federal Trade Commission said.
— Credit evaluations —
Machine-learning models are also being used to analyze data on credit decisions and credit risk, yielding insights that may not be available from traditional methods.
These models can assess the creditworthiness of consumers who lack traditional credit histories.
About 26 million Americans don’t have a credit record, and another 19.4 million lack enough recent credit data to generate a credit score, according to the US Consumer Financial Protection Bureau.
Black and Latino consumers disproportionately fall into these categories.
While machine learning can increase these groups’ access to credit, “it is important to be keenly alert to potential risks around bias and inequitable outcomes,” said Brainard, the Fed’s sole Democrat.
AI models built on historical data or past decisions that reflect prejudice can exacerbate racial gaps in credit access.
“It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes,” she said.