America’s consumer protection watchdog says it’s monitoring the banking sector’s use of AI-powered chatbots.
The Consumer Financial Protection Bureau (CFPB) said Tuesday (June 6) that it had received a number of complaints from customers frustrated by their interactions with the artificial intelligence (AI) chatbots banks use to answer questions or resolve problems.
“To reduce costs, many financial institutions are integrating artificial intelligence technologies to steer people toward chatbots,” CFPB Director Rohit Chopra said in a news release. “A poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”
The release notes that roughly 37% of Americans interacted with a banking chatbot last year, a figure projected to climb, and that all of the top 10 commercial banks in the country use chatbots to some degree.
The CFPB said its analysis of the issue found that banks risk providing customers with inaccurate information or failing to protect consumer data and privacy, either of which could violate consumer financial protection laws.
“When chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm,” the release said.
“It could lead the consumer to select the wrong product or service that they need. There could also be an assessment of fees or other penalties should consumers receive inaccurate information on making payments.”
Last month, Chopra said the CFPB was intensifying its AI regulation efforts, saying the agency had “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges.”
Those challenges include mismanaged automated systems at banks that have led to wrongful home foreclosures, car repossessions and lost benefit payments, all of which have drawn CFPB fines in the past year.
“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra told the Associated Press. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”
Meanwhile, PYMNTS spoke earlier this week with i2c CEO and Chairman Amir Wain about the need for human oversight when AI is deployed in the financial world.
He pointed to generative AI’s tendency to “hallucinate,” or produce fabricated results, as a particular sore point for financial services firms.
“Based on the quality standard and compliance requirements in financial services, we’ve got to be careful how we use this technology in a compliant manner,” he said. “We cannot be at the bleeding edge of technology dealing with people’s money and funds … we need to put a compliant framework around the tool.”