Banks have a Goldilocks problem: Authentication processes cannot be so rigorous that they frustrate legitimate consumers, but they also cannot be so lax that fraudsters can easily exploit them. In the new Digital Fraud Tracker, Andrew Sloper, Chase’s head of digital identity and authentication, tells PYMNTS how machine learning tools provide a layered, preventative approach without sacrificing a seamless user experience.
Consumers want their digital banking experiences to do more than just provide security, and processes that are not seamless could frustrate them into seeking alternatives.
This means that the financial services market is facing a Goldilocks conundrum. Authentication measures cannot be so rigorous that they alienate legitimate customers, but they also cannot be so lax that bad actors gain access with ease. The balance has to be just right.
Banks are employing artificial intelligence (AI) and machine learning (ML) tools to strike that balance.
Recent data found that 77 percent of banks are already putting AI solutions to use, Chase among them. The bank is embracing AI and ML to help customers conduct business while preventing fraudsters from making off with data or financial assets. Andrew Sloper, head of digital identity and authentication at Chase, recently spoke to PYMNTS about how these solutions allow the bank to deliver seamless and secure user experiences while enabling a preventative approach to fraud.
“What we aim to do as a bank is keep in mind that our main focus is to protect customers, their data and their money and deliver a digital experience,” Sloper said.
A Layered Approach to Security
Chase’s authentication approach involves multiple levels of security that keep its operations safe, Sloper explained, and includes solutions that monitor for activities like bot-based, distributed denial of service (DDoS) and malware attacks that might compromise devices or sessions. The bank also uses two-factor authentication (2FA) technology — such as temporary passcodes and cryptographic tokens on mobile devices — to ensure that legitimate users are who they claim to be.
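Chase does not detail the mechanics of its 2FA publicly, but the "temporary passcode" style of check generally works along the lines of the time-based one-time password (TOTP) scheme sketched below. This is a minimal illustration using only Python's standard library; the secret and parameters are examples, not anything tied to Chase's systems.

```python
# Minimal time-based one-time passcode (TOTP) sketch, RFC 6238 style,
# using only the Python standard library. Illustrative only; it does not
# describe Chase's actual 2FA implementation.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, step=30, digits=6, at=None):
    """Derive a temporary passcode from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def verify(secret_b32, submitted, window=1, step=30):
    """Accept the current code or one step either side to allow for clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, step=step, at=now + drift * step), submitted)
        for drift in range(-window, window + 1)
    )


demo_secret = "JBSWY3DPEHPK3PXP"  # example Base32 secret, not a real credential
print(verify(demo_secret, totp(demo_secret)))  # True
```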
“We never rely singly on one of our control layers,” he said. “Those controls span our perimeter defenses.”
The data collected from these layers of protection is fed into Chase’s underlying AI and ML systems, which review and find important patterns in the information. These tools can build more informed customer profiles and effectively determine if users are legitimate or fraudulent based on their past behaviors. Sloper noted that having a better understanding of customers’ behaviors helps the bank provide better services.
“The heart of our approach is very much a customer-centric approach to authentication [in which we] profile the data that is used to distinguish between a customer’s typical good behavior and a fraudster’s bad or suspicious behavior,” Sloper said, adding that such insights can help Chase determine the right balance of security and seamlessness.
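The specifics of Chase's profiling are not public, but the underlying idea of comparing a new session against a customer's own history can be sketched simply. The features, data and scoring below are hypothetical illustrations, not the bank's actual model.

```python
# Minimal sketch of behavior-based profiling: score a new session by how
# far it deviates from a customer's historical baseline. Feature names
# and data are hypothetical.
import numpy as np

# Columns (illustrative): login hour, session length in seconds, pages viewed.
history = np.array([[9, 300, 5],
                    [10, 280, 4],
                    [9, 320, 6],
                    [11, 290, 5]], dtype=float)   # this customer's past sessions

mean, std = history.mean(axis=0), history.std(axis=0) + 1e-9  # avoid divide-by-zero


def deviation_score(session):
    """Average absolute z-score: how far the session sits from this customer's norm."""
    return float(np.mean(np.abs((np.asarray(session, dtype=float) - mean) / std)))


print(deviation_score([10, 295, 5]))   # small -> looks like the customer's typical behavior
print(deviation_score([3, 40, 25]))    # large -> behavior worth a closer look
```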
The bank must also ensure that customers can access their accounts as seamlessly as possible. Authentication measures scale with risk: most customers typically engage in low-risk activities, such as checking their account balances, that may require only 2FA. Riskier transactions, such as international wire transfers, trigger additional checks like “just-in-time authentications” that ask customers to meet further criteria before the transactions go through.
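In pseudocode terms, that kind of risk-based step-up decision might look like the sketch below. The activity names, risk weights and threshold are illustrative assumptions rather than Chase's actual policy.

```python
# Illustrative risk-tiered ("step-up") authentication decision. Activity
# names, risk weights and the threshold are hypothetical examples.

# Hypothetical inherent risk of each activity, on a 0-1 scale.
ACTIVITY_RISK = {
    "check_balance": 0.1,
    "pay_bill": 0.4,
    "international_wire": 0.9,
}


def required_checks(activity, anomaly_score):
    """Low-risk activity may need only 2FA; riskier ones add just-in-time checks."""
    checks = ["two_factor"]
    if ACTIVITY_RISK.get(activity, 0.5) + anomaly_score >= 0.7:
        checks.append("just_in_time")  # extra criteria before the transaction goes through
    return checks


print(required_checks("check_balance", 0.05))       # ['two_factor']
print(required_checks("international_wire", 0.1))   # ['two_factor', 'just_in_time']
```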
“It’s looking at all our experiences to make sure [they] hit the sweet spot for the customers,” Sloper said. “If there’s too much friction, then it’s a pain, and they’re not going to use the service. If there’s too little friction, they won’t trust us, and they won’t use the service.”
The ML Triple Threat
Keeping the Chase platform trustworthy and seamless means the bank must constantly monitor its transactions to stay vigilant against fraudsters. It uses ML solutions to detect fraud and highlight suspicious activities. ML solutions have three main functions at Chase, Sloper explained, the first of which is supervised ML. These tools review data and take actions based on specific patterns.
“With supervised machine learning, we say, ‘If we spot fraud, can we correlate that back to the data if it follows a similar pattern?’” he said.
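A generic version of that supervised pattern-matching can be sketched with an off-the-shelf classifier: train on sessions already labeled as fraud or legitimate, then score new sessions that follow a similar pattern. The features and data below are synthetic, and the model choice is an assumption, not a description of Chase's systems.

```python
# Generic supervised-learning sketch: fit a classifier on sessions that
# were already labeled fraud / not fraud, then score new sessions that
# follow a similar pattern. Synthetic data; hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per session: [login_hour, transfer_amount, new_payee_flag]
legit = np.column_stack([rng.normal(12, 3, 500), rng.normal(200, 80, 500), rng.integers(0, 2, 500)])
fraud = np.column_stack([rng.normal(3, 1, 50), rng.normal(2500, 500, 50), np.ones(50)])
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 50)   # 1 = confirmed fraud

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new session that resembles past fraud (3 a.m. login, large transfer, new payee).
print(model.predict_proba([[3, 2600, 1]])[0][1])   # probability the session is fraudulent
```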
Chase also uses unsupervised ML to review, collect and analyze data for unusual behaviors that might denote new fraud attempts.
“The unsupervised model looks at the data coming in and says, ‘We are detecting new patterns that lead us to conclude that this pattern of activity is suspicious,’” Sloper said. “This could be a suspiciously fast transaction being set up or unusual navigation through our site that prompts something to [be investigated].”
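In generic terms, that kind of unsupervised detection can be approximated with an anomaly detector trained only on unlabeled traffic. The features below (seconds to set up a transfer, pages visited) and the detector choice are illustrative assumptions.

```python
# Unsupervised sketch: flag sessions whose pattern is unusual relative to
# the bulk of traffic, with no fraud labels at all. Hypothetical features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Mostly normal sessions: transfers set up in roughly 2-4 minutes, ~5-10 pages viewed.
normal_sessions = np.column_stack([rng.normal(180, 40, 1000), rng.normal(7, 2, 1000)])

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_sessions)

# A suspiciously fast transfer set up after very little navigation.
suspicious = [[12, 2]]                          # 12 seconds, 2 pages
print(detector.predict(suspicious))             # [-1] means flagged as anomalous
print(detector.decision_function(suspicious))   # more negative = more unusual
```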
The third and final way Chase uses ML is to enact changes based on the collected data. Having a clearer understanding of new and emerging fraud threats helps banks understand which actions are necessary to respond to them.
“It acts as a way to recommend additional rules or safeguards based on the data we’re seeing,” he said.
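One common way to turn learned patterns into candidate rules is to fit an interpretable model and read its splits back as if-then safeguards for analysts to review. The sketch below uses a shallow decision tree on synthetic data; it illustrates the general technique, not Chase's tooling.

```python
# Sketch of turning learned patterns into human-readable candidate rules:
# fit a shallow decision tree on labeled sessions and print its splits as
# if-then rules an analyst could review. Synthetic data; hypothetical features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)

# Hypothetical features: [transfer_amount, seconds_to_set_up_transfer]
legit = np.column_stack([rng.normal(200, 80, 500), rng.normal(180, 40, 500)])
fraud = np.column_stack([rng.normal(2500, 500, 50), rng.normal(20, 5, 50)])
X, y = np.vstack([legit, fraud]), np.array([0] * 500 + [1] * 50)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each branch reads as a candidate safeguard, e.g.
# "if the amount is large and setup time is very short, require extra review".
print(export_text(tree, feature_names=["transfer_amount", "setup_seconds"]))
```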
ML is doing more than helping Chase pinpoint potential acts of fraud, Sloper said. These tools are also enabling the bank to take a preventative approach to fighting fraud by analyzing data in real time and finding activities that could point to more serious fraud threats. This reverses the traditional model of responding to fraud only after it has occurred.
“It essentially allows us to detect patterns as they emerge, even ahead of fraud being committed,” he said. “It helps [us] to take a more proactive approach.”
Having these solutions in place helps the bank better understand each customer’s risk level and the most appropriate authentication solutions. Customers can face less friction as they access their bank information, while the bank can feel confident that potential fraudsters have been detected. Preventative fraud approaches like these could be the key to establishing the trust that both banks and customers need to succeed.