Back in the olden days – circa 1950 and before – a customer who wanted a loan would walk into a bank and sit across a desk from a man in a blue suit with a red tie. It was, says ZestFinance founder and CEO Douglas Merrill, always a man. Always a blue suit. Always a red tie.
Blue Suit would listen to the customer’s pitch and decide whether to lend that person money. It was a very subjective process, Merrill said. If Blue Suit’s kids were on the soccer team with the applicant’s kids, then the banker would likely consider that person trustworthy and would agree to the loan, regardless of how likely the customer truly was to pay it back. Conversely, a customer whose kid didn’t play soccer with the banker’s might be overlooked, even if he was likelier to make good on his debt.
Merrill said this approach was fundamentally unfair, so along came Fair, Isaac and Company (now Fair Isaac Corporation, better known as FICO) to square it up. It was founders Bill Fair and Earl Isaac who, in 1956, conceived the credit scoring system that is so widely known and trusted today.
Built on the leading mathematics of the time, automated FICO scores soon replaced Blue Suit – but Merrill said the underlying logistic regression models had their own weaknesses when it came to handling missing or erroneous data, which could lead to inaccurate scores being assigned.
The approach also meant that applicants who lacked a credit history could not prove their willingness or ability to pay – when, in fact, those customers may have been recent immigrants or college graduates whose earning prospects would have supported paying back their loans.
Merrill, who hails from Google, looked at the search engine’s complicated machine learning algorithms for identifying and compensating for errors in user data and websites, and wondered whether they could have a second life in the credit space. Thus, in 2009, ZestFinance was born.
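To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (not ZestFinance’s actual system): a conventional logistic-regression scorecard cannot score an applicant with missing fields at all, while a gradient-boosted tree model, one common machine learning technique, can still produce an estimate. The feature names and numbers below are hypothetical.

```python
# Purely illustrative sketch -- not ZestFinance's actual models. It contrasts a
# logistic-regression-style scorecard, which cannot score an applicant with
# missing fields, against a gradient-boosted tree model that tolerates them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, utilization, months of credit history.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# A "thin-file" applicant with no credit history on record.
applicant = np.array([[1.2, np.nan, np.nan]])

# The classic approach needs every field filled in before it can produce a score.
scorecard = LogisticRegression().fit(X, y)
try:
    scorecard.predict_proba(applicant)
except ValueError as exc:
    print("Scorecard cannot handle missing data:", exc)

# HistGradientBoostingClassifier supports missing values natively,
# so the thin-file applicant can still be scored.
model = HistGradientBoostingClassifier(random_state=0).fit(X, y)
print("Estimated default risk:", model.predict_proba(applicant)[0, 1])
```

The point is not the particular libraries; it is that some model families need every field filled in, while others tolerate gaps in the data.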
Machine learning may not be the first thing people think about when considering how to improve the lending business, but Merrill is more than ready to contend otherwise in an upcoming webinar with PYMNTS titled, “How to Apply Machine Learning in your Lending Business (and Explain the Outcomes to Your Regulator).”
On April 24 at 1:00 p.m. ET, Merrill and Karen Webster will discuss how machine learning-based underwriting can help lenders approve more borrowers and significantly reduce defaults – yet, so far, only a small vanguard of lenders has put machine learning to work in the credit business.
The webinar will explore reasons that lenders hesitate to become early adopters, from the complexity of machine learning models to AI’s notorious “black box” problem, which makes it hard to explain machine learning-generated results to regulators.
Merrill said lenders can integrate machine learning into existing workflows to start benefitting from it without doing a total rebuild of their business processes.
A machine learning model’s ability to handle errors and missing data can help those lenders start safely approving applicants they previously would have rejected – applicants, Merrill noted, who have already found the lender and applied for a loan.
That means the lender has already spent new customer acquisition dollars on that person. Rejecting the application unnecessarily wastes those dollars, Merrill said, while being able to approve it helps the business get its money’s worth on that acquisition.
The model can also help identify applicants whose traditional scores are misleadingly high – effectively swapping out bad candidates for good ones.
In Merrill’s experience, regulators have been cautiously interested in machine learning for underwriting; they simply lack the proper framework to understand the new math. That’s a common problem with AI, said Merrill. Machine learning is inherently a black box, he said, due to its complexity and the potential for tiny changes to have massive impacts on outcomes.
He said the onus is on modelers to be able to crack open the black box and explain how their model works. If they can’t, then financial services management teams would be foolhardy to sign on for such an unknown risk, Merrill said.
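What might cracking open the box look like in practice? As a simplified sketch, again assuming a hypothetical model and made-up feature names rather than ZestFinance’s method, a modeler could pair a global importance check with rough per-applicant “reason codes” using standard open-source tooling.

```python
# Minimal sketch (not ZestFinance's method) of how a modeler might surface
# explanations for a fitted credit model; features and data are hypothetical.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["income", "utilization", "history_months"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
model = HistGradientBoostingClassifier(random_state=0).fit(X, y)

# Global view: which inputs actually drive the model's decisions?
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")

# Local view: a crude per-applicant "reason code" -- how much the estimated
# default risk moves when one field is replaced by the portfolio average.
applicant = np.array([[1.2, 2.0, 0.3]])
baseline = model.predict_proba(applicant)[0, 1]
for i, name in enumerate(features):
    probe = applicant.copy()
    probe[0, i] = X[:, i].mean()
    print(f"{name} contribution: {baseline - model.predict_proba(probe)[0, 1]:+.3f}")
```

The global view shows which inputs drive the model overall, while the per-applicant deltas approximate the kind of reason codes lenders already provide in adverse action notices.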
Challenges such as operations, monitoring and reporting are solved problems for other technologies, he said, but that’s not yet the case in the AI space. Modelers must take the time to address those questions, said Merrill, because they are what chief risk officers will want answered when considering whether to use the technology at their organizations.
Risk management teams, as well as regulators, have legitimate reasons to demand a lens into the inner workings of a machine learning system before they can trust it to handle lending decisions. Merrill said this means coming to a consensus about what kinds of explainability are acceptable in order to build momentum for this new and improved way of doing credit.
Tune in to the webinar at 1:00 p.m. ET on April 24 to learn more.