Brandy Wood, vice president and head of client experience products at Carat from Fiserv, told PYMNTS that despite all the headlines and controversies swirling around artificial intelligence, the concept is nothing new.
“It’s been around since the 1950s,” she said in an interview conducted as part of the “What’s Next in Payments: Payments and GenAI” series. That era produced the first work on the mathematical feasibility of AI. What hindered its development, perhaps not surprisingly, were the limitations of the technology of the time: computers were underpowered and prohibitively expensive.
“But now technology is in a place where it’s accessible to the masses, and there’s an open-source community that can experiment” with AI and emerging use cases, she said.
With widespread adoption and accessibility, AI has been able to leap from theory to real-world action, as large sets of data and content can be harnessed across any number of verticals, she said. Generative models can take in information they have not previously encountered and create new content spanning text and images.
Within financial services and payments, natural language chat and chat-based functionality are finding wide use, as customers interact with businesses and banks (and their call centers) to find out about their accounts, products and services, Wood said. Chat functions are also being used within businesses themselves to train employees and improve call center performance.
Data-driven applications are being created and used by payments companies to improve decision-making, particularly fraud mitigation.
“Fraud is a pain point for the payments industry,” she contended, adding that the fraudsters are getting more sophisticated on a seemingly daily basis.
But payment providers are getting more sophisticated too. For example, technology is in place to take advantage of transactional datasets, along with AI, to make better fraud decisions in real time.
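To make the idea concrete, the sketch below shows how a transactional dataset might feed a simple anomaly-scoring model that flags suspicious payments as they arrive. It is a minimal, hypothetical illustration in Python using scikit-learn's IsolationForest; the feature names, thresholds and synthetic data are assumptions for illustration, not a description of Fiserv's systems.

```python
# Hypothetical sketch: scoring incoming transactions for fraud risk with an
# anomaly-detection model. Feature choices, thresholds and data are illustrative
# assumptions, not a description of any provider's production system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic historical transactions: [amount_usd, hour_of_day, merchant_risk_score]
rng = np.random.default_rng(seed=7)
history = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.8, size=5000),  # typical purchase amounts
    rng.integers(0, 24, size=5000),                 # time of day
    rng.uniform(0.0, 0.3, size=5000),               # mostly low-risk merchants
])

# Train an unsupervised anomaly detector on past behavior.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=7)
model.fit(history)

def score_transaction(amount_usd, hour_of_day, merchant_risk_score):
    """Return an anomaly score and a simple approve/review decision."""
    features = np.array([[amount_usd, hour_of_day, merchant_risk_score]])
    score = model.decision_function(features)[0]  # lower = more anomalous
    decision = "review" if score < 0 else "approve"
    return score, decision

# Example: a large, late-night purchase at a higher-risk merchant.
print(score_transaction(4200.00, 3, 0.9))
```

In a real deployment, a score like this would be one signal among many, combined with rules, device data and network-level intelligence before a decision is returned to the merchant or issuer.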
Fiserv (and Wood’s team in particular) has been focused on synthesizing a “single control center” for the company’s corporate, merchant and bank clients, tied to large language models (LLMs). The aim is to help client firms battle fraudsters while providing a better experience to end users and recommending products and services in real time and in context.
“These business users do not have to be as technically intimate with the products and solutions, but they are still able to extract the data and information that they need to run their day-to-day businesses,” she said.
Collecting and analyzing that data demands new rules and regulations, said Wood, who maintained that there needs to be “a balance between innovation and governance.”
Fiserv, for its part, has developed a responsible AI governance model to ensure that, as it powers new use cases, the company evaluates the datasets being used and reviews data privacy regulations before information is applied to financial services activities.
Looking ahead, Wood said, LLMs and “small language models” will be used in tandem to improve the quality of the output and of the datasets themselves. For businesses seeking to bring AI fully into lending, payments and banking, automating processes and client servicing, “our role is to help guide them through challenges — and where AI can help them.”
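One common pattern for pairing models of different sizes is a cascade, in which a compact model handles routine requests and defers to a larger model when its output looks uncertain. The sketch below is a hypothetical Python illustration of that idea; the `small_model` and `large_model` functions are placeholder stubs standing in for whatever hosted or local models a firm actually uses, and are not Fiserv APIs.

```python
# Hypothetical sketch of a small/large language model cascade. The two model
# functions are placeholder stubs; in practice they would call a locally hosted
# small model and a larger hosted LLM. None of this reflects a specific vendor API.

def small_model(prompt: str) -> tuple[str, float]:
    """Placeholder small language model: returns (answer, confidence)."""
    if "balance" in prompt.lower():
        return "Your available balance is shown on the account summary page.", 0.92
    return "I'm not sure how to answer that.", 0.30

def large_model(prompt: str) -> str:
    """Placeholder large language model used as a fallback for harder queries."""
    return f"[Detailed answer generated by the larger model for: {prompt!r}]"

def answer(prompt: str, confidence_threshold: float = 0.75) -> str:
    """Try the small model first; escalate to the large model when confidence is low."""
    draft, confidence = small_model(prompt)
    if confidence >= confidence_threshold:
        return draft
    return large_model(prompt)

print(answer("What is my account balance?"))       # handled by the small model
print(answer("Why was this wire transfer held?"))  # escalated to the large model
```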