Conduent Forms Generative AI Partnership With Microsoft


Business solutions and services company Conduent has formed an artificial intelligence-focused partnership with Microsoft.

The partnership, announced in a Monday (April 29) press release, will begin by exploring generative AI use for healthcare claims management, customer service and fraud detection.

“Generative AI has the power to transform how businesses and organizations operate — serving as a force-multiplier to improve efficiencies and enhance customer experiences across a range of industries,” Svetlana Reznik, general manager for data and AI at Microsoft, said in the release. “Our collaboration with Conduent will help accelerate AI adoption for their customers in a secure cloud environment.”

The companies are working on three generative AI pilots, including one that uses Microsoft’s Azure AI Document Intelligence and Azure OpenAI Service to extract data from healthcare claims documents so claims can be resolved more quickly, according to the release.
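The release does not describe how the claims pilot is built. As an illustrative sketch only, the snippet below shows one common way the two named services can be chained: Azure AI Document Intelligence reads the text of a claims PDF, and an Azure OpenAI chat model is then asked to pull out a few fields. The endpoints, keys, deployment name and field list are hypothetical placeholders, not details from the announcement.

```python
# Illustrative sketch only -- not Conduent's actual pipeline. Assumes an Azure AI
# Document Intelligence resource and an Azure OpenAI deployment already exist;
# the endpoints, keys and deployment name below are hypothetical placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient  # Azure AI Document Intelligence SDK
from azure.core.credentials import AzureKeyCredential
from openai import AzureOpenAI

DOC_ENDPOINT = "https://<doc-intel-resource>.cognitiveservices.azure.com/"
DOC_KEY = "<document-intelligence-key>"
AOAI_ENDPOINT = "https://<openai-resource>.openai.azure.com/"
AOAI_KEY = "<azure-openai-key>"
AOAI_DEPLOYMENT = "gpt-4o"  # name of your Azure OpenAI model deployment (assumption)

# 1) Read the full text of a claims document with the prebuilt layout model.
doc_client = DocumentAnalysisClient(DOC_ENDPOINT, AzureKeyCredential(DOC_KEY))
with open("claim.pdf", "rb") as f:
    poller = doc_client.begin_analyze_document("prebuilt-layout", document=f)
claim_text = poller.result().content

# 2) Ask an Azure OpenAI chat model to summarize a few fields from that text.
aoai = AzureOpenAI(azure_endpoint=AOAI_ENDPOINT, api_key=AOAI_KEY, api_version="2024-02-01")
response = aoai.chat.completions.create(
    model=AOAI_DEPLOYMENT,
    messages=[
        {"role": "system", "content": "Extract claim number, patient name and billed amount as JSON."},
        {"role": "user", "content": claim_text},
    ],
)
print(response.choices[0].message.content)  # JSON-style summary of the claim
```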

The companies also hope to use the Azure services to increase the volume and speed of fraud detection processing in payments and to improve customer service agent responsiveness, the release said.

Meanwhile, PYMNTS took a closer look Monday at AI’s use in the small business world and the ways the technology can affect small businesses’ growth.

“Enhancing customer service, streamlining productivity, strengthening data intelligence and real-time analysis, democratizing access to working capital and jumpstarting marketing and content creation are all ways in which AI has already started to impact the growth prospects of small businesses,” the report said.

While tech giants like Meta, Microsoft and Google increasingly target enterprise organizations with their products, they are also bringing to market AI solutions aimed at smaller operations.

For example, PYMNTS reported last week that Microsoft unveiled its smallest AI model yet, aimed at businesses with limited resources.

“Small language models have a lower probability of hallucinations, require less data (and less preprocessing), and are easier to integrate into enterprise legacy workflows,” Narayana Pappu, CEO at Zendata, a data security and privacy compliance solutions company, told PYMNTS. “Most companies keep 90% of their data private and don’t have enough resources to train large language models.”

Google, meanwhile, is working on ways to transform business tasks into more intelligent, automated processes.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.