Openstream.ai Bridges Human-Machine Conversations With Next-Gen Voice Agents


In the rapidly evolving landscape of conversational artificial intelligence (AI), companies are vying to deliver solutions that can effectively bridge the gap between human and machine interactions. 

Openstream.ai’s newly patented software aims to help businesses interact with customers through its Enterprise Virtual Assistant (EVA) platform. The company said EVA enables the creation of AI avatars, virtual assistants and voice agents that can engage in human-like conversations without back-end complexity, scripted dialogue or hallucinations.

The conversational AI market is highly competitive, with players like IBM, Microsoft, Google, Amazon and Nuance Communications offering their own solutions. Many companies rely on generative AI (GenAI) for these interactions, an approach that can be prone to hallucinations and struggles with complex exchanges. Openstream’s neuro-symbolic approach, by contrast, combines the power of large language models (LLMs) with symbolic AI planning and reasoning.

“Our unscripted approach to dialogue management is different from other vendors in that we do not rely on a designed dialogue script,” Magnus Revang, chief product officer at Openstream, told PYMNTS. “By combining the ability to understand these actions, conditions and entities in dialogue (that’s the neuro part using LLMs) with the ability to reason and plan over these (that’s the symbolic part), we get extremely powerful capabilities.”
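To make that split concrete, here is a minimal, purely illustrative Python sketch of the general pattern Revang describes: a stubbed “neural” step maps an utterance to an action, entities and preconditions, and a “symbolic” step plans over them. Every name in it is hypothetical, and the LLM call is replaced by a hard-coded stub; this is not Openstream’s code or API.

```python
# Toy neuro-symbolic loop: a stubbed "neural" step extracts structure from an
# utterance; a "symbolic" step plans over it. All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Interpretation:
    action: str                                         # what the user wants done
    entities: dict = field(default_factory=dict)        # values the neural step extracted
    preconditions: list = field(default_factory=list)   # facts needed before acting


def interpret(utterance: str) -> Interpretation:
    """Stub for the LLM ('neuro') step: free text -> symbolic structure."""
    # A real system would prompt an LLM here; this hard-codes one example.
    return Interpretation(
        action="reschedule_appointment",
        entities={"new_day": "Friday"},
        preconditions=["existing_appointment_found", "identity_verified"],
    )


def plan(interp: Interpretation, known_facts: set) -> list:
    """Stub for the symbolic ('planning and reasoning') step."""
    steps = [f"ask_user({p})" for p in interp.preconditions if p not in known_facts]
    steps.append(f"execute({interp.action}, {interp.entities})")
    return steps


if __name__ == "__main__":
    interp = interpret("Can you move my appointment to Friday?")
    print(plan(interp, known_facts={"identity_verified"}))
    # ['ask_user(existing_appointment_found)',
    #  "execute(reschedule_appointment, {'new_day': 'Friday'})"]
```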

More Empathetic AI

Openstream recently patented an approach that aims to enhance virtual agents’ human-like understanding and responses, contributing to an improved user experience. 

“By having secondary modalities that display emotions, formal statements can be delivered with smiles, kindness, and/or compassion — which helps to ‘take the edge off,’” Revang said.

Revang said that most companies manage conversations by looking for the purpose of what a person says and fitting it into specific categories they’ve set up.

“So, if I say ‘I want to order a cheese pizza,’ it will likely map to the ‘order food’ intent with the slot for ‘dish’ filled by ‘pizza’ and the slot for ‘type’ filled by ‘cheese,’” he said. “It then maps these intents/slots into pre-scripted, templated responses.”
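For comparison, a stripped-down sketch of that conventional intent/slot pipeline might look like the following. The classifier is a stand-in stub and all names are illustrative assumptions, not any vendor’s actual implementation.

```python
# Conventional intent/slot pipeline: force the utterance into a predefined
# intent, fill slots, and answer from a pre-scripted template.

INTENT_TEMPLATES = {
    "order_food": "Got it, one {type} {dish} coming up. Anything else?",
}


def classify(utterance: str) -> tuple[str, dict]:
    """Stub intent classifier + slot filler (a real system would use an NLU model)."""
    text = utterance.lower()
    if "pizza" in text:
        slots = {"dish": "pizza"}
        if "cheese" in text:
            slots["type"] = "cheese"
        return "order_food", slots
    return "fallback", {}


def respond(utterance: str) -> str:
    intent, slots = classify(utterance)
    template = INTENT_TEMPLATES.get(intent, "Sorry, I didn't catch that.")
    return template.format(**slots)


print(respond("I want to order a cheese pizza"))
# -> Got it, one cheese pizza coming up. Anything else?
```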

Revang said that brands can be perceived as more empathic by responding with appropriate emotions, yet to react with the correct emotion, the model needs to know, or at least evaluate, the user’s current emotional state and personality traits.

“When the user says, ‘My car hit a tree,’ the system will recognize that as a negative emotional state and will first ask the user if everyone is okay and nobody is injured,” Revang said. For example, the software might say, “Oh my God, very sorry to hear that; I hope no one is injured and you are okay,” instead of jumping to a more regimented response: “Would you like to file a claim?”
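A toy illustration of that behavior, using a simple keyword heuristic in place of a real emotion model, might look like this; the cue words and phrasing are assumptions for illustration only, not Openstream’s model.

```python
# Detect a likely negative emotional state and lead with an empathetic
# check-in before the task-oriented step.

NEGATIVE_CUES = {"hit", "crash", "accident", "hurt", "injured"}


def detect_emotion(utterance: str) -> str:
    words = set(utterance.lower().replace(".", "").split())
    return "distressed" if words & NEGATIVE_CUES else "neutral"


def reply(utterance: str) -> str:
    task_step = "Would you like to file a claim?"
    if detect_emotion(utterance) == "distressed":
        return ("I'm so sorry to hear that. Is everyone okay and nobody injured? "
                + task_step)
    return task_step


print(reply("My car hit a tree."))
# -> I'm so sorry to hear that. Is everyone okay and nobody injured? Would you like to file a claim?
```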

Openstream is one of many companies working on conversational AI. Voice interaction offers a seamless way for users to interact, and even transact, with AI platforms. According to PYMNTS Intelligence, voice assistants are now used by 86 million U.S. consumers monthly, with nearly one-third of U.S. millennials using voice assistants for bill payments.

This integration of voice into daily tasks highlights its convenience. However, the ease of use does not guarantee perfection. Challenges persist in the realm of voice AI, extending beyond achieving natural conversation. Concerns regarding privacy, security and the ethical handling of voice data are prominent issues that need resolution.

Keyvan Mohajer, CEO and co-founder of conversational intelligence platform SoundHound, reflected in an interview last year with PYMNTS on the early days of voice AI. Users initially sought to engage in expansive, science fiction-like dialogues with their devices, only to find the technology limited to basic functions like playing music, setting timers and providing weather updates. This gap between expectation and reality led to a more tempered view of voice AI’s potential.

Despite these hurdles, PYMNTS Intelligence has found a significant interest in voice technology, with 63% of consumers indicating they would use voice technology if it matched human capabilities. Furthermore, 58% stated they would opt for voice technology for its ease and convenience over manual tasks, while 54% preferred it for its speed compared to typing or touchscreens.

The Future of Conversational AI

While the company does not project trends as a matter of policy, Revang said they “strive to achieve human-level understanding with human-like interaction.”

Openstream focuses on several key areas to enhance the quality of the conversational experiences and relationships its clients can foster using its platform.

“We are constantly working to improve our AI avatars and give them the full fidelity of human emotions and genuine personalities,” Revang said. The company employs various techniques to avoid the uncanny valley while emulating human expressiveness.

The company is also working to reduce the pesky problem of LLM-generated hallucinations.

“We are currently using LLMs extensively where hallucinations can be contained,” Revang said. “We are rolling out our own model variants and fine-tuning datasets to further increase our ability to turn the chaotic world of humans into our symbolic representation for planning and reasoning.”

The firm is also developing tools for constructing dialogue domains without coding, with the goal of democratizing their creation using its neuro-symbolic dialogue manager.

“We want any business user to be able to design extremely complex dialogue domains — by using visual tools and ingesting existing business documents,” Revang said.