Amazon aspires for Alexa to be an integral part of the consumer’s household.
During its product launch on Wednesday (Sept 20), Amazon unveiled enhancements to its Alexa voice assistant, introducing intuitive decision-making capabilities derived from natural conversation.
This is in response to the rising consumer demand for advanced voice technology in daily life, with the aim of making Alexa more human-like. In fact, 54% of consumers anticipate choosing voice technology over typing or using a touchscreen in the future, according to an April PYMNTS report titled “How Consumers Want to Live in the Voice Economy.”
And in the past year, about 27% of consumers have engaged with a voice-activated device or speaker, highlighting the growing integration of voice technology in everyday life. Moreover, 22% of Gen Z consumers have a strong willingness to invest over $10 per month for access to a voice assistant that offers intelligence and reliability on par with a human counterpart.
Underscoring this demand, Amazon announced during its Wednesday presentation that customers have connected almost a billion devices to Alexa, including Echo smart home devices, and that they engage with these devices tens of millions of times per hour.
It’s not limited to timers and music, either. Smart home usage is up 25% from last year, information requests have surged by more than 50% year over year, and the number of customers shopping by voice has grown 35% year over year. Amazon noted that Alexa has become an integral part of millions of customers’ lives, even as the artificial intelligence (AI) industry’s current direction has prioritized bringing generative AI to phones and web browsers.
“That makes a lot of sense because that’s where customers are, but to date, generative AI has been primarily focused on creators, not consumers,” said Dave Limp, Amazon’s SVP of devices and services, during the company’s livestream event on Wednesday. “But when you’re building an AI like this for the home, you have to think about it very, very differently. And it all starts with world-class devices that seamlessly fit into customers’ lives.”
In pursuit of these goals, Amazon has been exploring generative AI techniques to enhance its ability to understand conversational phrases, offer appropriate responses, improve contextual comprehension, and effectively manage multiple requests within a single command.
Generative AI has looked like the most promising avenue for advancement for some time. However, while digital assistants have long included AI elements, they have lacked the sophisticated processing and more human-like interactions that generative AI can offer.
The primary enhancement in the upcoming Alexa update is a highly conversational assistant capable of comprehending a broader range of verbal instructions, thereby reducing the need for precise terminology. This addresses a prevalent source of frustration experienced with voice assistants: the necessity to rephrase requests repeatedly, such as asking to lower the thermostat.
Amazon asserts that, in contrast to ChatGPT, which has a knowledge cutoff around late 2021 or early 2022, the Alexa large language model (LLM) delivers up-to-the-minute information, offers a more engaging conversational experience, and has lower latency than previous iterations of Alexa.
During the event, Amazon said its Alexa LLM surpasses ChatGPT as used in web browsers or on mobile devices, providing users with practical, real-world applications such as discussing recipes, suggesting travel ideas, and even composing poems.
“What makes our LLM special is it doesn’t just tell you things, it does things,” said Rohit Prasad, Amazon’s SVP and head scientist of artificial general intelligence.
With the new Alexa, users can simply say things like, “Alexa, I’m feeling chilly,” and the assistant will promptly adjust the temperature on their linked thermostat. Likewise, users can instruct, “Alexa, transform this room to match the Seahawks colors,” and Alexa will identify both the room the user is in and the specific colors associated with the Seahawks.
The pivotal factor is the application programming interfaces (APIs).
Amazon said it has integrated a collection of more than 200 smart home APIs into its LLM. This data, combined with Alexa’s knowledge of the devices in a user’s home and their location within a room, as determined by the Echo speaker in use, gives Alexa the contextual awareness required to manage a smart home proactively and smoothly.
This contextual understanding goes beyond recognizing the user’s desire to control other connected devices. It includes the capability to infer changes in the home environment. For example, when a user introduces a new device into their home, they can simply say, “Alexa, activate the new light,” and the system will identify the newly added light source, eliminating any ambiguity.
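To make the mechanics concrete, here is a minimal, hypothetical sketch of how an LLM’s tool-calling layer might map a conversational utterance onto one of those smart home APIs using room and device context. The function names, schema, and routing logic are illustrative assumptions, not Amazon’s actual Alexa interface.

```python
# Hypothetical sketch: routing a structured "tool call" emitted by an LLM to a
# smart home API handler, using room context. Names and schema are illustrative,
# not Amazon's actual API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class RequestContext:
    room: str                 # inferred from which Echo speaker heard the request
    devices: Dict[str, str]   # device name -> category, known from the user's home


def set_temperature(ctx: RequestContext, delta_f: int) -> str:
    return f"Raising the {ctx.room} thermostat by {delta_f}F"


def set_light_color(ctx: RequestContext, device: str, color: str) -> str:
    return f"Setting '{device}' in the {ctx.room} to {color}"


# A small registry standing in for the "200+ smart home APIs" exposed to the model.
TOOLS: Dict[str, Callable[..., str]] = {
    "set_temperature": set_temperature,
    "set_light_color": set_light_color,
}


def dispatch(tool_call: dict, ctx: RequestContext) -> str:
    """Execute a single structured tool call produced by the language model."""
    handler = TOOLS[tool_call["name"]]
    return handler(ctx, **tool_call["arguments"])


if __name__ == "__main__":
    ctx = RequestContext(
        room="living room",
        devices={"new light": "light", "thermostat": "thermostat"},
    )
    # "Alexa, I'm feeling chilly" -> the model emits an action, not just text.
    print(dispatch({"name": "set_temperature", "arguments": {"delta_f": 2}}, ctx))
    # "Alexa, activate the new light" -> the newly added device resolves from context.
    print(dispatch({"name": "set_light_color",
                    "arguments": {"device": "new light", "color": "blue"}}, ctx))
```

The point of the sketch is the division of labor: the model interprets loose, conversational phrasing, while the registered APIs and the home/room context supply the grounding needed to act on it.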
Another feature involves Alexa’s ability to handle multiple requests simultaneously. This goes beyond the fundamental tasks it could already manage, albeit to a limited extent, like saying, “Alexa, turn off the lights and lock the door.”
At the outset, the ability to issue multiple commands will be limited to a specific set of device categories, such as lights, smart plugs, and a select few others, Limp said. Nevertheless, the development team is working to include compatibility with all device types.
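One way such compound requests could be processed is as an ordered batch of structured calls, with device categories outside the initially supported set declined gracefully. The sketch below is purely illustrative; the category names and handling are assumptions, not Amazon’s implementation.

```python
# Hypothetical sketch: executing a multi-intent utterance as an ordered batch of
# calls, limited to the device categories supported at launch (illustrative only).
from typing import Dict, List

SUPPORTED_AT_LAUNCH = {"light", "smart_plug"}   # assumed subset, per the launch note


def execute_batch(tool_calls: List[dict], device_categories: Dict[str, str]) -> List[str]:
    """Run each call in order; decline devices whose category isn't supported yet."""
    results = []
    for call in tool_calls:
        device = call["device"]
        if device_categories.get(device) not in SUPPORTED_AT_LAUNCH:
            results.append(f"'{device}' isn't supported for multi-step commands yet")
            continue
        results.append(f"{call['action']} -> {device}")
    return results


if __name__ == "__main__":
    # "Alexa, turn off the lights and lock the door"
    calls = [{"action": "turn_off", "device": "lights"},
             {"action": "lock", "device": "front door"}]
    categories = {"lights": "light", "front door": "smart_lock"}
    for line in execute_batch(calls, categories):
        print(line)
```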
The incorporation of generative AI into Amazon Alexa products has the potential to open new avenues for brands and retailers to enhance customer engagement, offer personalized experiences, and streamline operations.
As voice assistants like Alexa become more integrated into daily life, businesses that leverage generative AI technology will be better positioned to meet customer expectations and drive growth in an increasingly competitive marketplace.
For example, one of Amazon’s latest launches — the all-new Echo Frames and Carrera Smart Glasses collection — looks to seamlessly blend fashion with Alexa technology, offering users the convenience of calling on Alexa for various tasks, such as adding items to their shopping lists or controlling lighting.
Moreover, these glasses offer a range of premium features. For instance, users can play music with a simple tap or request Alexa’s assistance in locating misplaced eyewear.
There are seven options available, each offering a range of lens choices, including lenses with UV400 protection, prescription-ready options, or blue light filtering lenses. The frames also carry an IPX4 water-resistance rating. Users can expect up to six hours of media playback or talk time, or up to 14 hours of usage on a full battery charge, representing up to 40% more audio playback and 80% more talk time compared to the previous generation.
Additionally, the glasses can for the first time be charged wirelessly, using a newly introduced charging stand.