For most people, absently swiping through an Instagram feed, flipping a light switch or going to a doctor’s office doesn’t register as an important event.
But for people with mobility issues, visual impairments or a host of medical concerns, those activities aren’t always simple, and can form huge quality-of-life roadblocks. For those customers, the advent of voice-controlled navigation aided by AI-powered voice assistants is about a lot more than adding convenience or smoothing out irritating frictions.
Instead, the technologies, even in their earliest development phases, look to have the potential to bulldoze those roadblocks for millions of people nationwide.
The challenge for the larger players in the voice assistant ecosystem is in meeting that demand – but it is a challenge they seem enthusiastic to meet, particularly in the context of a larger move into the health and wellness sector as a whole.
Making Devices Accessible
The biggest news this week when it comes to developing this specialty segment of the voice ecosystem comes out of the Google Accessibility team, which is working directly with members of the disability community to develop Voice Access.
Tied directly to Google Assistant, the accessibility service is designed to make it easier for users to navigate a wider range of tasks via voice command instead of manual actions. It accomplishes that by essentially letting users translate a button push, page scroll or item selection into a voice command that Google Assistant can easily follow.
The navigability is achieved by giving each on-screen element a numeric label, so users can issue commands such as “Click 8” or “Open 12.”
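For readers curious what that looks like under the hood, the sketch below is a minimal, hypothetical illustration – written in Python, and not Google’s actual implementation – of the numbered-overlay idea: assign each actionable element a number, then map spoken phrases like “click 8” to the matching element. Every class and function name here is invented for the example.

```python
# Hypothetical sketch of a numbered-overlay voice navigator.
# Class and function names are invented for illustration; this is not Google's code.
import re
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ScreenElement:
    label: str                  # human-readable name, e.g. "Send button"
    action: Callable[[], None]  # what tapping the element would do


class VoiceNavigator:
    def __init__(self) -> None:
        self._targets: Dict[int, ScreenElement] = {}

    def scan_screen(self, elements: List[ScreenElement]) -> Dict[int, str]:
        """Assign a number to each actionable element, as the overlay does."""
        self._targets = {i + 1: el for i, el in enumerate(elements)}
        return {n: el.label for n, el in self._targets.items()}

    def handle_command(self, utterance: str) -> str:
        """Map phrases like 'click 8' or 'open 12' to the numbered element."""
        match = re.search(r"\b(?:click|open|tap)\s+(\d+)\b", utterance.lower())
        if not match:
            return "Command not recognized."
        number = int(match.group(1))
        element = self._targets.get(number)
        if element is None:
            return f"No element numbered {number} on screen."
        element.action()
        return f"Activated {element.label}."


# Example: two on-screen elements receive the numbers 1 and 2.
nav = VoiceNavigator()
nav.scan_screen([
    ScreenElement("Compose email", lambda: print("opening composer...")),
    ScreenElement("Send button", lambda: print("sending...")),
])
print(nav.handle_command("click 2"))  # -> sending... / Activated Send button.
```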
The app, according to early reviews, needs some refining and works far better in some applications than others. The program starts and stops easily – displaying the numeric targets on-screen only when the feature is invoked by voice – and the numbers disappear when a user touches the screen, staying gone until Google Assistant is woken again.
“After using this product for probably about 10 seconds, I think I’m falling in love with it,” Voice Access early tester Stefanie Putnam noted. “You use your voice and you’re able to access the world. It has become a huge staple in my life.”
Putnam is a quadriplegic and a para-equestrian driver who had previously found tasks like taking photos, sending texts and composing emails “daunting.” According to Google, the service also has particularly strong applications for “individuals with Parkinson’s disease, multiple sclerosis, arthritis, spinal cord injury and more … Voice Access can also provide value to people who don’t have a disability — people juggling with groceries or in the middle of cooking.”
And while Google this week was working to build access to devices for patients (and everyone else), Amazon was also focusing on access, but of a different kind: the company is hoping Alexa can help more patients get to the doctor, or make it easier to use technology to bring the doctor to the patient.
Connecting Patients to Care
Americans, on average, do not go to the doctor as often as they ought to, and according to reports, that tendency to skip needed care costs the U.S. economy more than $150 billion per year.
Now, however, Amazon’s Alexa smart assistant is integrating with MediSprout, a health communications startup, to launch an appointment-scheduling platform.
The platform works by tapping into MediSprout’s V2MD telehealth platform, which will now allow patients to access physicians’ calendars, book appointments and add those appointments to both their own and their doctors’ calendars via Alexa.
The system is also designed to make it easier to schedule follow-up appointments and prescription refills, as well as give patients a way to ask their doctor general health questions directly.
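To make the booking flow concrete, here is a rough, hypothetical sketch of how a voice request might resolve against a physician’s open slots. It is not the actual MediSprout V2MD or Alexa integration – the class names, the sample data and the dates are all invented for illustration, and the voice layer is assumed to have already turned the spoken request into a date and time.

```python
# Hypothetical sketch of a voice-driven appointment booking flow.
# Not the MediSprout/Alexa API; all names and data here are illustrative.
from datetime import datetime
from typing import Dict, List, Optional


class PhysicianCalendar:
    def __init__(self, available_slots: List[datetime]) -> None:
        self.available_slots = sorted(available_slots)
        self.booked: Dict[datetime, str] = {}

    def next_open_slot(self, after: datetime) -> Optional[datetime]:
        """Return the first unbooked slot at or after the requested time."""
        for slot in self.available_slots:
            if slot >= after and slot not in self.booked:
                return slot
        return None

    def book(self, slot: datetime, patient: str) -> None:
        self.booked[slot] = patient


def handle_booking_request(calendar: PhysicianCalendar, patient: str,
                           requested: datetime) -> str:
    """What a 'book me a follow-up' voice intent might resolve to."""
    slot = calendar.next_open_slot(requested)
    if slot is None:
        return "Sorry, there are no openings after that time."
    calendar.book(slot, patient)
    # In a real integration, the appointment would also be pushed to the
    # patient's own calendar and confirmed aloud by the assistant.
    return f"Booked {patient} on {slot:%A, %B %d at %I:%M %p}."


# Example: the voice layer has resolved "Tuesday afternoon" to a datetime.
cal = PhysicianCalendar([datetime(2018, 6, 19, 14, 0), datetime(2018, 6, 19, 15, 30)])
print(handle_booking_request(cal, "Alex", datetime(2018, 6, 19, 14, 30)))
```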
“Major medical institutions and physicians are rapidly embracing advancements in telehealth that improve the quality of care by providing greater connections and flexibility in visits,” said Samant Virk, MD, founder and CEO of MediSprout. “V2MD by MediSprout is easy to implement on the physician side and widely adopted on the patient side, because it is simple to use and opens up a host of possibilities for different ways for patients to access care and for physicians to provide care going forward.”
Among those possibilities, according to Virk, is using Alexa devices with integrated cameras and screens as tools to let patients hold video appointments with their doctors – though he did not say when that option would be available to the general public.
This move follows rumors that emerged earlier this year that as part of its larger ambitions to disrupt the healthcare market, Amazon has built a team to focus on making its Alexa voice assistant more useful in the healthcare field.
According to an internal document obtained by CNBC, the division is called “health & wellness,” includes over a dozen people and is being led by Amazon veteran Rachel Jiang. Sources say that a key task of the team will be working through regulations and data privacy requirements laid out by HIPAA (the Health Insurance Portability and Accountability Act). The group is focusing on areas like diabetes management, care for mothers and infants and aging.
Amazon, thus far, has offered no official comments on those plans.
But earlier this year, the company announced a partnership with Berkshire Hathaway and JPMorgan Chase to create an independent company that will aim to fix the nation’s healthcare system, and last year it announced a partnership with drug manufacturer Merck on a competition in which developers built Alexa “skills” to help people with diabetes manage their care. There have also been reports that the eCommerce giant is looking to get into pharmaceuticals.
And beyond what both firms have been announcing individually of late, they have also been making coordinated moves that suggest they intend to make sure voice has a part to play in the future of healthcare.
Joint Efforts
Last week, Amazon’s Alexa Fund and the Google Assistant Investment Program both announced investments in the same firm: a voice assistant developed specifically for patient care called Aiva.
Aiva specializes in making hands-free communication between patients and caregivers more seamless in two ways. It is designed to get to know its users and to meet certain types of requests for things like entertainment, information or calendar reminders. In that regard, it functions very much like Alexa or Google Assistant on a Google Home device.
Aiva is also programmed, however, to move more complex requests from a user directly to a care provider. Caregivers, meanwhile, can get alerts and requests sent directly to their phones, and can use the app to communicate with the patient.
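That triage pattern is simple to sketch. The Python below is a hypothetical illustration of the idea – not Aiva’s actual logic – in which routine requests are answered on the spot and anything else is escalated to a caregiver; the keyword list and the notification stub are invented for the example.

```python
# Hypothetical sketch of patient-request triage: routine requests are handled
# like any consumer assistant query, everything else is escalated to a caregiver.
# The keyword list and notify_caregiver stub are illustrative, not Aiva's logic.
ROUTINE_KEYWORDS = {"music", "weather", "reminder", "tv", "news"}


def notify_caregiver(room: str, message: str) -> None:
    """Stand-in for a push notification to the on-duty caregiver's phone."""
    print(f"[ALERT to caregiver] Room {room}: {message}")


def handle_patient_request(room: str, utterance: str) -> str:
    words = set(utterance.lower().split())
    if words & ROUTINE_KEYWORDS:
        # Entertainment, information or reminder requests are handled directly.
        return f"Okay, handling that for you: '{utterance}'"
    # More complex requests go straight to a human caregiver.
    notify_caregiver(room, utterance)
    return "I've let your care team know. Someone will be with you shortly."


print(handle_patient_request("214", "play some music"))
print(handle_patient_request("214", "I need help getting to the bathroom"))
```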
Originally designed for use in in-patient care facilities, the system now integrates with both Amazon Echo and Google Home devices, and has begun expanding into patients’ homes and doctors’ offices.
The specific amounts of the investments have not been disclosed. While this is the first time the Google Assistant Investment Program has invested in healthcare, GV (formerly Google Ventures) is a long-time investor in the space.
“Google Assistant is already giving millions of people easier access to information, more entertainment options and better control over their environment,” Ilya Gelfenbeyn, lead of Google Assistant Investments, said in a statement. “It’s exciting to see Aiva pushing the technology even further, using voice to improve vital human interactions, like caregiving.”
And increasing numbers of consumers are catching on to all the ways voice technology can make their lives easier – at work, at home and at all the various touchpoints in between. More quietly, though, voice technology is also doing even bigger things for some consumers and patients – both by making it easier to access outside care when they need it and by giving them a better toolbox for self-care.
It is not quite as eye-catching, but it is probably necessary, considering that mobility problems are the leading cause of disability among citizens over the age of 65, according to the Census Bureau. The same survey also noted that about 40 percent of that demographic reports having at least one disability – and over the next few decades, much more of the consumer population will be over 65. Today, there are about 46 million people in that demographic; by the year 2060, that figure will more than double to over 98 million. People in that age group will make up 24 percent of the population, up from the 15 percent they represent today.
Accessibility – and the ability to use a mobile device or voice-activated speaker – as part of one’s healthcare management system may be a niche segment of the voice ecosystem today. But as use cases keep proliferating, it is beginning to look like voice-centric health and wellness might someday stand apart as a competitive ecosystem all its own.
And that someday might be sooner rather than later.