There’s never been a better time to talk about Voice.

Last year, Gartner predicted that by 2020 the average person would be having more conversations with virtual bots and assistants than with their spouse. A new report presented at the Cannes Festival in June adds some extra spice to this prediction, claiming that 26% of regular voice-tech users admit to having had a sexual fantasy about their voice assistant.

It's as if the premise of the 2013 movie Her (in which this exact scenario plays out between the lead character and a virtual assistant) has moved from science fiction to reality in just a few years.

An obvious jumping-off point for the movie was Siri, the personal assistant that Apple launched in 2011. But if you're falling in love with a voice assistant in 2017 then the object of your aural attraction is more likely to be Amazon’s Alexa.

Alexa is now the market leader for Voice and offers its users access to more than 10,000 separate skills (think of these as apps for voice) through Amazon’s Echo and Dot devices. As Amazon’s advertising puts it, Alexa offers: “information; music; news; weather; audiobooks; calendar; sports; traffic reports; shopping; connected home; and more. Just ask.”

Healthcare isn't called out specifically here – but this doesn’t mean patients aren’t asking for Voice solutions.

In the spring of 2015, when Amazon's Echo was only a few months old, carers began writing product reviews that testified to how the device was changing the lives of people with conditions such as Parkinson's, autism and Alzheimer's.

Interestingly, however, the functionality that stood out to these early adopters – such as managing reminders, or controlling home automation – wasn't designed with healthcare use-cases in mind. This points to a clear opportunity for Voice in improving patient engagement and support, and yet surprisingly few skills have been developed for condition-specific audiences or the needs of particular patient groups.

There are some exceptions, notably in diabetes. Night Scout is an open source tech project for sharing continuous glucose monitoring data with websites, smartphones and watches. It’s a testament to the ingenuity of committed patients and their carers. The Night Scout community has ported this functionality to voice (“Hey Alexa, ask Night Scout what's my blood glucose?”) but implementing this at home requires a certain level of technical competence and it’s not yet a public skill.
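For the technically curious, the mechanics behind a query like this are simpler than they might sound. What follows is a purely illustrative sketch (in Python, assuming an AWS Lambda-hosted custom skill), not the Night Scout community's own code: the skill answers a glucose question by calling a Nightscout-style REST endpoint and reading back the latest sensor value. The URL, the GetGlucoseIntent name and the sgv field are assumptions made for the example.

import json
import urllib.request

# Placeholder Nightscout-style endpoint; a real deployment would point at the
# patient's own secured site.
NIGHTSCOUT_URL = "https://example-nightscout.herokuapp.com/api/v1/entries.json?count=1"

def lambda_handler(event, context):
    """Illustrative AWS Lambda entry point for a custom Alexa skill."""
    request = event.get("request", {})
    intent_name = request.get("intent", {}).get("name")

    if request.get("type") == "IntentRequest" and intent_name == "GetGlucoseIntent":
        with urllib.request.urlopen(NIGHTSCOUT_URL, timeout=5) as resp:
            latest = json.loads(resp.read())[0]
        # 'sgv' is the sensor glucose value (mg/dL) in a Nightscout entry.
        speech = "Your latest reading is {} milligrams per decilitre.".format(latest["sgv"])
    else:
        speech = "You can ask me for your latest blood glucose reading."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

The heavy lifting – wake-word detection, speech recognition and natural-language parsing – happens on Amazon's side; the skill's backend only has to map a recognised intent to a spoken response.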

Diabetes is also where Pharma has made its first moves with Voice. Merck’s “Alexa Diabetes Challenge” offers technical and financial support to ideas that can deliver meaningful impact for people newly diagnosed with type 2 diabetes (T2D). The competition received 96 entries from 82 different countries and announced its five finalists in July [1]:

  • DiaBetty (University of Illinois): A virtual diabetes educator and at-home coach that is sensitive and responsive to a patient’s mood. It provides patients with context-dependent, mood-sensitive, and emotionally aware education and guidance, enhancing patient skills for self-management.
  • My GluCoach (HCL America, Inc.): A holistic management solution, developed in partnership with Ayogo, that blends the roles of voice-based diabetes teacher, lifestyle coach, and personal assistant to serve the individual and specific needs of the patient. It leverages health-pattern intelligence from sources such as patient conversations and wearable and medical devices.
  • PIA: Personal intelligent agents for type 2 diabetes (Ejenta): A connected-care intelligent agent that uses NASA-licensed AI technology integrated with IoT device data to encourage healthy habits, detect at-risk behaviors and abnormalities, and alert care teams.
  • Sugarpod (Wellpepper): A multimodal solution that provides specialized voice, mobile, video, and web interactions to support patient adherence to comprehensive care plans. It offers education, tips, and tracking tools, including a smart foot scanner that uses a classifier to identify potential abnormalities.
  • T2D2: Taming type 2 diabetes, together (Elliot Mitchell, Biomedical Informatics PhD student at Columbia University, and team): A virtual nutrition assistant that uses machine learning to provide in-the-moment personalized education and recommendations as well as meal planning and food and glucose logging. Its companion skill authorizes caregivers to connect with a patient’s account to easily engage from afar.


Where Pharma Can Make an Impact

Since identifying Voice as a key opportunity in our 2017 Digital Trends Report at the start of the year, we’ve been taking Voice thinking and prototypes into client workshops and focus groups – wherever we can get valuable feedback and learn more about the new expectations and behaviours that Voice drives.

For drug manufacturers and brand owners, we see three categories of patient-facing use-cases:

1. STANDARD ASSISTIVE USE-CASES.

As mentioned earlier, these capabilities — like managing reminders, or controlling home automation — come native to a Voice assistant, so there’s probably only minimal value (if any) in providing branded skills that duplicate them. There’s no need to create a skill to set a reminder or manage appointments when Echo does this by default, and can already integrate with cloud-based calendars from Google, Microsoft and Apple.

2. RESPONSIBLE PRODUCT-CENTRIC USE-CASES.

This is where pharma has the strongest right to play. Patients using a specific medication look to manufacturers to be the visible, accessible experts in how to get the best from their products.

Unfortunately, the information that patients need in order to take their medicines properly typically lives in the small print of a Patient Information Leaflet (PIL) or Package Insert (PI), couched in scientific and often scary language. As anyone who’s ever tried to engage with a PIL/PI knows, the way these documents are presented is anything but friendly. In fact, a recent report by the Academy of Medical Sciences described these leaflets as likely to make people ‘unduly anxious about taking medicines’.

Voice and other conversational interfaces (chatbots, for example) can provide valuable examples of new approaches to making this type of information available in a more timely, relevant and digestible form – approaches that may help to reassure rather than fuel patient paranoia and non-compliance.

Our first product-centric use-case, shown in the video below, takes the example of a patient needing reassurance around how to self-administer an injectable therapy.

“Alexa, ask… for injection support.”


This video demonstrates a use-case where a patient needs reassurance and guidance around a self-injection procedure.

Not only does this act as a live 'walk-through' of the procedure, with the content easily accessible exactly when it's needed; it also addresses one of the major problems we’ve found when working with users around Voice: remembering the names of multiple skills, and how to invoke them, can be challenging. Making the access instructions prominent on the packaging that a patient will already have in their hand resolves this problem.
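To make the pattern concrete, here's a minimal, hypothetical sketch of how a walk-through like this could be structured in a skill's backend (again in Python, against Alexa's request/response JSON). The current step is held in the session attributes, and each time the patient says 'next' the skill reads out the following instruction. The step wording, intent handling and names are placeholders rather than content from any real leaflet.

# Placeholder step wording – not content from any real Patient Information Leaflet.
INJECTION_STEPS = [
    "Step one: wash your hands and lay out the pen, an alcohol wipe and a sharps bin.",
    "Step two: check the liquid in the window is clear and the pen is in date.",
    "Step three: clean the injection site with the wipe and let the skin dry.",
    "Step four: press the pen firmly against the skin and hold until you hear the click.",
    "Step five: dispose of the pen in the sharps bin.",
]

def handle_next_step(event, context):
    """Illustrative handler: a full skill would route LaunchRequest and other
    intents before reaching this hypothetical 'NextStepIntent' handler."""
    attributes = event.get("session", {}).get("attributes") or {}
    step = int(attributes.get("step", 0))

    if step < len(INJECTION_STEPS):
        speech = INJECTION_STEPS[step] + " Say 'next' when you're ready to carry on."
        attributes["step"] = step + 1
        end_session = False
    else:
        speech = "That's every step covered. Well done."
        end_session = True

    return {
        "version": "1.0",
        "sessionAttributes": attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }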

Our second product-centric use-case goes beyond simple "call and response" and looks at how to integrate manufacturer content with Alexa’s ability to dial a pre-set support number.

This way, you're not just breaking PIL material down into more digestible, bite-size chunks; you're also providing triage to professional support for those patients who may need it.

“Alexa, ask… about a missed injection.”
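Behind a response like this, the skill's job is mostly to pair pre-approved wording with a clear route to a human. The sketch below is again hypothetical: it answers the missed-dose question and surfaces the support number both aloud and as a card in the Alexa app. The wording, handler name and phone number are placeholders, and the hand-off to Alexa's own calling features is deliberately left out, since how that hand-off works varies by platform and region.

SUPPORT_LINE = "0800 000 0000"  # placeholder patient support number

# Placeholder wording – a real skill would use the approved missed-dose text
# for the specific product.
MISSED_DOSE_GUIDANCE = (
    "Here's what your patient leaflet says about a missed injection. "
    "If you're still unsure, the nurse support line can talk you through what to do next."
)

def handle_missed_injection(event, context):
    speech = MISSED_DOSE_GUIDANCE + " I've sent the number to your Alexa app."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "card": {
                "type": "Simple",
                "title": "Nurse support line",
                "content": "Call {} to speak to a nurse about a missed injection.".format(SUPPORT_LINE),
            },
            "shouldEndSession": True,
        },
    }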


3. PROPERLY PATIENT-CENTRIC USE-CASES.

The winning use-cases for pharma with Voice, however, might not be about the product at all, but about the patient: their progress; their personal response to their medications; and their outcomes on treatment.

It’s important to remember that voice interfaces aren’t just a point solution. Voice will offer a valuable interface into a wider ecosystem connected to a broader, deeper set of data and services – a "personal area network" for an individual’s health.

Inevitably, these services will be managed via artificially intelligent agents, and often in combination with other inputs and other devices such as wearables. For example, when movement sensors around the home detect unusual patterns, matched by unusual signals from a wristband tracking vital signs, Voice becomes the mechanism by which a patient can be notified that something may be wrong, and also becomes the channel to offer solutions and support.
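At its simplest, the rule such an agent might apply before speaking up can be expressed in a few lines. The sketch below is purely illustrative: it only queues a spoken check-in when an unusual movement pattern and an unusual wearable signal coincide. The thresholds and the notify_by_voice() hook are hypothetical stand-ins for whatever proactive-notification mechanism a given platform provides.

# A purely illustrative rule, not a product: speak up only when two independent
# signals look unusual at the same time. Thresholds and notify_by_voice() are hypothetical.
def should_check_in(movement_events_last_hour: int, resting_heart_rate: int) -> bool:
    unusual_movement = movement_events_last_hour == 0   # e.g. no motion detected all morning
    unusual_vitals = resting_heart_rate > 110           # e.g. well above this patient's baseline
    return unusual_movement and unusual_vitals

def notify_by_voice(message: str) -> None:
    # Hypothetical hook: in practice this would use whatever proactive
    # notification mechanism the voice platform offers.
    print("[voice notification] " + message)

if should_check_in(movement_events_last_hour=0, resting_heart_rate=118):
    notify_by_voice("I've noticed you've been less active than usual this morning. "
                    "How are you feeling?")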

This all ties into the potential for providing patients with personal, intelligent agents that can act as "virtual nursing assistants". This is a market currently estimated to be worth $20bn over the next 10 years.

We can’t claim to be all the way there with the prototyping we've been doing in this area, but we've already been thinking about conversation as a way to elicit patient-reported outcome measures (PROMs): in the moment, in natural language, and without the need to fill out forms.

What you'll see below explores how this could work using a standard outcomes scale for Rheumatoid Arthritis – a condition where using a standard mouse and keyboard can often be difficult, and where Voice may present greater convenience.

“Alexa, ask… to take my daily feedback.”
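As a sketch of how the underlying skill might capture a score, the example below asks for a zero-to-ten rating (loosely modelled on the patient global assessment item used in several RA outcome measures), reads it from a slot, and records it with a timestamp. The slot and intent names are illustrative, and a real skill would persist responses to a proper, consented datastore rather than a local file.

import datetime
import json

def handle_daily_feedback(event, context):
    """Illustrative handler for a hypothetical 'DailyFeedbackIntent' with a
    numeric 'score' slot."""
    intent = event.get("request", {}).get("intent", {})
    value = intent.get("slots", {}).get("score", {}).get("value")

    if value is None or not value.isdigit() or not 0 <= int(value) <= 10:
        speech = ("On a scale of zero to ten, how much has your arthritis "
                  "affected you today?")
        end_session = False  # keep the session open and wait for a score
    else:
        record = {"score": int(value),
                  "recorded_at": datetime.datetime.utcnow().isoformat()}
        # Stand-in for a real datastore (and for real consent and privacy controls).
        with open("/tmp/daily_feedback.jsonl", "a") as log:
            log.write(json.dumps(record) + "\n")
        speech = "Thanks, I've recorded a score of {} for today.".format(value)
        end_session = True

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }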


“Alexa, How Can I Get Started?”

If you’re inspired to explore Voice for your healthcare brand or services, then the right time could be now. But there are some watch-outs you should be aware of before you get started.

  • Institutional security standards: Amazon devices won’t play nicely (or indeed at all right now) with WPA2-Enterprise Wi-Fi security standards. This means that any potential use-cases you might be seeing for institutional uses of Voice – for example, in hospitals, clinics or care homes – may need to wait a while.
  • HIPAA compliance: Current devices don’t play nicely with HIPAA either. While we'll certainly see a future where HCPs have a virtual Voice assistant to take contextual notes during daily activities, or to fill in the painful fields of EHRs, we'll need to wait until the platform owners address this. (And they will – Amazon's B2B offerings go from strength to strength).
  • Voice isn’t universal (yet): William Gibson’s observation on the future – “it’s already here, it’s just not evenly distributed” – is as true for Voice as it was for mobile ten years ago, or for the web in the 90s. With this in mind, don’t assume that there’s a significant install base of Amazon or other devices outside of the US (Echo is currently only available in the US, UK and Germany).
  • Voice shouldn’t be everyone’s priority: Likewise, don’t assume that all groups of patients have equal needs from Voice. For many manufacturers, a higher digital priority will remain improving the quality and accessibility of the content and services they already offer through the web or mobile.


Ultimately, Healthcare Must Be “Human”

Above all, remember: when it comes to voice technology, getting the tech side of things sorted is likely to be the easy part. It's getting the 'voice' right that's going to prove most challenging to pharma. And, as the current state of PILs (and most patient-facing content) demonstrates, a natural, human, encouraging and supportive voice doesn't necessarily come naturally to the industry.

When we fall in love with the voices we talk with, it's not just because of what they say but because of how they say it: their choice of words; their tone; and their personality.

There's never been a better time to find your own brand's voice.

Just ask.

Disclaimer: INC Research/inVentiv Health is not associated with any of the companies or products mentioned in this article.

[1] http://www.alexadiabeteschallenge.com/announcing-finalists-five-solutions-show-voice-techs-potential-diabetes-care/