Arise's Learning Center


02.14.17 | blog
Author: Tadd Wilson

Customer Facing AI: Is it ready? Are we ready?

Courtesy of Alexa, Siri, and Watson, “AI” is the undisputed buzzword du jour in retail. While AI (artificial intelligence) is not new (the Google trend is flat, just one proxy; see below), branded AI is red-hot, and conversational commerce is buzzing. Enterprise applications and personalization could be a perfect fit. But is it ready to go customer-facing? Unsupervised? For some applications, yes; for others, hold tight.

[Image: Google Trends, Alexa vs. Siri]

Clearly, some smart people answer “yes” when it comes to their customers. Amazon’s Alexa powers Echo and likely any number of other touchpoints. Staples has tested an IBM Watson-powered Easy button. In-store robots may be transitioning from hobby to helpful. 

More prosaically, many companies have replaced their interactive voice response (IVR) systems with AI-based virtual assistants as the first line of defense for customer service (the extent of the AI varies from “LOL” to deep).

The biggest technical hurdle for customer-facing AI has been language, as voice is the chief mode of direct human-AI interaction. This hurdle exposes a key distinction: “what is said” differs from “what is meant.” Voice recognition addresses the first question, and is measured on accuracy. Natural language processing (NLP) addresses the second question, and is measured on intent.

[Image: What is said vs. what is intended]

Why does this distinction matter? Because customers, like all humans, often mean something other than what they say.

Sometimes, this is easy to identify, for a human. “How many rooms?” Hilton asks. And I say, “One for my wife and me, and one for the kids.” A human looking at my profile knows it means “One room for two, one room for four with two beds and a pullout couch.” A machine may hear me accurately, but odds on, it thinks the answer is “one,” or “one” plus a lot of extra noise that may be relevant.
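The Hilton example can be sketched in code. This is a hypothetical toy, not any real booking system: a naive slot-filler grabs the first number it hears (high transcription accuracy, wrong intent), while the guest's actual meaning is structured, two bookings with different occupancy.

```python
import re

def naive_room_count(utterance: str) -> int:
    """Naive slot-filling: treat the first number word heard as the answer."""
    numbers = {"one": 1, "two": 2, "three": 3, "four": 4}
    for word in re.findall(r"[a-z]+", utterance.lower()):
        if word in numbers:
            return numbers[word]
    return 0

utterance = "One for my wife and me, and one for the kids."

# Accurate transcription, wrong intent: the parser answers "1".
print(naive_room_count(utterance))  # -> 1

# What the guest actually meant: two rooms with different occupancy.
intended = [
    {"rooms": 1, "occupants": 2},
    {"rooms": 1, "occupants": 4, "beds": "two beds plus pullout couch"},
]
print(sum(slot["rooms"] for slot in intended))  # -> 2
```

The gap between the two print statements is exactly the accuracy-vs-intent distinction: both "hear" the same words, but only the structured reading captures what was meant.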

Sometimes, the “what’s meant” is not just a matter of clarification. “What can I help you with?” Citibank asks. And I say, “Speak to an agent.” Citibank then asks, “Please tell me why you’re calling so I can direct your call.” And I say, “Last statement.” Citibank then says, “I can help you with that. Your last statement balance was…”. While Citibank’s understanding of what I said was accurate, their understanding of what I meant (namely, “Your horrible transition from American Express at Costco and opaque fee rules have cost me time and money and made me seriously consider just not using this card. Ever. Again. SPEAK TO AN AGENT”), er, wasn’t. Ironically a human would have understood, “Wow, this guy sounds like Michael Douglas in Falling Down. Time to exercise some call control and save the day.”

Just for example.

AI isn’t perfect (Alexa and Siri like to be spoken to a certain way, and sometimes take initiative that’s well-meaning but off-base). But it’s improving (even chatbots, with huge defection rates, are getting better, fast). And the economics are, in Steve Jobs terms, insanely great. AI doesn’t call in sick, or get mad (unless you program it to), or mind that a chat is taking a long time. Scale is digital, not analog.

[Image: Siri customer service fail]

So where should AI play a customer-facing role today? And where should human empathy take the lead? Here are some suggestions.

[Image: AI-Human Customer Care Continuum]

In general, scenarios with high degrees of context OR heavy penalties for clumsy interaction require more human control (counter-example: robo-advisors for high-net-worth individuals); scenarios requiring speed, analysis, and probabilistic reasoning, OR basic information provision with low penalties for interaction issues, lend themselves to AI handling. An important note: I expect most scenarios will converge to the middle, as human-AI hybrids outperform “pure-play” rivals. The key difference will be which entity “owns” the interaction.

[Image: Graphical AI-Human Customer Care Continuum]

In some scenarios, AI could be supported by humans “behind the curtain” when intent becomes unclear – even if the customer never knows they were helped by a human in a sort of reverse Turing test.

In other scenarios, human agents interfacing via voice, chat, or video could supplement their interactions with AI-based tools (identification, age estimation, emotion detection, journey recreation, fulfillment issue resolution, etc.) without explicitly exposing those. In fact, one start-up is using AI not to contain calls but to route them to the customer service rep most likely to resolve them!
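That routing idea can be illustrated with a minimal sketch. This is purely hypothetical (the start-up's actual system is not described here): assume an upstream model has classified the caller's intent and produced, for each available rep, a predicted likelihood of resolving that intent; routing then just picks the best match instead of containing the call.

```python
def route_call(intent: str, agents: dict) -> str:
    """Route to the agent with the highest predicted resolution score
    for this intent. Agents with no score for the intent default to 0."""
    return max(agents, key=lambda name: agents[name].get(intent, 0.0))

# Stand-in scores; in practice these would come from a trained model.
agents = {
    "dana": {"billing_dispute": 0.92, "statement_lookup": 0.40},
    "lee":  {"billing_dispute": 0.55, "statement_lookup": 0.88},
}

print(route_call("billing_dispute", agents))   # -> dana
print(route_call("statement_lookup", agents))  # -> lee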

Or AI-assisted humans may become a feature, either physically or via remote means, in the same way more advanced medical technologies or selection mechanisms for high-involvement products like wine get branded. PSFK Labs’ “Future of Retail” report highlights several examples.

Second-to-last point: a key challenge to be overcome is cooperation between AIs. With a human “interaction owner,” how to use AI outputs and “adjudicate” disputes is clear, if not fast. With multiple AIs (whether within the same retailer-controlled space or across brands), ownership and ensuring continuity of CX outside well-defined circumstances remain problematic. Even AIs cooperating with humans is iffy long-term.

Last point: a fundamental constraint on the adoption of AI as not just a tool but THE customer interface is, well, the customer. And today, 83% of consumers want a human to help them resolve cross-channel care issues. Similarly high numbers will abandon a brand for being a “digital ghetto,” in other words, for not offering human support. Commerce is social.

[Image: PSFK stat]

Naturally, when we hit the Singularity, all bets are off.
