New approaches may finally be transforming conversational AIs into something customers can actually relate to…
Talking to a human in a contact centre rattling off scripted, robotic answers can be maddening enough at the best of times. But what customers were found to dislike even more, according to a recent YouGov survey commissioned by Webhelp, is interacting with actual machines. The survey found that 84 per cent of UK respondents would rather make a complaint to a human than to an automated service. And 71 per cent said they wouldn’t even use a brand if human customer service representatives weren’t an option.
But the days of the run-of-the-mill chatbot that keeps repeating the same question when it can’t match anything in our sentence with the keywords in its knowledge base could be numbered. The next generation of automated assistant is the intelligent virtual agent, or IVA. IVAs use methods such as natural language understanding (NLU) and natural language processing (NLP) to better understand and respond to users’ queries, and “intent detection” to more accurately identify what users want. They can also remember past conversations with individual customers, helping them stay in context. These techniques go some way towards the holy grail of actually being able to hold a natural conversation with a chatbot that sounds more or less human, and where you don’t have to chop up your sentences into smaller chunks or worry about your accent.
But we don’t speak to IVAs for a nice chat. We want them to resolve our problems quickly, and preferably with as much efficiency as their human counterparts. And IVAs have had a spotty record at this until now – Swedish digital bank Nordnet, for example, fired its digital assistant from a customer service position two years ago after her performance underwhelmed both customers and colleagues.
Measuring an IVA’s performance is essential to its technological and career advancement. However, Haptic, an AI technology company, maintains that the traditional metrics of customer satisfaction – such as net promoter scores (NPS) or customer satisfaction scores (CSAT) – don’t really give us a clear understanding of how intelligent the conversational AI actually is. Haptic’s proposition is a new metric, called intelligence satisfaction, or iSAT.
In Haptic’s framework, the intelligence of an IVA can be calculated by subtracting the number of calls with a negative outcome (customer drop-offs resulting from a negative experience) from the number of successful calls where the query has been resolved. (So-called neutral calls don’t feature in the equation, as here the customer drop-off is regarded as a consequence of low customer intent.) Certain factors – such as phrases repeated by both users and the IVA, or the frequency of swearwords during interactions – can make unsuccessful conversations easier to identify. To improve the conversational AI’s performance, the next step is to establish whether the interaction failed because the AI couldn’t find the data that could have provided a relevant answer, or because the question fell outside the scope of the virtual agent’s competence. Once this has been established, changes to the chatbot’s design can be made accordingly.
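The arithmetic described above can be sketched in a few lines of code. This is an illustrative reading of the article’s description, not Haptic’s actual implementation: the function names, data shapes, and classification thresholds are all assumptions.

```python
# Hypothetical sketch of the iSAT calculation as described in the article.
# The heuristics (swearwords, repeated phrases) and thresholds are
# illustrative assumptions, not Haptic's real method.

def classify(call):
    """Label a call 'resolved', 'negative', or 'neutral'.

    Per the article: repeated phrases and swearing signal a negative
    experience; a drop-off without those signals is treated as neutral
    (attributed to low customer intent).
    """
    if call["resolved"]:
        return "resolved"
    if call["swearwords"] > 0 or call["repeated_phrases"] > 1:
        return "negative"
    return "neutral"

def isat(calls):
    """iSAT = successful calls minus negative calls.

    Neutral calls are excluded from the equation entirely.
    """
    labels = [classify(c) for c in calls]
    return labels.count("resolved") - labels.count("negative")

calls = [
    {"resolved": True,  "swearwords": 0, "repeated_phrases": 0},
    {"resolved": False, "swearwords": 2, "repeated_phrases": 3},
    {"resolved": False, "swearwords": 0, "repeated_phrases": 0},  # neutral
    {"resolved": True,  "swearwords": 0, "repeated_phrases": 1},
]
print(isat(calls))  # 2 resolved - 1 negative = 1
```

The neutral call drops out of the score altogether, which is what distinguishes iSAT from a simple resolution rate.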
But will customers be more willing to interact with conversational AI once they realise that IVAs can get things sorted out for them quickly, 24/7, and with only a thin margin for error?
Intelligent agents obviously aren’t sentient beings, but they can exploit sentiment analysis to “understand” how humans feel, as well as influence their attitudes and behaviour. A team of Japanese scientists even found that the human brain’s predisposition to anthropomorphise – to read human emotions and intents into anything animate or inanimate – may work in conversational AIs’ favour. In their experiment, their cognitive agent appeared on the screen as a chick with human facial features that could express positive human emotions.
The team’s finding was that receiving a congruent positive response from the IVA – a well-timed smile, for example – “can overcome the effect of believing that the agent is a computer program”. By imitating the positive facial expressions of the person it converses with, AI could activate the medial prefrontal cortex of the human brain, where anthropomorphism resides. As contrived as this may sound, it could serve as a key to the human acceptance of IVAs in the long term. And given the attachment we can develop to our smartphones, and the tears we shed over the death of “Oppy” the Mars rover, the new conversational agents are definitely in with a chance of being accepted to some degree by sentimental humans.
If the choice customers need to make is between a kind and competent human and a limited chatbot, it’s a no-brainer. But if they need to choose between an overstretched and inconsiderate individual, and an artificial mastermind capable of something resembling empathy that perhaps tries to assuage the distraught customer with a special offer – and that can simultaneously authenticate their identity through their voice or mouse movements – consumers could well be nudged into finally accepting these robot assistants for what, if not necessarily who, they are.
By Zita Goldman