"If it's an easy thing, a chatbot is perfect. But once things get complicated, I prefer talking to a human."
This is a statement I'm sure you've read, heard, or maybe even thought yourself. It's as obvious as it is deceptive, though, because it doesn't really tell us anything. Imagine you're looking for ways to get more people to engage with your chatbot: what would you do if that statement was the only guidance you had?
If this dilemma sounds familiar, rest assured: you're not the only one. In fact, even within academia, the reasons people choose not to engage with chatbots remain fuzzy. This is why we collaborated with Asbjørn Følstad from SINTEF and Effie Law from Durham University to investigate it a little more closely: what do people actually mean when they say "easy" or "complicated"?
What's in a task?
To get an answer to that, we randomly picked 60 intents from our own internal banking module and asked 84 people to evaluate each one. Participants rated each task on its risk, required personalisation, trust sensitivity and complexity - and indicated how likely they would be to use a chatbot to perform the task.
So what's in a task? Well, let's cut to the chase: the higher the perceived risk, the more personalisation required, and the more trust-sensitive the task, the less likely people are to use a chatbot to complete it.
Interestingly, complexity had surprisingly little to do with it! Even though people commonly cite "easy" and "difficult" as reasons to engage with a chatbot (or not), this is perhaps not exactly what they mean. Instead, it's these more concrete aspects of the task that make them reconsider.
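To make the analysis concrete, here is a minimal sketch of how task ratings like these could be related to chatbot-use likelihood. The attribute names mirror the study above, but the tasks, the numbers, and the choice of a plain Pearson correlation are purely our illustration - not the study's actual data or method.

```python
from statistics import mean, stdev

# Hypothetical ratings for a handful of banking tasks on a 1-7 scale.
# The attribute names come from the study; every number here is invented.
tasks = {
    "check balance":    {"risk": 1, "personalisation": 2, "trust": 1, "complexity": 1, "use_chatbot": 7},
    "block lost card":  {"risk": 5, "personalisation": 4, "trust": 5, "complexity": 2, "use_chatbot": 3},
    "dispute a charge": {"risk": 6, "personalisation": 6, "trust": 6, "complexity": 5, "use_chatbot": 2},
    "order new card":   {"risk": 2, "personalisation": 3, "trust": 2, "complexity": 2, "use_chatbot": 5},
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Correlate each task attribute with willingness to use a chatbot.
likelihood = [t["use_chatbot"] for t in tasks.values()]
for attr in ("risk", "personalisation", "trust", "complexity"):
    ratings = [t[attr] for t in tasks.values()]
    print(f"{attr}: r = {pearson(ratings, likelihood):.2f}")
```

With numbers like these, risk, personalisation and trust sensitivity would come out strongly negatively correlated with chatbot use - the pattern the study describes - while in the real data complexity was the attribute that mattered least.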
Back to the drawing board
Let's go back to the scenario where you're trying to get more of your users to engage with your chatbot. With this new knowledge, you can revisit the intents that are struggling in terms of traffic and look at them in a new light.
Perhaps they are risky. There may be a lot at stake for your users on these topics, in which case you might have to offer some reassurance or explain what will happen if things go south (by bringing in a human, for example). Or perhaps they require a lot of personalised information from your users. In those cases, you may want to point out that your bot has access to this information - and that any answer they receive will be tailored to their situation.
So which of these applies to your intents? There's only one way to find out. You guessed it - conversation review! Need a refresher? Check out the course below.
Get started with conversation review
When a Virtual Agent is live, we need to gather insight into how it's performing. We do that by reviewing conversations, because it gives us direct insight into exactly what the VA is doing and how our users are interacting with it.