When we review conversations, we tend to look at them as a back-and-forth between our virtual agent (VA) and the end user. Was that prediction correct? Which answers did the user find useful, and which ones less so? What we may overlook with this one-by-one conversational approach is the bigger picture: how are our users actually navigating our flows? Do they finish them, or do they drop off? If they drop off, where do we see them losing interest, and where do they go when they skip?
These kinds of questions are typically answered by an approach called process mining, a method that has been applied to chatbot conversations in this paper by Daniel Schloss and Ullrich Gnewuch from the Karlsruhe Institute of Technology. In the image below you see the output of such a method, which shows the "journey" users take through the conversations.
Creating graphs that look like this requires some technical skills and/or software - but the underlying idea is definitely one to take home. Do you know whether your end users finish all of your conversation flows, and if not, where they typically stop (or give up)?
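To make the idea concrete, here is a minimal sketch of the core of such a process-mining graph: counting how often users move from one flow step to the next. The conversation logs and step names below are invented for illustration; the real export format of your platform will differ.

```python
from collections import Counter

# Hypothetical conversation logs: each entry is the ordered list of
# flow steps one user visited. "end" means the flow was completed;
# a list without "end" is a drop-off.
conversations = [
    ["greeting", "ask_topic", "billing", "solution", "end"],
    ["greeting", "ask_topic", "billing", "solution", "end"],
    ["greeting", "ask_topic", "billing"],             # dropped off
    ["greeting", "ask_topic", "tech_support"],        # dropped off
    ["greeting", "ask_topic", "tech_support", "end"],
]

# Count transitions between consecutive steps -- these counts are
# the edges (and edge weights) of a process-mining graph.
transitions = Counter()
for steps in conversations:
    for a, b in zip(steps, steps[1:]):
        transitions[(a, b)] += 1

for (a, b), n in sorted(transitions.items()):
    print(f"{a} -> {b}: {n}")
```

Feeding these transition counts into any graph-drawing tool (Graphviz, for example) already gives a rough version of the journey diagram from the paper.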
So what should you do now?
Consider tagging the conversations where you see drop-offs occur, and take some time to analyse them. Do you see any recurring patterns? How much effort (time, clicks, words to read) do your users have to exert before dropping off, and can you change that?
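Once drop-off conversations are tagged, a quick tally of where users stop can reveal which step deserves attention first. This is a minimal sketch with made-up data: each entry is the last step a user reached, with "end" marking a completed flow.

```python
from collections import Counter

# Hypothetical data: the last step each user reached. "end" means
# the flow was completed; anything else is the drop-off point.
last_steps = ["end", "end", "billing", "tech_support", "billing",
              "end", "ask_topic", "billing"]

total = len(last_steps)
drop_offs = Counter(s for s in last_steps if s != "end")
completed = total - sum(drop_offs.values())

print(f"completion rate: {completed / total:.0%}")
for step, n in drop_offs.most_common():
    print(f"dropped at {step}: {n} ({n / total:.0%})")
```

Even this simple count answers the first question above: in the toy data, "billing" is where most users give up, so that is the step to inspect for excess effort.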
From version 11.4, there is also a great tool for measuring drop-off: with A/B tests, it becomes possible to investigate how you can influence it! We had a great AI Trainer session on the tool, so if you missed it (or need a rewatch), you can find it here or through the link below!
A/B testing, a look at new tools
One of the biggest new features introduced in v11.4 of the admin panel is the A/B testing action. It allows us to try out new flow designs in a controlled way and to gather data on which one performs best. This has major implications for how we can design and develop our action flows.