I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow
Chatbots have good, potentially great, use cases in the Health and Human Services (HHS) domain. Agencies are keen to benefit from their potential: 53% of respondents to the 2019 State of the State Health and Human Services (HHS) Technology Programs Thought Leaders Survey expressed an interest in chatbots and other digital assistants. But the recent explosion in chatbot interest exposes a perennial trap. We must be wary of selecting a technology because it is new and shiny; we must be sure that it is the most effective solution for the problem we are trying to solve.
Why does the choice of technology matter so much? Because it is the users who will, as always, have to live with bad technology choices. For HHS agencies that want to reduce caseworker attrition and engage successfully with citizens, getting it wrong means caseworkers leave the agency and citizens won’t use the engagement channel. So when should we consider a chatbot for HHS use cases? Only after extensive market validation and user research; we must not hammer a solution to fit a fashionable technology.
We should ensure that a chatbot is the right solution for the problem we want to solve as research suggests the following limitations and issues:
- Users can become frustrated, for example, if the chatbot doesn’t understand them.
- Users often abandon a chatbot interaction if the chatbot doesn’t recover from an unexpected input.
- Users may miss important information, as the chatbot’s answers are based only on the questions asked of it.
- Chatbot interfaces are limited and cannot handle complexity.
- Users have to make the effort to start the conversation and work out what the chatbot will understand.
- Chatbots should only be used if they are superior to other options, for example online documentation, contextual support, or wizards.
To help you avoid choosing a chatbot when other solutions are more appropriate, IBM Design suggests asking the following test questions when considering a chatbot:
1. What is the user’s goal? Can it be accomplished more efficiently using traditional user interfaces?
2. Is the process very complex, or could it take a long time?
3. Is the topic better suited to a human interaction, for example because it is sensitive or emotive?
4. Will the user need in-depth assistance to achieve the goal?
If the answer to any of the questions is yes, then a chatbot is not the most appropriate technology.
Let’s take a look at some real-world examples. Recently, I attended a session on chatbots that have been implemented by HHS agencies. One demo showed how a conversational interface was being used by administrative staff to associate one individual with another. The process was as follows; remember that once the user signed in, all the information was entered into the chatbot UI:
- Sign in
- Enter your email address
- Enter Person A’s first name
- Enter Person A’s last name
- Enter Person A’s date of birth
- Enter Person A’s social security number
- Enter Person B’s first name
- …etc… through step 18
At step 18, the user is presented with a summary of the details entered and asked to submit. After submitting, the chatbot explained that data was being processed “in the background” and that “some time later” the user would receive an email indicating whether the process was successful.
The presenter said that this chatbot had been very successful, citing a 90%+ success rate. Or, to put it another way, one in ten transactions failed.
For me, this chatbot fails tests 1 and 2 above. In this case, a structured data entry form or wizard would allow real-time data validation, for example checking that a year has 4 numeric characters, and shortcuts like “search and select” boxes that save users from manually entering all the information. And while there’s no guidance on how long is “too long” when it comes to processes, 19 steps feels excessive; will a user be able to re-orient themselves in the process if interrupted?
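The real-time validation point can be illustrated with a minimal sketch. This is not the agency’s actual system; the field names and rules below are hypothetical, chosen only to show the kind of per-field check a structured form can run the moment a value is entered, instead of after an 18-step conversation:

```python
import re

# Hypothetical per-field validation rules (illustrative only):
# each field maps to a regex pattern and an error message.
RULES = {
    "birth_year": (r"^\d{4}$", "Year must be 4 numeric characters"),
    "ssn": (r"^\d{3}-\d{2}-\d{4}$", "SSN must look like 123-45-6789"),
}

def validate(field, value):
    """Return an error message for the field, or None if the value is valid."""
    pattern, message = RULES[field]
    return None if re.fullmatch(pattern, value.strip()) else message

# A form can flag these errors as soon as a field loses focus.
print(validate("birth_year", "199"))    # rejected: only 3 digits
print(validate("ssn", "123-45-6789"))   # accepted
```

A chatbot, by contrast, typically accepts free text and can only report problems after the fact, which is part of why a tenth of these transactions failed.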
At the demonstration, I also saw chatbot examples that met the criteria for good chatbots. One provided temporary caseworkers, hired to deal with seasonal peaks in demand, with guidance on how to complete common processes, for example, “How do I process a change in income?”. Another showed how a support ticket could be raised with the IT support team by answering five or six questions. Another answered citizens’ frequently asked questions. All of these had the potential to enhance the user experience of their respective systems: they had a clear use case, and were relatively simple, linear experiences whose effectiveness, importantly, could be measured easily. These examples show that chatbots work well for simple processes, and as internal aids to augment human knowledge, for example recommending services or next steps.
To finish up, another quote from IBM Design that summarizes the case for chatbots:
“Chatbots can work really well when use cases are simple and clearly defined and the experiences are linear and well constructed. However, we need to remember that chatbots are not always the right solution and we should not be using them just because the technology exists.”
– IBM Design for AI, 2019