In a world of accelerating technological advances and continuous disruption, enterprises across industries are embracing promising technologies like cognitive computing to maintain their leadership and competitive advantage. According to a recent IBM study drawing on insights from 6,000 senior executives, 73% of global CEOs say cognitive computing will play an important role in the future of their organizations, and 50% of those surveyed plan to adopt cognitive computing by 2019.

Key Cognitive Patterns and Initiatives for Enterprises

Working closely with organizations embracing and adopting cognitive technologies, we’ve identified several cognitive patterns pursued by most enterprises across industries. Some of these cognitive initiatives include:

  • Conversational Agents
  • Expert Advisors
  • Omni-channel Engagement / Consumer Advisors
  • Personalization, Segmentation, Targeted Marketing
  • Customer Insights, Voice of the Customer, Campaign Analytics
  • Discovery, Insights
  • Content Enrichment
  • Robotics

Some of these patterns, namely conversational agents, expert advisors, and consumer advisors, involve developing a question/answer solution; the main differences lie in the domain knowledge, the types of questions asked, and the kinds of answers expected. To better understand the differences between these patterns and their corresponding question/answer solutions, consider developing an assistant that answers questions about a certain disease. For the consumer advisor pattern, the end-user persona is a patient, so most of the questions asked would fall into the Frequently Asked Questions (FAQ) category. The expert advisor pattern, on the other hand, addresses the persona of a doctor or medical professional, so most of the questions asked would be more detailed and require a search capability across a large corpus of data.

In the rest of this blog, we offer a methodology for developing such question/answer solutions using Watson Conversation Service (WCS) and Watson Discovery Service (WDS). As illustrated in Figure 1, WCS is a cognitive service offered on the IBM Cloud that sits at the center of user engagement and leverages the following components:

  • Intents: Understand what the user means based on what they write or say.
  • Entities: Terms (words or phrases) in the user input that provide clarification or context for an intent.
  • Context variables: Used to communicate useful information between WCS and the application/orchestrator.
  • Dialog: A dialog tree that orchestrates the interaction with the user based on intents, entities, and context variables.
Figure 1: Watson Conversation Service at the center of user engagement
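A WCS message response surfaces these components as JSON. The sketch below shows a hypothetical response and a small helper that reads the top intent; the field names (intents, entities, context, output) follow the service's response format, but all values are illustrative:

```python
# Hypothetical shape of a WCS message response. Field names mirror the
# service's documented JSON; the values here are invented for illustration.
wcs_response = {
    "intents": [{"intent": "symptom_question", "confidence": 0.92}],
    "entities": [{"entity": "condition", "value": "diabetes"}],
    "context": {"conversation_id": "abc-123"},
    "output": {"text": ["Common symptoms include increased thirst."]},
}

def top_intent(response, threshold=0.5):
    """Return the highest-confidence intent, or None if below the threshold."""
    intents = response.get("intents", [])
    if intents and intents[0]["confidence"] >= threshold:
        return intents[0]["intent"]
    return None

print(top_intent(wcs_response))  # symptom_question
```

The confidence threshold is a design choice; it also becomes the natural trigger for falling back to a search-based answer, as discussed later in this blog.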

Watson Discovery Service (WDS), another cognitive service offered on the IBM Cloud, is an integrated, automated set of APIs that enable developers to extract insights from large amounts of structured and unstructured data. As shown in Figure 2, WDS ingests, enriches, and indexes massive amounts of data from a variety of sources and offers a powerful query language as well as a natural language query capability to return contextualized, ranked answers at scale.

Figure 2: Watson Discovery Service enables developers to extract insights from large amounts of data

We present this methodology with a focus on Watson Conversation Service (WCS) and Watson Discovery Service (WDS). However, the methodology also applies when using other services that enable chatbot functionality, such as Amazon Lex, Microsoft LUIS and Bot Framework, and Google's conversational offerings, together with an enterprise search capability such as Microsoft QnA, Microsoft Azure Search API, or Google Search API.

Methodology for Developing Question Answer Solutions

Depending on your application, Watson Conversation or Watson Discovery may be better suited to your needs, and in many scenarios we find that a combination of WCS and WDS is needed to address the full range of questions within the scope of the application. Figure 3 outlines a recommended methodology for developing question/answer solutions using WCS and WDS. The methodology is an iterative process involving the following steps:

1.)  Collect realistic end-user questions. This is an extremely important step, as any cognitive system is only as good as the data feeding it. The literature on cognitive systems consistently recommends "representative" training data, meaning data similar to what the system will see once it is running in production. For example, if the cognitive solution is designed to answer patients' questions about a medical condition, you don't want to train the system on doctors' questions about that condition. Ideally, a database of representative questions already exists in the form of chat system or call center records. If not, one of the following approaches is recommended:

  • Crowd-source the collection of representative data. With this approach, it is important to target a representative sample of end users.
  • Release the solution in stages, with every release introducing a controlled number of new users who interact with the system and provide questions that can improve its training.

2.)  Identify and collect a corpus of documents that contains the knowledge needed to answer end-user questions. The content collection and pre-processing should yield answers consistent with the solution requirements and desired user experience. For example, if answers are delivered over a predominantly text channel (e.g., SMS or Twitter), you might choose shorter, more concise documents. If answers are delivered through a richer user interface (e.g., a web or native mobile app), you might select somewhat longer documents, possibly including non-text information such as images. Note that the requirements and decisions on content types will also drive the use of other components to store the content/answers:

  • Storing answers directly in Watson Conversation and/or Watson Discovery.
  • Storing answers in a separate answer store (object store or other database).
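With either option, the application needs a way to resolve an answer identifier into content appropriate for the delivery channel. Below is a minimal sketch, using an in-memory dict as a stand-in for a separate answer store; the store contents, document ids, and channel names are all hypothetical:

```python
# Stand-in for an external answer store (object store or database).
# In production this would be a service client rather than a dict.
ANSWER_STORE = {
    "doc-001": {"text": "Take with food.", "image": None},
    "doc-002": {"text": "See the dosage chart.", "image": "chart.png"},
}

def fetch_answer(doc_id, channel="sms"):
    """Return answer content suited to the delivery channel."""
    record = ANSWER_STORE.get(doc_id)
    if record is None:
        return None
    if channel == "sms":
        # Text-only channels receive the concise text form.
        return record["text"]
    return record  # richer channels (web, mobile) can also render images

print(fetch_answer("doc-002", channel="web"))
```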

3.)  Cluster the questions into groups where similar questions map to the same intent (meaning).

4.)  Plot the frequency of the identified intents. Our experience suggests you will notice a trend, as shown in Figure 4, where a large number of questions express a small number of intents (<100). These are typically the FAQ-style questions (blue background), where users express the same intent in many different ways and ask the question frequently. The remaining questions (green background) are referred to as the long-tail questions.
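Steps 3 and 4 amount to grouping labeled questions by intent and counting. The sketch below illustrates this with toy data; the questions, intent labels, and the head/tail threshold are all invented for illustration:

```python
from collections import Counter

# Hypothetical questions already clustered by intent (step 3).
labeled = [
    ("what are the symptoms", "symptoms"),
    ("how do I know if I have it", "symptoms"),
    ("signs of the disease", "symptoms"),
    ("what is the dosage", "dosage"),
    ("how much should I take", "dosage"),
    ("latest research on treatment", "research"),
]

# Frequency of each intent (step 4).
freq = Counter(intent for _, intent in labeled)

# Frequently asked intents are FAQ-style (candidates for WCS); rare intents
# are long tail (candidates for WDS). The threshold of 2 fits this toy data.
faq_intents = {i for i, n in freq.items() if n >= 2}
long_tail_intents = {i for i, n in freq.items() if n < 2}
print(sorted(faq_intents), sorted(long_tail_intents))
```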

Figure 3: Question Answer Solutions leveraging Watson Conversation Service (WCS) and Watson Discovery Service (WDS)

5.)  For FAQ-style questions, the Watson Conversation service can provide the responses once it is trained on the intents and the relevant entities are extracted. If entities are missing or the question is ambiguous, WCS can handle disambiguation and collect the missing information via dialog. Train WCS on the defined intents and entities, and build the dialog needed to collect all the required information.

6.)  For the long-tail questions, a search-based solution such as the Watson Discovery service is recommended. Set up WDS to crawl, convert, ingest, and enrich the corpus of documents identified in step 2. Experimenting with your data is necessary to identify the optimal configuration for converting your documents into the right format and enriching the unstructured text with the most relevant metadata, such as keywords, entities, sentiment, and categories. Once the corpus of documents is ingested into WDS collections, you can improve the results via relevancy training, a WDS feature that provides more relevant responses to natural language queries based on a training set of queries and associated responses with relevance labels.
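Relevancy training data pairs each natural language query with example documents and relevance labels. A minimal sketch of what such a training set might look like follows; the document ids and labels are invented, and the field names only loosely mirror the Discovery training-data format:

```python
# Illustrative relevancy-training examples: each query is paired with
# documents and graded relevance labels (higher = more relevant).
training_queries = [
    {
        "natural_language_query": "long-term complications of the disease",
        "examples": [
            {"document_id": "doc-complications", "relevance": 10},
            {"document_id": "doc-overview", "relevance": 2},
        ],
    },
    {
        "natural_language_query": "interaction with blood pressure medication",
        "examples": [
            {"document_id": "doc-interactions", "relevance": 10},
        ],
    },
]

# Sanity check before uploading: every query needs at least one labeled
# example for the training to be useful.
assert all(q["examples"] for q in training_queries)
print(len(training_queries), "training queries ready")
```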

Figure 4: Question distribution, blue is short tail (WCS) and green is long tail (WDS)

7.)  An important phase before creating the composite solution is to test the WCS and WDS components separately. For WCS, you may be testing the performance of the classification model (with metrics such as accuracy, precision, and recall); for WDS, you may be testing the relevancy model (with metrics such as NDCG). The exact metrics and acceptance criteria will vary by component and use case. However, the general approach is as follows:

  • Split the questions (labeled with intents) into train/test sets randomly.
  • Use the training set to train the machine learning engine.
  • Leverage the test set to evaluate the performance of the machine learning models.
  • Update the training set and iterate until your target metrics are achieved.
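The split-and-evaluate loop above can be sketched in a few lines. The data and the "model" below are toys: a majority-class predictor stands in for the trained classifier, purely to show where the real predictions would plug in:

```python
import random
from collections import Counter

# Toy labeled questions (text, intent); in practice these are the
# representative questions collected in step 1.
data = [("question %d" % i, "faq" if i % 2 else "long_tail")
        for i in range(20)]

# Randomly split into training and test sets (80/20).
random.seed(0)
random.shuffle(data)
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

# Stand-in "model": always predicts the most frequent intent in the
# training set. A real evaluation would query the trained classifier here.
majority = Counter(label for _, label in train).most_common(1)[0][0]

correct = sum(1 for _, label in test if label == majority)
accuracy = correct / len(test)
print("test accuracy: %.2f" % accuracy)
```

Keeping the split random (and ideally stratified by intent) avoids leaking training questions into the test set, which would inflate the reported metrics.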

For more details on evaluating the machine learning models, please consult this blog – Train and evaluate custom machine learning models of Watson Developer Cloud.

8.)  Once WCS and WDS are set up and trained, there are multiple approaches you can take to integrate them into your question/answer solution.

  • One common approach is to have your application/orchestrator send the question to WCS first. The WCS implementation is designed either to contain an adequate answer in the dialog output or to set a predefined flag that directs the orchestrator to request an answer from WDS. This flag can be triggered by low confidence in the response, or by the intent/entity identified (i.e., classification of known long-tail questions). How the application/orchestrator reacts to this flag can vary; two possibilities are:

i.) The application/orchestrator can send the exact question to WDS, either as a natural language query or as a regular query.

ii.) The application/orchestrator can directly look up an answer referenced in the WCS output. For example, WCS might return a document id and set the trigger flag; the application/orchestrator would then know to request that specific document from WDS.

  • Another variation of the question/answer solution involves extracting information from WCS that helps WDS identify the most relevant response, essentially building a more structured WDS query from the information extracted in WCS. For example, if the WDS documents include metadata identifying the intent each document is best suited to answer, the question first passes through WCS for intent classification, and the WDS query then combines a filter on the extracted intent with a natural language query consisting of the original question. Yet another variation builds the WDS query from only the entities extracted by WCS and uses the intent as a category filter.
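The routing and query-building logic described above can be sketched as follows. Everything here is hypothetical: the `search_needed` flag, the `LONG_TAIL_INTENTS` set, and the `metadata.intent` field are assumptions about how WCS and the corpus were set up, the WCS response is a stubbed dict, and the filter string only imitates the Discovery query syntax; no service calls are made:

```python
LOW_CONFIDENCE = 0.5
LONG_TAIL_INTENTS = {"research"}  # hypothetical long-tail intent labels

def build_wds_query(question, wcs_response):
    """Build Discovery-style query parameters, filtered by intent if known.

    Assumes documents were enriched at ingestion with a `metadata.intent`
    field; the filter syntax is modeled on the Discovery query language.
    """
    params = {"natural_language_query": question}
    intents = wcs_response.get("intents", [])
    if intents:
        params["filter"] = 'metadata.intent::"%s"' % intents[0]["intent"]
    return params

def route(question, wcs_response):
    """Return ("wcs", answer_text) or ("wds", query_params)."""
    intents = wcs_response.get("intents", [])
    confidence = intents[0]["confidence"] if intents else 0.0
    intent = intents[0]["intent"] if intents else None
    flagged = wcs_response.get("context", {}).get("search_needed", False)
    if flagged or confidence < LOW_CONFIDENCE or intent in LONG_TAIL_INTENTS:
        # Fall back to Discovery with a query built from the WCS output.
        return ("wds", build_wds_query(question, wcs_response))
    return ("wcs", wcs_response["output"]["text"])

# Stubbed WCS response classified as a long-tail intent -> routed to WDS.
resp = {"intents": [{"intent": "research", "confidence": 0.9}],
        "context": {}, "output": {"text": ["..."]}}
target, payload = route("latest research on treatment", resp)
print(target)  # wds
```

In a real orchestrator, `build_wds_query` would feed a Discovery query call, and the confidence threshold and long-tail intent list would be tuned from the question distribution measured in step 4.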

Once the solution is set up, it is critical to continually collect new questions from end users, identify which questions are not adequately addressed by the solution, and update the training data to improve the system.


Question answer use cases are among the most common and most challenging cognitive patterns, as they require understanding what the user is asking and providing the most relevant response(s). Collecting real end-user questions and understanding the distribution of those representative questions are critical steps in guiding the solution, because different question types are better handled with different tools. For example, Watson Conversation Service is the recommended service for handling FAQ-style questions, while Watson Discovery Service is a better option for handling long-tail questions. The orchestration logic of a question answer solution can be quite complicated, deciding not only between responses from WCS and WDS but also how to extract information from WCS to better guide the cognitive search in WDS.

Learn more about Watson Conversation and Discovery
