In this post, I’ll be covering a new code pattern for analyzing customer feedback with Watson Discovery. The code pattern outlines how you can use Watson Discovery and Watson Knowledge Studio to build a powerful web application that extracts insights from natural language customer feedback. The code for this pattern is freely available on GitHub.

In this app, you begin with a publicly available dataset of several thousand food reviews written by real Amazon customers. The dataset comprises semi-structured data points, ranging from numeric ratings to categories to natural language food reviews. This code pattern lets you see the review data in aggregate, ask natural language questions about the data, and see answers to pertinent questions (leveraging proprietary Discovery enrichments, of course).

The Dashboard

The main page, a.k.a. the Dashboard, shows an overview of all reviews. The Dashboard has four main components: a set of high-level filters, a set of enrichment filters, relevant customer reviews, and a data visualization section. High-level filters, located at the top of the Dashboard page, let you filter by a review’s emotional valence (positive, negative, or all), product type, and reviewer ID. With a dataset like this, for example, you could determine whether a reviewer named Walter White had left negative reviews about baking soda.
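Under the hood, filter combinations like these map naturally onto Watson Discovery’s query language, where a comma between clauses means logical AND. Here is a minimal sketch of how the app’s high-level filters might be assembled into a filter string; `enriched_text.sentiment.document.label` is Discovery’s standard sentiment enrichment field, while `product_type` and `reviewer_id` are hypothetical field names standing in for whatever the ingested dataset actually uses:

```python
def build_filter(sentiment=None, product_type=None, reviewer_id=None):
    """Combine the Dashboard's high-level filters into one Discovery
    query-language filter string (comma = logical AND)."""
    clauses = []
    if sentiment in ("positive", "negative"):  # "all" means no sentiment clause
        clauses.append(f"enriched_text.sentiment.document.label:{sentiment}")
    if product_type:
        clauses.append(f'product_type:"{product_type}"')   # hypothetical field
    if reviewer_id:
        clauses.append(f'reviewer_id:"{reviewer_id}"')     # hypothetical field
    return ",".join(clauses)
```

For instance, `build_filter("negative", "baking soda")` yields a filter that narrows the result set to negative baking-soda reviews before any further drill-down.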

Enrichment filters are one of the most impressive features of Watson Discovery, and this app showcases them in all their glory. Let’s say we have used the high-level filters to drill down to the list of negative reviews on baking soda. Our enrichment filters can then be applied, enabling us to see, say, the common Keywords or Concepts at play. If multiple reviewers have complained about the consistency or the packaging, it will be reflected in the drop-down menu items on offer for Keywords and Concepts. Click one of these menu items, and the related customer reviews will appear. (Whether Walter White’s will be among them, we cannot say due to anonymization.)

Indeed, relevant customer reviews are another hallmark of this app. Reviews in isolation are meaningless. But reviews that surface trends or that can be called up in response to user questions are immensely valuable. For this reason, query responses – be they a word cloud or a sentiment chart – are accompanied by customer reviews relevant to the query. Review metadata (such as publish date, rating, or helpfulness) is also included next to the review text.

Finally, data visualizations are an integral part of this code pattern. While developers can of course call upon a wide range of visualization types, we have pre-populated the data viz section of the UI with a sentiment chart and word cloud, which are typically quite meaningful in response to food review questions.
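The word cloud, for example, can be fed directly from a `term` aggregation response. In Discovery’s response format, each bucket of a term aggregation carries a `key` (the term) and `matching_results` (how many documents matched); a small sketch of reshaping that into word-cloud input:

```python
def word_cloud_data(aggregation_results):
    """Turn Discovery term-aggregation buckets into (word, weight) pairs,
    ready to hand to a word-cloud rendering library."""
    return [(r["key"], r["matching_results"]) for r in aggregation_results]

# Shape of a (made-up) aggregation response for negative baking-soda reviews:
sample = [
    {"key": "packaging", "matching_results": 42},
    {"key": "consistency", "matching_results": 17},
]
# word_cloud_data(sample) -> [("packaging", 42), ("consistency", 17)]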
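The word cloud, for example, can be fed directly from a `term` aggregation response. In Discovery’s response format, each bucket of a term aggregation carries a `key` (the term) and `matching_results` (how many documents matched); a small sketch of reshaping that into word-cloud input (the sample data here is made up for illustration):

```python
def word_cloud_data(aggregation_results):
    """Turn Discovery term-aggregation buckets into (word, weight) pairs,
    ready to hand to a word-cloud rendering library."""
    return [(r["key"], r["matching_results"]) for r in aggregation_results]

# Illustrative aggregation buckets for negative baking-soda reviews:
sample = [
    {"key": "packaging", "matching_results": 42},
    {"key": "consistency", "matching_results": 17},
]
```

The heavier the term’s weight, the larger it renders, so recurring complaints stand out at a glance.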

So that’s the Dashboard in a nutshell – a glanceable interface that displays much of what you might want to know about review data in aggregate.

However, to truly understand your customers’ feedback, a dashboard is necessary but not sufficient. Therefore, we’ve added three additional tabs to the default app interface.

Additional views

The interactive queries tab lets you use natural language to ask a question. The natural language search field is accompanied by our standard filters: emotional valence, product type, and reviewer ID. This way, if you want to know something very specific about your customer reviews, you can submit a natural language search with that particular question (for example, “which coffee maker received the most negative reviews?”). You can refine such results with the adjacent filters as well. For example, to see negative reviews about coffee makers authored by a specific reviewer, you could select a reviewer via the reviewer drop-down menu.
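In Discovery terms, this tab combines a `natural_language_query` parameter with the same filter string the Dashboard uses. A minimal sketch of assembling the query parameters (the `reviewer_id` field name is hypothetical; `enriched_text.sentiment.document.label` is Discovery’s standard sentiment field, and the assembled dict would be passed to the SDK’s `query` call):

```python
def build_query(question, sentiment=None, reviewer_id=None):
    """Assemble Discovery query parameters: a natural language question,
    optionally narrowed by the standard sentiment and reviewer filters."""
    params = {"natural_language_query": question}
    clauses = []
    if sentiment in ("positive", "negative"):
        clauses.append(f"enriched_text.sentiment.document.label:{sentiment}")
    if reviewer_id:
        clauses.append(f'reviewer_id:"{reviewer_id}"')  # hypothetical field
    if clauses:
        params["filter"] = ",".join(clauses)
    return params

# These params would then go to the Watson SDK, roughly:
# discovery.query(environment_id, collection_id, **build_query(
#     "Which coffee maker received the most negative reviews?",
#     sentiment="negative"))
```

Discovery ranks results by relevance to the question, while the filter guarantees every result also satisfies the drop-down selections.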

The common queries tab is currently designed to show five common but innately complex queries, along with their respective answers. They are complex because they cannot be answered without thoroughly understanding and analyzing the natural language text. For example, you can quickly identify products in a given category (say, coffee makers) that have the highest positive sentiment but the lowest scores from reviewers – an anomaly that would not have been surfaced without unstructured text analysis. Queries are answered with a word cloud, the top five entities, and the relevant reviews themselves.
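That positive-sentiment-but-low-rating anomaly illustrates why these queries need text analysis: the sentiment label comes from the enriched review text, while the rating is a structured field, and Discovery’s query language lets you constrain both at once (numeric comparisons like `<=` are supported). A sketch of such a filter, where `product_type` and `rating` are hypothetical field names for this dataset:

```python
def anomaly_filter(category, max_rating=2):
    """Filter for reviews whose *text* reads positive but whose *star rating*
    is low -- a mismatch only unstructured-text analysis can surface."""
    return (
        f'product_type:"{category}",'                       # hypothetical field
        "enriched_text.sentiment.document.label:positive,"  # from text enrichment
        f"rating<={max_rating}"                             # hypothetical field
    )
```

Running this filter for `"coffee makers"` would return exactly the contradictory reviews the common queries tab highlights.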

We hope that you find this code pattern valuable and that you’ll be able to adapt it to your own use cases. Best of luck!
