Rizwan Dudekula and Lakisha Hall also contributed to this blog post.

The industry is trending toward the cognitive era. Cognitive entities span a spectrum of capabilities, from simple automation to systems intelligent enough to interact naturally with people and participate in human workflows, directly or indirectly. The cognitive entities on the market vary in complexity and intelligence; the most common are virtual agents, deployed across industries to solve a wide range of problems. IBM Watson™ Assistant (formerly Watson Conversation) is among the most widely adopted. Input to Watson Assistant is natural language and is not bound by any rules, which places additional responsibility on Watson Assistant to handle syntactic and semantic errors. Watson Assistant handles many of these fixes internally, but other mechanisms are also available. This post describes boosters for identifying and fixing syntactic errors at the unigram and bigram levels by adapting existing open source components. Before delving into the module, we reiterate the definitions of unigrams and bigrams and how booster components can influence them.

Boosters, unigrams, and bigrams

Boosters

Boosters are an external capability, realized as a component or set of components with their own flows, that filter and fix Watson Assistant inputs. Boosters can operate at the unigram and bigram levels.

Unigrams

A unigram-based model treats each word as an independent entity, and booster algorithms are applied to each word to fix syntactic errors. For example, the user interaction “actiavte my credit card” is chunked into individual words before the fixes are applied.
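To make the unigram flow concrete, here is a minimal Node.js sketch of the idea: chunk the utterance into single words and replace each unknown word with the closest corpus word by edit distance, preferring higher-weighted candidates. The corpus contents, weights, and distance threshold are illustrative placeholders, not the production booster or the spelling-corrector package itself.

    // Illustrative weighted corpus; in practice this is built from public and domain data.
    const corpus = {
      i: 500, want: 200, to: 400, activate: 120, my: 300,
      credit: 90, card: 150, past: 60, post: 70, paid: 80, plan: 90,
    };

    // Levenshtein edit distance between two strings.
    function editDistance(a, b) {
      const d = Array.from({ length: a.length + 1 }, (_, i) =>
        Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
      );
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          d[i][j] = Math.min(
            d[i - 1][j] + 1,                                   // deletion
            d[i][j - 1] + 1,                                   // insertion
            d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
          );
        }
      }
      return d[a.length][b.length];
    }

    // Replace each unigram with the closest corpus word within edit distance 2,
    // preferring higher-weighted (more frequent) candidates.
    function correctUnigrams(utterance) {
      return utterance
        .toLowerCase()
        .split(/\s+/)
        .map((word) => {
          if (corpus[word]) return word; // already a known word
          let best = word;
          let bestScore = -Infinity;
          for (const candidate of Object.keys(corpus)) {
            const dist = editDistance(word, candidate);
            const score = corpus[candidate] - dist * 10;
            if (dist <= 2 && score > bestScore) {
              bestScore = score;
              best = candidate;
            }
          }
          return best;
        })
        .join(' ');
    }

    console.log(correctUnigrams('actiavte my credit card')); // -> "activate my credit card"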

Bigrams

A bigram-based model clusters each word with its adjacent words (both the preceding and following word), and booster algorithms are applied to each cluster. For example, “I want to actiavte my credit card” forms the clusters “I want,” “want to,” “to actiavte,” “actiavte my,” “my credit,” and “credit card.”
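As a minimal Node.js sketch (illustrative only), the clustering step is a sliding window of size two over the tokens:

    // Form bigram clusters (duos) from a user utterance.
    function toBigrams(utterance) {
      const tokens = utterance.split(/\s+/).filter(Boolean);
      const bigrams = [];
      for (let i = 0; i < tokens.length - 1; i++) {
        bigrams.push(tokens[i] + ' ' + tokens[i + 1]);
      }
      return bigrams;
    }

    console.log(toBigrams('I want to actiavte my credit card'));
    // -> [ 'I want', 'want to', 'to actiavte', 'actiavte my', 'my credit', 'credit card' ]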

Solution context

We adapted the algorithm to handle unigrams and eventually extended to bigrams.

Figure 1: Solution context

Existing unigram algorithm: The original unigram algorithm leveraged a public corpus to identify the most frequently occurring words and build a public corpus library, with a weight added for each occurrence. At runtime, each unigram is compared against the public library by edit distance, and words are replaced with the closest ones in the library. This was not suitable for a domain-specific environment, so we adapted the unigram booster by augmenting the ground-truth corpus with the enterprise-specific corpus and manually cleaning this corpus for specific grammatical fixes. This extracted the domain-specific terminologies, which the spelling corrector used for comparison. We achieved a recall of about 81 percent, but unigram-based edit distance suffered from the following drawbacks, as anticipated:

  1. It did not account for context words (the preceding and following words), which mandated the use of bigrams.
  2. Sometimes the utterances contained words from other languages. For example, in “I am looking to comprarr a prepaid,” comprar is the Spanish word for buy, and the user also made a typo in it. The unigram booster we built was meant to identify only English terms, and we observed that it did not pick up comprarr during execution.
  3. The libraries adopted a weight-based model based on word occurrence counts from both the public and domain-specific corpora. We had to artificially tune the weighting of the domain-specific words to fit this model; for example, the word “product” had a weighting of 100 from the public corpus, so to ensure it was picked up we artificially increased its weight to, say, 10,000 (see the sketch after this list).
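The following Node.js sketch illustrates the weight-tuning idea from the last point: merge the public corpus with the domain corpus and boost domain terms so they can compete with heavily weighted public words. The corpora, counts, and boost factor here are invented for illustration and are not the production values.

    // Merge a public corpus with a domain corpus, boosting domain-specific terms.
    function buildCorpus(publicCorpus, domainCorpus, boost = 100) {
      const merged = Object.assign({}, publicCorpus);
      for (const [word, count] of Object.entries(domainCorpus)) {
        merged[word] = (merged[word] || 0) + count * boost;
      }
      return merged;
    }

    const publicCorpus = { product: 100, project: 250 };
    const domainCorpus = { prepaid: 40, postpaid: 35 };
    console.log(buildCorpus(publicCorpus, domainCorpus));
    // -> { product: 100, project: 250, prepaid: 4000, postpaid: 3500 }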

Bigram algorithm

The bigram algorithm we developed focuses not only on fixes to the individual words but also on their association patterns. We adopted the following steps:

  • Aggregated clusters of duo (two-word) combinations from the public corpus
  • Created a public library with a weighting for each duo
  • Created an enterprise library, with more weighting for duos common with the public library
  • Extracted each runtime duo and compared it; when a matching duo with a lesser or equal weight was found in the enterprise library, the runtime bigram was matched to that duo and replaced

Note: The duo ground truth should contain only allowable clusters, never invalid ones.

The output (the corrected interaction with the replaced duo) was fed to Watson Assistant for classification/extraction and response.

Code snippet:

Figure 2. Node.js-based runtime module for bigrams
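The original snippet appears as an image in the blog post. As a stand-in, the following Node.js sketch captures the same idea of a runtime bigram fixer: look up each runtime duo in the enterprise library and, if it is unknown, replace it with the closest library duo. It uses a simple edit-distance nearest-match rule rather than the exact weight-comparison rule described above, and the library contents and threshold are illustrative assumptions.

    // Sketch of a runtime bigram fixer. The enterprise library maps known-good
    // duos to weights; its contents and the distance threshold are illustrative.
    const enterpriseLibrary = { 'post paid': 500, 'credit card': 800, 'prepaid plan': 300 };

    // Levenshtein edit distance (same helper as in the unigram sketch).
    function editDistance(a, b) {
      const d = Array.from({ length: a.length + 1 }, (_, i) =>
        Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
      );
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          d[i][j] = Math.min(
            d[i - 1][j] + 1,
            d[i][j - 1] + 1,
            d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)
          );
        }
      }
      return d[a.length][b.length];
    }

    // Walk the utterance duo by duo and replace unknown duos with the closest
    // enterprise-library duo within edit distance 2.
    function correctBigrams(utterance) {
      const tokens = utterance.toLowerCase().split(/\s+/).filter(Boolean);
      for (let i = 0; i < tokens.length - 1; i++) {
        const duo = tokens[i] + ' ' + tokens[i + 1];
        if (enterpriseLibrary[duo]) continue; // already a known-good duo
        let best = null;
        let bestDist = Infinity;
        for (const candidate of Object.keys(enterpriseLibrary)) {
          const dist = editDistance(duo, candidate);
          if (dist <= 2 && dist < bestDist) {
            bestDist = dist;
            best = candidate;
          }
        }
        if (best) {
          const parts = best.split(' ');
          tokens[i] = parts[0];
          tokens[i + 1] = parts[1];
        }
      }
      return tokens.join(' ');
    }

    console.log(correctBigrams('I want to move from past paid')); // -> "i want to move from post paid"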

Results: We achieved a recall of about 73 percent. For example, a runtime input bigram identified as “Past Paid” made sense as a duo on its own, but within the context of the enterprise it had to be “Post Paid,” and the bigram algorithm fixed it.

Future work: We intend to extend our work to trigram-based models to address scenarios beyond the above. We anticipate that the context driving trigrams will be influenced not only by the preceding and following words but also by previous user interactions.

Runtime context

We used a multi-pipeline execution for better results; in other words, we ran a unigram-based execution/fix first and then forked to the bigram (or trigram) pass.
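A minimal sketch of this wiring, assuming the correctUnigrams and correctBigrams helpers from the earlier illustrative sketches, might look like this:

    // Multi-pass booster pipeline: a unigram pass first, then a bigram pass over
    // the partially corrected text before it is sent to Watson Assistant.
    // correctUnigrams and correctBigrams are the illustrative sketches shown earlier.
    function boosterPipeline(utterance) {
      const afterUnigrams = correctUnigrams(utterance);
      return correctBigrams(afterUnigrams);
    }

    console.log(boosterPipeline('I want to actiavte my past paid plan'));
    // -> "i want to activate my post paid plan"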

References: https://www.npmjs.com/package/spelling-corrector

Conclusion

We discussed how Watson Assistant was supplemented with peripheral boosters based on unigram and bigram domain-adapted models. The unigram-based model had its own disadvantages, and we leveraged bigrams for further refinement. We plan to extend the work to trigrams and anticipate that trigram-based models will have context driven from multiple positions.
