Putting AI to work with Watson

Navigating IBM data and AI products

Many enterprises are now looking at artificial intelligence (AI) and how to apply it to their own data. Enterprise needs vary. Depending on what data is being stored and sent, the data might need to stay on-premises. IBM is a great fit for enterprises with strict regulatory requirements (like HIPAA or GDPR) because IBM keeps those requirements in mind as the software is written. IBM also provides on-ramps with free-to-use offerings, learning paths, tutorials, and videos.

In this blog, I’ll look at three sets of data and AI products from a developer’s point of view: Watson APIs, Watson Studio, and Watson Machine Learning Accelerator.

Product families

Before looking at each set of products, let’s take a quick detour and see how they can be used.

Deployment options

Most of the IBM data and AI products can be accessed through:

  1. IBM Cloud: IBM's public cloud, which offers Lite and pay-as-you-go plans
  2. IBM Cloud Pak for Data: A bundle of data and AI add-ons that can be installed on Red Hat OpenShift

The following image shows a representation of each stack for consuming the data and AI offerings on both IBM Cloud Pak for Data and IBM Cloud.

(Diagram: the IBM Cloud stack and the IBM Cloud Pak for Data stack, side by side)

NOTE: The Watson Machine Learning Accelerator product suite requires Power hardware.

Watson APIs

These services are the easiest to understand, and you don't need to know AI concepts to use them. Each service exposes a REST API and is meant to be called at the application level, typically through an SDK. The services use pre-built models and also provide a user-friendly way to create your own custom models.

Watson APIs:

  • Include services such as Assistant, Discovery, Tone Analyzer, and Visual Recognition
  • Have well-maintained APIs
  • Have SDKs for popular languages (such as Java, Node.js, Swift, and Go)
  • Have generous free tier plans that let you get your hands dirty without excessive costs
  • Are available on the IBM Cloud and as add-ons to IBM Cloud Pak for Data
  • Are truly a great place for developers to start their AI journey

Watson Assistant

The main “chatbot” offering, and so much more. Watson Assistant comes pre-trained with industry-relevant content. It has an intuitive editor; provides middleware for integrating with Slack, Facebook, and many other platforms; and can even be integrated with Watson Voice Gateway so that you can talk to your assistant over the phone.
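
To give a feel for how the REST APIs and SDKs fit together, here is a minimal sketch of sending a message to an assistant with the ibm-watson Python SDK. The API key, service URL, and assistant ID are placeholders, and exact parameters can vary by SDK version.

```python
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; use the values from your own service instance.
authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV2(version='2019-02-28', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Sessions keep conversational state between message calls.
session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()
response = assistant.message(
    assistant_id='YOUR_ASSISTANT_ID',
    session_id=session['session_id'],
    input={'message_type': 'text', 'text': 'What are your hours?'}
).get_result()

print(response['output']['generic'][0]['text'])
```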

Watson Discovery

The “AI search” offering. Simply put, use the Discovery tooling or API to load a set of documents (such as .pdf or .doc files) and have Discovery build a model for you. You can then query that model with natural language, and Discovery pulls out the relevant parts of the documents. You can make Discovery smarter by using “Smart Document Understanding” to ensure that documents are read properly or by using “Watson Knowledge Studio” to build a domain- or industry-specific model.
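
For example, here is a minimal, hedged sketch of a natural language query against a Discovery collection with the Python SDK (V1 API); the IDs and service URL are placeholders.

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(version='2019-04-30',
                        authenticator=IAMAuthenticator('YOUR_API_KEY'))
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

# Ask a question in plain language; Discovery returns the most relevant documents.
results = discovery.query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_id='YOUR_COLLECTION_ID',
    natural_language_query='How do I reset my password?',
    count=3
).get_result()

for doc in results['results']:
    print(doc['id'], doc.get('extracted_metadata', {}).get('filename'))
```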

Watson Visual Recognition

Explaining this service is pretty simple. Step one is to upload an image, and step two is to read the output from the service after it classifies the image. That’s it! By default, Watson Visual Recognition comes with two pre-built classifiers (general and food). But where it excels is letting you create your own classifiers. You can use the Watson Visual Recognition tools to upload your own images to serve as “training data,” and Watson Visual Recognition creates the model for you. This model can be called through a REST API or exported as a Core ML model for iOS devices.
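
Here is a minimal sketch of that two-step flow with the Python SDK (V3 API), assuming a local image file; the API key and file name are placeholders.

```python
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

visual_recognition = VisualRecognitionV3(version='2018-03-19',
                                         authenticator=IAMAuthenticator('YOUR_API_KEY'))

# Step one: upload an image. Step two: read the classes the service found.
with open('fruit.jpg', 'rb') as image_file:
    result = visual_recognition.classify(images_file=image_file).get_result()

for c in result['images'][0]['classifiers'][0]['classes']:
    print(c['class'], c['score'])
```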

And more

  • Watson Text to Speech: Text goes in, audio comes out (see the sketch after this list)
  • Watson Speech to Text: User’s voice goes in, text comes out
  • Watson Translator: Text goes in, select your source and destination languages, more text comes out
  • Watson Natural Language Understanding: A short string of text goes in, and concepts, entities, keywords, emotions, and sentiment comes out
  • Watson Natural Language Classifier: Build text classifiers based on training data, and then classify text against the models that are built
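
As a flavor of how simple these are, here is a minimal sketch of the Text to Speech case with the Python SDK; the API key, service URL, voice, and output file are placeholders.

```python
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

text_to_speech = TextToSpeechV1(authenticator=IAMAuthenticator('YOUR_API_KEY'))
text_to_speech.set_service_url('https://api.us-south.text-to-speech.watson.cloud.ibm.com')

# Text goes in, a WAV file comes out.
with open('hello.wav', 'wb') as audio_file:
    audio_file.write(
        text_to_speech.synthesize(
            'Hello from Watson',
            voice='en-US_AllisonVoice',
            accept='audio/wav'
        ).get_result().content
    )
```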

Tabular representation

(Table: for each offering (Assistant, Discovery, Visual Recognition, Text to Speech, Speech to Text, Translator, Natural Language Understanding, and Natural Language Classifier), availability on IBM Cloud, availability on premises, free tier on Cloud, and SDK support)

Now on to the harder stuff: no more pre-made models. Let's move on to Watson Studio.

Watson Studio

Watson Studio gives you an environment and tools to collaboratively work with data. Data can be imported (through connectors), viewed, refined, and analyzed. Models can then be created with the Watson Studio Deep Learning and Machine Learning tools.

Watson Studio:

  • Is based on open source technologies such as Jupyter Notebooks and RStudio
  • Provides tools based on popular open source frameworks such as TensorFlow, Keras, and PyTorch
  • Includes 2 GB of free storage on IBM Cloud Object Storage
  • Is available on IBM Cloud and as add-ons to IBM Cloud Pak for Data
  • Is an excellent platform to begin your data science journey

Jupyter Notebooks

Watson Studio provides support for running Jupyter Notebooks. There are several environments to choose from (Python 3.x, R 3.4, and Scala 2.11), and each has the option to add a Spark kernel as well. You can share your notebook collaboratively, add data from your Object Storage as a data frame, publish your notebook, and use many of the other features you’ve come to expect.
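
As one example, here is a hedged sketch of pulling a CSV file from IBM Cloud Object Storage into a pandas DataFrame inside a notebook, using the ibm-cos-sdk (ibm_boto3) client. The credentials, endpoint, bucket, and object names are placeholders; Watson Studio can generate a similar snippet for you when you add a data asset.

```python
import io

import ibm_boto3
import pandas as pd
from ibm_botocore.client import Config

# Placeholder credentials for your Cloud Object Storage instance.
cos = ibm_boto3.client(
    service_name='s3',
    ibm_api_key_id='YOUR_COS_API_KEY',
    ibm_service_instance_id='YOUR_COS_INSTANCE_CRN',
    config=Config(signature_version='oauth'),
    endpoint_url='https://s3.us.cloud-object-storage.appdomain.cloud'
)

body = cos.get_object(Bucket='my-project-bucket', Key='sales.csv')['Body']
df = pd.read_csv(io.BytesIO(body.read()))
df.head()
```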

Watson Machine Learning

The Watson Machine Learning service is a major part of Watson Studio. It provides a set of APIs that can be called to interact with a machine learning model. The model can be created using a Jupyter Notebook, SPSS Modeler, or AutoAI, and then deployed to an instance of the Watson Machine Learning service. After it’s deployed, you can score data by using a REST call against an API that the service provides.
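
After deployment, scoring is an HTTP POST. Here is a hedged sketch using the requests library; the token, deployment URL, field names, and payload shape are placeholders and vary by Watson Machine Learning API version, so check your deployment details for the exact endpoint.

```python
import requests

headers = {
    'Authorization': 'Bearer YOUR_IAM_TOKEN',
    'Content-Type': 'application/json',
}

# Example payload shape for a tabular model with two input features.
payload = {
    'input_data': [{
        'fields': ['age', 'income'],
        'values': [[42, 55000]]
    }]
}

scoring_url = 'https://us-south.ml.cloud.ibm.com/v4/deployments/YOUR_DEPLOYMENT_ID/predictions'
response = requests.post(scoring_url, json=payload, headers=headers)
print(response.json())
```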

OpenScale

Watson OpenScale allows monitoring and management of machine learning models that are built on various platforms: Watson Studio, Amazon SageMaker, Azure Machine Learning, and other popular open source frameworks.

And more

  • AutoAI: A graphical tool in Watson Studio that automatically analyzes your data and generates candidate model pipelines that are customized for your predictive modeling problem.
  • SPSS Modeler: Watson Studio offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics.
  • Data visualization: Watson Studio provides the ability to visualize data with Data Refinery.
  • Cognos Dashboards: A dashboard for viewing and analyzing data without writing code in pandas or PixieDust.
  • Connections: Watson Studio provides connections to import data from IBM services (Db2, Cognos, Cloudant, Cloud Object Storage, and many more) and from third-party services (Amazon S3, Hortonworks HDFS, Microsoft SQL Server, Salesforce, MySQL, Oracle, Google BigQuery, and many more).

Watson Machine Learning Accelerator

IBM Watson Machine Learning Accelerator is geared toward enterprises. It is a software bundle that is optimized to run on Power hardware. It bundles IBM PowerAI, IBM Spectrum Conductor, IBM Spectrum Conductor Deep Learning Impact, and support from IBM for the whole stack including the open source deep learning frameworks. It provides an end-to-end, deep learning platform for data scientists.

PowerAI Vision

IBM PowerAI Vision provides a video and image analysis platform that offers built-in deep learning models that learn to analyze images and video streams for classification and object detection. PowerAI Vision is built on open source frameworks and provides sophisticated methods to create models with an easy-to-understand user interface.

And more

  • Spectrum Conductor: Deploys modern computing frameworks and services for an enterprise environment, both on-premises and in the cloud.
  • Snap ML: A library developed by IBM Research for training generalized linear models with the intent of removing training time as a bottleneck. Snap ML scales gracefully to data sets with billions of examples and offers distributed training and GPU acceleration.
Steve Martinelli

Use Watson APIs on OpenShift

Before we talk about how to use Watson APIs on OpenShift, let’s quickly define what they are.

  • Watson APIs: A set of artificial intelligence (AI) services, available on IBM Cloud, that have a REST API and SDKs for many popular languages. Watson Assistant and Watson Discovery, to name a couple, are part of this set.

  • OpenShift: Red Hat OpenShift is a hybrid-cloud, enterprise Kubernetes application platform. IBM Cloud now offers it as a hosted solution or an on-premises platform as a service (PaaS): Red Hat OpenShift on IBM Cloud. It is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux. You can read more about the History of Kubernetes, OpenShift, and IBM in a blog post by Anton McConville and Olaph Wagoner.

Now, let’s talk about how to combine the two. In our opinion, there are really two ways to use Watson APIs in an OpenShift environment.

  1. Containerizing your application with Source-to-Image (S2I) and calling the Watson APIs directly at the application layer
  2. Using Cloud Pak for Data add-ons for specific APIs (more on this option later)

Let’s dig into the first option.

Source-to-Image

What is S2I?

Source-to-Image is a framework for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. S2I comes with OpenShift, but it is also available as a stand-alone tool. Take a look at how simple it is to use S2I through the OpenShift console.

How do I use S2I for my Watson app?

Say you have a Node.js app, and you’d like to deploy it in a container running on OpenShift. Here’s what you do. (Our examples in this section use Red Hat OpenShift on IBM Cloud.)

  1. From the OpenShift catalog, select a runtime (for example, Node.js or Python) and point to a repository.

  2. Add configuration for the application, such as any Watson services API keys, as a Config Map.

  3. Associate that Config Map with your app.

And you’re done! The containerized app is deployed and can now call any existing Watson service through its REST API.
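
For example, here is a hedged sketch (in Python for brevity, though the same idea applies to the Node.js app above) of reading the credentials that the Config Map injects as environment variables and building a Watson client from them; the variable names are placeholders.

```python
import os

from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# The Config Map exposes these values as environment variables inside the
# container; the names used here are placeholders.
authenticator = IAMAuthenticator(os.environ['ASSISTANT_APIKEY'])
assistant = AssistantV2(version='2019-02-28', authenticator=authenticator)
assistant.set_service_url(os.environ['ASSISTANT_URL'])
```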

What are the benefits?

  • Minimal refactoring of code
  • Source-to-Image’s ease of use
  • Fastest way to get started

References

We’ve already added OpenShift Source-to-Image instructions for some of our most popular Watson code patterns.

A quick example

We also created a quick video example that demonstrates how to use the approach mentioned above.


Cloud Pak for Data

What is Cloud Pak for Data?

Cloud Pak for Data can be deployed on OpenShift and includes a lot of AI and data products from IBM. These products include, but are not limited to, Watson Studio, Watson Machine Learning, Db2 Warehouse, and Watson Assistant.

How do I use Cloud Pak for Data for my Watson app?

Using our previous example, say that you have a Node.js app running on-premises and behind a firewall. In just a few minutes, you can update the application to call Watson APIs that are running on your Cloud Pak for Data installation.

  1. (Prerequisite) Install Cloud Pak for Data, on-premises, preferably on OpenShift.

  2. Install the Watson API kit add-on, the Watson Assistant add-on, and the Watson Discovery add-on. The Watson API kit includes Watson Knowledge Studio, Watson Natural Language Understanding, Watson Speech to Text, and Watson Text to Speech.

  3. Launch the Watson API service that you want to use and generate a new API Key.

  4. Update the application to use the new API key and REST endpoint (see the sketch below).
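
Here is a hedged sketch of what that application update might look like with the ibm-watson Python SDK, which supports Cloud Pak for Data credentials through a CloudPakForDataAuthenticator; the host names, route, and credentials are placeholders.

```python
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator

# Placeholder Cloud Pak for Data host and credentials.
authenticator = CloudPakForDataAuthenticator(
    username='YOUR_CPD_USERNAME',
    password='YOUR_CPD_PASSWORD',
    url='https://cpd.example.com/icp4d-api',
    disable_ssl_verification=True  # only if your cluster uses a self-signed cert
)

assistant = AssistantV2(version='2019-02-28', authenticator=authenticator)
# Point the client at the Assistant add-on's route on your cluster.
assistant.set_service_url('https://cpd.example.com/assistant/YOUR_INSTANCE_ID/api')
```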

What are the benefits?

  • If on-premises, REST calls never hit a public endpoint
  • Only some refactoring is needed, mostly at the configuration level

References

We’re still in the process of updating our content to work with Watson APIs on OpenShift, so here are a couple of references instead:

Thanks for reading our blog! Start the journey to containerizing your Watson applications by following our Sample using Watson Assistant or Sample using Watson Discovery. Or, if you’re interested in learning more about Cloud Pak for Data, check out this Overview of Cloud Pak for Data video.

Steve Martinelli
Scott D’Angelo

Behind the code: Connecting GRAMMY Artists with IBM Watson Discovery

The GRAMMYs and IBM had one goal: to bring technology and pop culture together in a single, highly engaging experience. Working together, the Recording Academy and IBM decided to use AI to surface hidden connections between GRAMMY-nominated artists over the years. And the result is something called GRAMMYconnect.

Here’s how it works. Most information about musicians is hidden in “dark” data, a vast universe of articles, biographies, and content across various sources that primarily contain natural language text. Identifying, reading, and understanding this content is a huge challenge. So we turned to Watson to help solve it.

Mining and analyzing unstructured data

Our biggest source of data came from the Watson News database of 14 million articles available through the Watson Discovery Service. We also mined the artist pages on GRAMMY.com as well as other publicly available data sources, like Muzooka. We used Watson Discovery to quickly ingest the unstructured data from these sources. Watson Discovery uses natural language processing to read and enrich each news article and piece of content with metadata, identifying artists, their attributes, and the primary connections between them.

GRAMMYconnect uses Watson Discovery to find uncommon connections between artists

Entity recognition is a powerful part of the Watson Discovery analysis. With it, we were able to identify and categorize entities as “Artist” or “Band,” and dig through our ingested document library to filter down to entity names that matched these criteria. This became the basis of our artist database that is built on IBM Cloudant. We then filtered this enormous data set of entities based on common mentions in the content, identified 50,000 artists and bands, and grouped the data points according to their respective associations.

Discovery uses NLP to find entities and their connections

We then used Knowledge Graph, a beta feature of Watson Discovery that provides the ability to “query by relationship,” allowing us to target and identify sentences where two artists were mentioned as having a specific type of relationship, like performing the same song or having a common influence. Watson Discovery Knowledge Graph gave us a quick and easy way to populate the GRAMMYconnect experience with tens of thousands of interesting data points that would inform our unique artist-to-artist relationships.

Ranking the connections

We wanted to focus the GRAMMYconnect experience on hidden connections, associations that would surprise the average music fan. To do this, we had to drill down to the connections that were rare, or at least unique enough to be surprising.

We started by getting counts of basic facts, like the awards an artist had won or the albums they produced. We assessed how many artists had that fact in common, which helped us understand how rare it was. Ordinarily, it would be time-consuming to use traditional SQL stored procedures for this, but that’s exactly the purpose for which the IBM Cloudant map-reduce views are built. We developed a system of views that would emit a single count for each unique fact we stored, per Artist entity. Next, the built-in Sum-Reduce function told us the total number of entities in our database that had that fact attached. This was essential in building a ranking algorithm that was both dynamic and performant in calculating matches and ranking those connections across 50,000 artists and millions of facts in a reasonable amount of time.
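
A hedged sketch of that pattern, using Cloudant's CouchDB-style HTTP API through the requests library (the account URL, database name, and document shape are illustrative, not the production design documents):

```python
import requests

CLOUDANT_DB = 'https://YOUR_ACCOUNT.cloudant.com/artists'
AUTH = ('YOUR_USERNAME', 'YOUR_PASSWORD')

design_doc = {
    'views': {
        'fact_counts': {
            # Map: emit one row per fact stored on an artist document.
            'map': """function (doc) {
                if (doc.type === 'artist' && doc.facts) {
                    doc.facts.forEach(function (fact) {
                        emit([fact.fact_type, fact.value], 1);
                    });
                }
            }""",
            # Reduce: the built-in sum totals how many artists share each fact.
            'reduce': '_sum'
        }
    }
}

requests.put(CLOUDANT_DB + '/_design/facts', json=design_doc, auth=AUTH)

# Query with group=true to get a count per unique fact across all artists.
counts = requests.get(CLOUDANT_DB + '/_design/facts/_view/fact_counts',
                      params={'group': 'true'}, auth=AUTH).json()
```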

The basic layer of our ranking algorithm aims to undervalue the most common fact types. Simple biographical facts that occur frequently are the least interesting. So by weighting each fact with the inverse of its frequency, we automatically deprioritize the most common facts across all artists. The second layer of the algorithm down-weights the most obviously connected artists, like bandmates or an artist and their producer, by prioritizing artists with the least number of facts in common.
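
To make the weighting concrete, here is an illustrative Python sketch of those two layers; the data shapes and the exact formula are assumptions for demonstration, not the production algorithm.

```python
def fact_weight(fact, fact_counts):
    """Weight a fact by the inverse of how many artists share it."""
    return 1.0 / fact_counts.get(fact, 1)

def connection_score(shared_facts, fact_counts):
    """Score an artist-to-artist connection from the facts the pair shares."""
    base = sum(fact_weight(fact, fact_counts) for fact in shared_facts)
    # Down-weight obviously connected pairs (bandmates, an artist and their
    # producer) that share many facts, so rarer connections rise to the top.
    return base / max(len(shared_facts), 1)
```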

Next, we wanted to make sure there was evidence behind each of the relationships that were presented. But after we started getting results, it became clear that not all evidence text was created equal. Sometimes we would get a well-written sentence that clearly talked about the two artists. But other times, we would have a basic list that didn’t include much information.

To prioritize the more interesting tidbits, we integrated the NLTK Python library’s part-of-speech tagger and developed a custom algorithm to evaluate sentences based on part of speech frequency and patterns, so that we could automatically prioritize the most interesting sentences, and not include ones that were simple lists.
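
Here is an illustrative sketch of that idea with NLTK's part-of-speech tagger; the scoring heuristic is an assumption for demonstration, not the custom algorithm used in production.

```python
import nltk

nltk.download('punkt', quiet=True)
nltk.download('averaged_perceptron_tagger', quiet=True)

def sentence_quality(sentence):
    """Favor prose-like evidence sentences over scraped lists of names."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(sentence))]
    if not tags:
        return 0.0
    verbs = sum(tag.startswith('VB') for tag in tags)
    nouns = sum(tag.startswith('NN') for tag in tags)
    # Well-written sentences contain verbs; long runs of bare nouns usually
    # mean the "sentence" was really a list.
    return (verbs + 1) / (nouns + 1)

candidates = [
    "Both artists performed on a 2004 tribute album honoring a shared influence.",
    "Grammy winners: Artist A, Artist B, Artist C, Artist D",
]
print(sorted(candidates, key=sentence_quality, reverse=True)[0])
```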

Creating the front-end experience

To make this experience highly engaging for our audience, we spent a lot of time on the front-end experience, from design to development. It was important to reduce load times on an experience serving up hundreds of thousands of data points in such a dynamic fashion. Because there would be little change to the data informing our connections, coupled with the fact that calculations for finding artist connections would not be performed live, IBM Cloud Object Storage was the ideal service for handling extremely large loads and serving cached JSON data.

We adapted our ranking algorithm to run in Watson Studio, in a high-performance, multiprocessing-aware environment, calculating over 20 million connections among 50,000+ artists, start to finish, in less than 30 minutes. Results were cached to Object Storage, where they could be made available to the front-end web application through standard HTTP requests.

Of course, the connection data is not the only content available in the GRAMMYconnect experience. We also included artist search, a way to track user preferences, and a way to manage which connections are trending based on user interest.

Finally, we turned to the robust IBM Cloud Functions service to develop serverless functions that could run on-demand, at scale, to accomplish these tasks.

Complete GRAMMYconnect solution architecture

Beyond the GRAMMYs

The GRAMMYconnect solution brings together a unique combination of IBM Cloud and Watson services to give fans an engaging and appealing experience, driving interest in both the artists and the GRAMMYs. The capabilities used in the solution – including deep understanding of natural language from a huge set of content to identify entities and relationships, performing complex scoring and ranking at scale, and managing interactive, responsive interfaces – are widely applicable across use cases beyond the music industry. Building connections can help uncover cyber threats, proactively identify support issues, or guide maintenance and manufacturing.

Learn more about how Watson is being used across industries, and find out how to get started with Watson Discovery.

To find connections between your favorite artists, check out GRAMMYconnect yourself!

Anish Mathur

How Multimedia5 uses Watson to make video creation simple and smart

Everyone has the same problem: How do you create high quality, brand-appropriate videos? And how do you do it at the right price point, fast enough, and at scale?

Multimedia5™, a Tampa Bay area, Florida-based tech start-up whose mission is to make video creation simple and smart, has developed a first-of-its-kind cognitive-driven video creation platform that integrates with IBM® Watson® Natural Language Understanding to enhance and scale the user experience. The platform enables journalists, publishers, media agencies, marketers, small businesses, teachers, students, and anybody without prior video-making experience to transform web content and unique ideas into a video, at an affordable price point and in the shortest amount of time.

This is not your traditional video creation system. It is a cognitive video creation technology that blends artificial intelligence, art, and the human element, creating a new paradigm in which video content and creativity are generated by artists, data, and machine learning.

“We are solving a big problem in the industry because the hard part of doing video at scale is very expensive and it involves a lot of steps.” – Marwan Nussairat, Founder & CEO, Multimedia5 and Karthik Balu, Head of Software Engineering, Multimedia5

The company’s cognitive video creation platform understands the video creation process in the way human video producers do.

“We are not attempting to replicate human intelligence or replace human beings. We are using Watson Natural Language Understanding technology to automate the video production process, reduce production resources, and enhance/scale the user experience.” – Marwan Nussairat

How it works

Multimedia5’s platform uses the power of IBM Watson Natural Language Understanding to give its users the most advanced intelligent content analysis and media mapping search. Users can enter a keyword, add scripts, or enter any public URL and the platform can determine important keywords ranked by relevance, as well as perform analysis, understand text, and identify general concepts that might not have been directly referenced in the text.

The platform extracts insights from the content such as concepts, entities, keywords, categories, relations, and semantic roles. NLU’s capabilities help automate video creation by producing a rough-cut storyboard and suggesting the most relevant image and video footage to use with the video script.

Using keywords and concept analysis from NLU, the Multimedia5 platform automatically pulls the most relevant licensed media content from a built-in media library to help users create video content in a matter of minutes. The Multimedia5 platform can also analyze the overall emotion of the content and suggest relevant music tracks for users to use in the video making process.
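
A hedged sketch of the kind of Natural Language Understanding call described above: content goes in (here by public URL), and concepts, entities, keywords, and emotion come out. The credentials, URL, and feature limits are placeholders.

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, ConceptsOptions, EntitiesOptions, KeywordsOptions, EmotionOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version='2019-07-12',
    authenticator=IAMAuthenticator('YOUR_API_KEY'))
nlu.set_service_url('https://api.us-south.natural-language-understanding.watson.cloud.ibm.com')

# Analyze a public web page and pull out the insights used to build a storyboard.
analysis = nlu.analyze(
    url='https://example.com/article-to-turn-into-a-video',
    features=Features(
        concepts=ConceptsOptions(limit=5),
        entities=EntitiesOptions(limit=5),
        keywords=KeywordsOptions(limit=10),
        emotion=EmotionOptions())
).get_result()
```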

Multimedia5 with IBM Watson gives its partners the most advanced artificial intelligence story building blocks to make meaningful story-driven connections with people and solve common business and marketing issues such as:

  • Brand awareness
  • Audience engagement
  • Traffic and website conversion

The company’s technology platform is the most efficient and effective solution to develop creative, smart video content that engages users and optimizes content for maximum social interaction and learning. With Multimedia5’s video creation platform, even people who aren’t professional video creators can quickly produce high-quality, brand-appropriate videos at scale and at the right price point.

Marwan Nussairat

Create an AI feedback loop with Continuous Relevancy Training in Watson Discovery

Artificial intelligence (AI) systems thrive on data, and usually the best source of this data is feedback from users interacting with your applications. Creating a virtuous feedback loop from your application to the AI services enabling it can help the system improve automatically without significant investment. Watson Discovery helps create this feedback loop through a set of new logging and events APIs, metrics generation, and Continuous Relevancy Training. With these capabilities, you can use Discovery to instrument and collect data from your application to see how users are interacting, including the queries they run and the results they select. Discovery then automatically uses this data to generate usage metrics and to perform Relevancy Training that improves the order of results returned.

It all starts with collecting data. Discovery enables this data collection today in two ways.

  1. Automated data collection of queries and associated results. Any queries run against a Discovery environment get logged (unless opted out) and can be viewed from the /logs endpoint. These queries are valuable for analyzing user behavior and understanding opportunities to improve content or configuration.

  2. Collection of interaction data through the Events API. Discovery provides an API that can be embedded in your application to track events that serve as signals for understanding usage and improving the service. The Events API currently supports tracking click events. Click events are usually associated with an action where a user selects a result from the list returned by Discovery, for example, selecting to see the full document text or clicking to expand a section of a document. These click events provide an indication of results that users believe to have some relevance, with the order of clicks also providing insight into relevance (see the sketch after this list).
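
Here is a hedged sketch of both mechanisms with the Discovery V1 Python SDK: reading the query log and submitting a click event. The IDs, session token, and service URL are placeholders, and parameter shapes can vary by SDK version.

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(version='2019-04-30',
                        authenticator=IAMAuthenticator('YOUR_API_KEY'))
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

# 1. Inspect recently logged queries from the /logs endpoint.
logs = discovery.query_log(count=10).get_result()

# 2. Record that a user clicked a result returned for a logged query.
discovery.create_event(
    type='click',
    data={
        'environment_id': 'YOUR_ENVIRONMENT_ID',
        'collection_id': 'YOUR_COLLECTION_ID',
        'document_id': 'CLICKED_DOCUMENT_ID',
        'session_token': 'SESSION_TOKEN_FROM_THE_QUERY_RESPONSE'
    }
).get_result()
```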

Taken together, this data helps provide a view into user behavior. These data points can then be used to generate metrics like those available through the Analytics tab in Discovery today. These metrics are an important way to track the performance of your application over time. They include:

  • Total queries run: See the volume of queries users are running over time to understand adoption and trends

  • Total queries with an associated event: Similar to a click-through rate, this can be an indicator of quality of results being returned and provide information on how the results might be improving

  • List of common query words: Used to see what kinds of queries users are doing so that you can focus efforts on improving content or training

Discovery also can use the feedback data collected to perform Continuous Relevancy Training to automatically improve results for certain types of queries. Continuous Relevancy Training learns from the collected data to rerank document results in the best possible order.

Continuous Relevancy Training uses state-of-the-art machine learning techniques to learn from fewer interactions. This means that organizations of different sizes can use continuous learning without needing operations at global web scale. It can be used across enterprise use cases like customer service agent assist, giving agents more relevant results from huge amounts of documentation to help them resolve customer issues quickly.

To use Continuous Relevancy Training, your data collection needs to be set up as previously mentioned and must meet the following requirements:

  • Collect at least 1000 queries with an associated click on a result in 30 days

  • Query Discovery at the environment level with the complete set of collections in that environment and track events at this level, too, so there is consistency between training and query

Continuous Relevancy Training combines the query and events logs to create a training set and then produces a model for reranking similar to one that can be created explicitly using Relevancy Training. To make the best use of Continuous Relevancy Training, there are a few considerations to keep in mind:

  • The query logs and events let you collect data on a larger scale, so it is not necessary to review individual events. Discovery can use the volume of data to find signals even if there are some ambiguous interactions. Instead, focus on getting a great data collection mechanism so that the events represent your users’ selections appropriately.

  • Labeling your data can help manage it going forward. For example, you can remove a set of data from test instances or automated processes that might not reflect real production users. Labeling is done through a customer-id header passed with each API request to Discovery.

  • You can test whether the model is improving results by comparing an environment-level query that uses natural_language_query to one that uses just the query parameter (only natural_language_query is supported for training). The boost in accuracy between those two parameters is the benefit the new model gives you (see the sketch after this list).

  • Explicit relevancy training (done through the training_data endpoint) is still valuable in cases where you are only querying a single collection. This form of training can be kept more curated while the continuous training can improve with behavior over time. Currently, data collected through feedback is not combined with training data explicitly provided.
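
Here is a hedged sketch of that comparison with the Discovery V1 Python SDK, querying at the environment level across all collections; the IDs are placeholders, and the exact form of the collection_ids parameter varies by SDK version.

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(version='2019-04-30',
                        authenticator=IAMAuthenticator('YOUR_API_KEY'))
discovery.set_service_url('https://api.us-south.discovery.watson.cloud.ibm.com')

question = 'how do I reset my password'
collections = 'COLLECTION_ID_1,COLLECTION_ID_2'  # every collection in the environment

# natural_language_query benefits from Continuous Relevancy Training.
trained = discovery.federated_query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_ids=collections,
    natural_language_query=question).get_result()

# Baseline: the plain query parameter is not reranked by the trained model.
baseline = discovery.federated_query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_ids=collections,
    query=question).get_result()

# Compare the top results of each response to gauge the model's lift.
```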

Continuous Relevancy Training automates the feedback loop from your application. Interactions can automatically feed back to Discovery and be consumed for training. This provides a valuable improvement in relevance without as much investment in large, ongoing manual training. Get started creating your improvement loop today with Watson Discovery!

Note: Logging, Metrics, and Continuous Relevancy Training are available in Watson Discovery for Salesforce today. Continuous Relevancy Training is available only for Advanced Plan instances with size S or greater.

Anish Mathur