
Build robust machine learning-based solutions

In 2013, IBM® recognized data as the next great natural resource. Shortly after that, IBM CEO Ginni Rometty noted that companies that make billion-dollar decisions based on gut instinct, instead of deriving insights from predictive modeling of data, are essentially setting themselves up for failure.

‘What is Machine Learning?’ is a question that is often asked. Now more than ever, organizations are seeing that integrating machine learning-based solutions can help them stay ahead of the game. A Gartner report released in 2017 predicted that AI technologies would be in almost every new software product by 2020. However, even though they understand the importance of embracing machine learning-based solutions, organizations are struggling to make this leap. Fast forward to 2019, and Gartner reports that only 37% of organizations have adopted AI in some form.

While every organization is bound to have unique setbacks, it’s been found that most of these problems are common across the board. In this article, I discuss the most crucial problems that are faced by organizations and developers on their journey into AI and suggest a few ways to mitigate them.

Dealing with unstructured data

Data is the backbone for building a machine learning model with high accuracy. But it comes with its own set of challenges. Often, you see that legacy applications never had a requirement to store extensive historical data, which leads to sparse or under-represented data. Also, the data that is available to train and test the model is often stored in multiple sources, and gathering it from these sources can be cumbersome. To address these issues, several data-gathering tools and techniques are applied during the data preprocessing phase.
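As a minimal sketch of what gathering data from multiple sources can look like, the following example joins two hypothetical data sets (the `crm` and `billing` tables and their column names are assumptions for illustration, not part of any specific product) and fills in under-represented fields with an explicit default during preprocessing:

```python
import pandas as pd

# Hypothetical example: customer records split across two source systems.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name": ["Ada", "Grace", "Alan"],
})
billing = pd.DataFrame({
    "customer_id": [2, 3, 4],
    "total_spend": [120.0, 75.5, 300.0],
})

# Combine the sources on a shared key; a left join keeps every CRM record
# and marks customers with no billing history as missing values.
combined = crm.merge(billing, on="customer_id", how="left")

# Fill the under-represented field with an explicit default before training.
combined["total_spend"] = combined["total_spend"].fillna(0.0)
```

In practice this step is repeated for each source and use case; the point is that a deliberate merge-and-fill pass during preprocessing turns scattered records into a single training-ready table.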

The available data is more often represented in unstructured formats such as emails and notes. This data needs to be labeled in a uniform manner for machine learning algorithms to recognize during model training. After these data sets have been labeled, several supervised machine learning algorithms can be applied to them. However, there are also a few unsupervised techniques, such as clustering, that can be applied to group data sets that are not labeled.
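To make the unsupervised route concrete, here is a small sketch of clustering with scikit-learn. The toy feature matrix is an assumption for illustration; no labels are supplied, and the algorithm groups similar points on its own:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical unlabeled data: two visually obvious groups in 2-D space.
X = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.1, 0.9],   # group near (1, 1)
    [8.0, 8.2], [7.9, 8.1], [8.1, 7.9],   # group near (8, 8)
])

# KMeans assigns each point to one of two clusters without any labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
labels = kmeans.labels_
```

Points that sit near each other receive the same cluster label, which can serve as a starting point for labeling otherwise unstructured data.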

New to machine learning? Then, get an introduction and understand the fundamentals.

Data privacy and security are two other restraining factors that must be dealt with when sensitive or personal data is involved. However, there are data governance tools that are available that automatically identify these sensitive fields and can provide many options to mask them. Get an overview on how to govern your data using Watson Knowledge Catalog.
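Watson Knowledge Catalog applies masking through governance policies; as a hedged, tool-agnostic illustration of the underlying idea, the following sketch replaces assumed sensitive fields (the `SENSITIVE_FIELDS` names are hypothetical) with a one-way hash so records can still be joined on but no longer read:

```python
import hashlib

# Hypothetical field names chosen for illustration only.
SENSITIVE_FIELDS = {"email", "ssn"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a truncated one-way hash."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            masked[field] = digest[:12]
        else:
            masked[field] = value
    return masked

record = {"name": "Ada", "email": "ada@example.com"}
masked = mask_record(record)
```

Hashing (rather than deleting) the field preserves its usefulness as a join key across data sets while keeping the raw value out of the training data.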

Acquire the skills

Within the AI realm, there are several personas that are required to build and manage an AI lifecycle. Data steward, data engineer, data analyst, and data scientist are just a few. Organizations tend to generalize these personas, which can often lead to a lack of a well-rounded team that is essential to success.

Building a predictive model requires extensive knowledge of several complex machine learning algorithms. Python and R are among the popular languages that offer robust libraries for building machine learning-based solutions. In spite of the high demand for machine learning experts around the world, there are not enough people with the needed skill sets.

Learn how to build and test your first machine learning model with Python and scikit-learn now.

Simplify the process to save time

Building a predictive model is time consuming. A typical model building lifecycle involves collecting, preparing, and analyzing data, then infusing the resulting insights back into the process, iterating until the desired model performance is achieved. Having limited resources does not help with this situation.

Developers often need help simplifying this AI lifecycle management at scale: preparing data, applying machine learning algorithms, and building the model pipelines best suited to their data sets and use cases in an automated way, which frees them to focus on specific aspects of the pipeline. Fortunately, services such as AutoAI, an automated machine learning (AutoML) tool, let you easily generate multiple pipelines. This series is a deep dive into AutoAI and explains how top-performing models can be found and deployed in minutes.
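The kind of pipeline exploration that AutoAI automates can be sketched, in a very small manual form, with scikit-learn: a single pipeline bundles preprocessing and modeling, and a grid search tries candidate configurations against the same data set (the data set and hyperparameter grid below are illustrative choices, not what AutoAI itself uses):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# One pipeline bundles preprocessing and modeling so that every
# candidate configuration is evaluated the same way.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# A tiny hand-rolled search over one hyperparameter; AutoML tools
# explore algorithms, features, and hyperparameters at a much larger scale.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
best_score = search.best_score_
```

Even this toy version shows why automation matters: each added algorithm or hyperparameter multiplies the number of pipelines to train and compare.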

Scale up as your project expands

Organizations typically begin by experimenting on a pilot project before making decisions about switching to AI-based solutions. After convincing results are obtained during this pilot phase, they start the process of creating a scalable solution. One of the biggest pitfalls organizations face during this transition is the failure to foresee the resource needs of a scalable solution. A sample set of data and less powerful processors such as CPUs suffice while developing the pilot project. But to put these projects into production, they need GPUs, data lakes, cloud-based solutions, and other infrastructure that can exponentially increase the cost estimates.

To mitigate some of the infrastructure-based issues, IBM Watson™ Studio offers a cloud-based solution that lets developers collaborate on end-to-end tasks such as preparing data and building models. Watson Studio offers various services, such as AutoAI and SPSS Modeler, for building models. The Getting started with Watson Studio learning path explores how the various steps involved in building a machine learning solution can be handled using this platform.

Instill trust and reliability

So far, the challenges I’ve discussed have been mostly technical difficulties that developers face in trying to implement machine learning-based solutions. But the most important aspect of adopting newer technologies is the ability to develop trust among users. As we rely on machine learning algorithms to make critical decisions, it’s also important to ensure that the decisions being made are fair and free from any type of bias.

After the machine learning model is trained and deployed, it behaves like a black box: it is hard to see why a particular prediction was made. If there were ways to reverse engineer the model and find explanations for its predictions, it would be more reliable. If a model is found during this process to perform unfairly, the underlying data can be adjusted or the algorithms tuned to improve the model. Also, a few industries mandate including the reasons behind every prediction that is made, and in scenarios like these, model explainability is not optional. Watson OpenScale helps with tracking these outcomes for machine learning models that are built and run anywhere. Learn how to manage production AI with trust and confidence.
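One widely used, tool-agnostic way to peek inside a black-box model is permutation importance; this is a hedged sketch of the general technique using scikit-learn (the data set and model here are illustrative, not what Watson OpenScale uses internally). It asks how much accuracy drops when one feature’s values are shuffled; large drops point at influential features:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ensemble model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
# Features whose shuffling hurts accuracy most are the ones the model
# actually relies on -- a window into an otherwise black-box model.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0
)
importances = result.importances_mean
```

Per-feature importance scores like these are a starting point for checking whether a model leans on sensitive or biased attributes when making its predictions.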

Take the next step

In this article, I’ve covered a lot of ground, from the growing interest of organizations in adopting AI-based solutions in their domains to the major impediments to adoption: data availability, skill sets, limited resources, time-consuming processes, and infrastructure-related issues. Along the way, I suggested Watson Knowledge Catalog, AutoAI, Watson Studio, and Watson OpenScale as possible ways to mitigate some of these issues.

As a next step, you’ll want to investigate some of these areas in more depth, get some hands-on experience with the relevant technologies, and see how we have made strides in packaging and simplifying the adoption of machine learning solutions at the enterprise level. IBM Cloud Pak for Data acts as an all-in-one, cloud-native solution that brings each of these separate offerings together as a package. Start exploring IBM Cloud Pak for Data, a fully integrated data and AI platform, through its machine learning case studies.