IBM Developer Blog


New starter kits make it easier to get your machine learning apps to production in a cloud-native environment


Artificial intelligence and machine learning have sparked innovations that most of us use daily — from cognitive chatbots to product recommendations in our social media feeds to automated language translations and more. Integrating AI and machine learning technologies with cloud-native environments is an increasingly common scenario, driven by the use of microservices and the need to scale rapidly. Developers face the challenge of not only building machine learning applications, but also ensuring that they run well in production in cloud-native and hybrid cloud environments.

Today, IBM is announcing new machine learning end-to-end pipeline starter kits to help developers build machine learning applications and deploy them easily and reliably in a cloud-native environment. The starter kits are part of the IBM Cloud-Native Toolkit, an open source collection of assets that provides an environment for developing cloud-native applications for deployment within Red Hat OpenShift and Kubernetes. Assets created with the Cloud-Native Toolkit can be deployed in any cloud or hybrid cloud environment.

These starter kits offer an excellent starting point for operationalizing and industrializing AI-powered applications and making them production-ready, using open source and Red Hat OpenShift technologies. The starter kits speed up development, deployment, and innovation with a set of opinionated approaches and tools.

Challenges to enabling machine learning applications to run in production

Moving an application from a Jupyter notebook to a production environment requires numerous components to work together. These components cover a wide range of tasks that developers and administrators have to manage, including microservices frameworks, code analysis support, monitoring and logging support, continuous integration, secure access to service credentials, UI components, DevOps pipelines, Kubernetes YAML files, API access management for other business logic components, and so on.
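To make the Kubernetes YAML piece concrete, here is a minimal sketch of the kind of Deployment and Service manifest a model microservice needs to run in a cluster. The image name, port, and `/health` path below are placeholders, not values produced by the toolkit:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: object-detector
  labels:
    app: object-detector
spec:
  replicas: 2
  selector:
    matchLabels:
      app: object-detector
  template:
    metadata:
      labels:
        app: object-detector
    spec:
      containers:
        - name: model
          # Placeholder image reference; a real pipeline builds and pushes this
          image: image-registry.example.com/ml/object-detector:1.0
          ports:
            - containerPort: 5000
          readinessProbe:
            # Health check endpoint the platform polls before routing traffic
            httpGet:
              path: /health
              port: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: object-detector
spec:
  selector:
    app: object-detector
  ports:
    - port: 80
      targetPort: 5000
```

The starter kits generate and manage manifests like these for you, so you do not have to hand-write them for every model.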

Data scientists and developers can now use the toolkit to quickly get started. It enables you to:

  • Create your model as a microservice using MAX Framework and MAX Skeleton
  • Build and deploy on Red Hat OpenShift with support for continuous integration (using Jenkins and Tekton CI), continuous delivery (using Argo CD), code analysis (using SonarQube), logging (using LogDNA and Sysdig), API support, and health checks.
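The "model as a microservice" idea above can be sketched as a small HTTP service with a `/model/predict` endpoint. This is an illustrative stand-in, not the actual MAX Framework or MAX Skeleton API; the endpoint path and response shape are assumptions modeled loosely on MAX-style model services:

```python
# Hedged sketch of a model-serving microservice (NOT the real MAX Framework API).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(payload):
    """Placeholder inference: a real service would invoke the trained model here."""
    text = payload.get("text", "")
    predictions = [{"label": "person", "probability": 0.97}] if text else []
    return {"status": "ok", "predictions": predictions}


class PredictHandler(BaseHTTPRequestHandler):
    """Routes POST /model/predict to the predict() function."""

    def do_POST(self):
        if self.path != "/model/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve locally:
#   HTTPServer(("", 5000), PredictHandler).serve_forever()
```

Packaging the model behind a single prediction endpoint like this is what lets the rest of the pipeline (CI, health checks, routing) treat it as an ordinary microservice.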

How the Cloud-Native Toolkit enables open source integration

The Cloud-Native Toolkit, created by the IBM Garage, provides a set of accelerators for applying end-to-end open source patterns, including GitOps, to any code pattern, enabling developers, administrators, and site reliability engineers to support the delivery of business applications through the entire software development life cycle (SDLC).
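The GitOps pattern mentioned here can be illustrated with an Argo CD `Application` manifest, which tells Argo CD to keep a cluster namespace in sync with a Git repository. The repository URL, path, and namespaces below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: object-detector
  namespace: argocd
spec:
  project: default
  source:
    # Placeholder Git repository holding the desired cluster state
    repoURL: https://github.com/example-org/gitops-repo
    targetRevision: main
    path: apps/object-detector
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual drift back to the Git state
```

With this in place, promoting a model to production becomes a Git commit rather than a manual deployment step.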

The Toolkit incorporates best practices that increase a developer’s ability to deliver business value, aiming to:

  • Accelerate time to business value
  • Reduce risk through consistent delivery of the models from start to production (for the entire workflow)
  • Quickly ramp up development teams on Red Hat OpenShift and Kubernetes

The following image shows the Cloud-Native Toolkit environment components. The environment consists of a Red Hat OpenShift or Kubernetes deployment cluster, a collection of continuous delivery tools deployed into the cluster, and a set of back-end services.

Figure 1. Cloud-Native Toolkit components

How can you use the AI toolkit?

If you have trained model files with inference code and have completed your application code, it takes only a few minutes to build a production-ready solution using open source tools, the Cloud-Native Toolkit, and Red Hat OpenShift, which can run on any cloud.

Follow these steps:

  1. Set up the environment and create the pipeline for the application as outlined here: https://cloudnativetoolkit.dev/resources/workshop/ai/
  2. To verify the pipeline, open the OpenShift web console and select `Pipelines`:

    Figure 2. OpenShift web console pipeline view

  3. From the deployed pipeline, you can:

    • Access the object detector application
    • View the code analysis report
    • Access the artifact repository and container image registry
    • View the health report of the application
  4. To access the deployed application, select the Developer perspective in the OpenShift console, select your project, then select Topology and verify that the application is running. The deployed application will look like this:

Figure 3. An example object detector application after deployment, with detected objects ("person", "teddy bear") outlined and labeled
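Beyond the console UI, you can also exercise the deployed model over HTTP. The following sketch posts an image to the service; the route host and the `/model/predict` endpoint shape are assumptions modeled on MAX-style prediction APIs, not values taken from the workshop:

```python
# Hedged sketch: calling a deployed object-detector over its OpenShift route.
import io
import json
import urllib.request
import uuid


def build_multipart(field_name, filename, data):
    """Build a multipart/form-data request body and its Content-Type header."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(b"Content-Type: application/octet-stream\r\n\r\n")
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"


def detect_objects(route_host, image_bytes):
    """POST an image to the model's predict endpoint and return the parsed JSON."""
    body, content_type = build_multipart("image", "sample.jpg", image_bytes)
    req = urllib.request.Request(
        f"http://{route_host}/model/predict",
        data=body,
        headers={"Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (assuming an OpenShift route host for the deployed app):
#   with open("sample.jpg", "rb") as f:
#       print(detect_objects("object-detector-dev.apps.example.com", f.read()))
```

Calling the service this way is useful for smoke-testing the route after the pipeline finishes, without going through the UI.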

Resources

  • IBM Cloud Native Toolkit Workshop
  • MAX: A place for developers to find and use free, open source, deployable, and trainable machine learning and deep learning models.
  • DAX: Explore useful and relevant data sets for enterprise data science.

Let’s chat

Please feel free to contact us if you have any questions about the material or the hands-on lab: