
Build and deploy an IBM Maximo Visual Inspection model and use it in an iOS app

This tutorial is part of the Getting started with IBM Maximo Visual Inspection learning path.

Level Topic Type
100 Introduction to computer vision Article
101 Introduction to IBM Maximo Visual Inspection Article
201 Build and deploy an IBM Maximo Visual Inspection model and use it in an iOS app Tutorial
202 Locate and count items with object detection Code pattern
203 Object tracking in video with OpenCV and Deep Learning Code pattern
301 Validate computer vision deep learning models Code pattern
302 Develop analytical dashboards for AI projects with IBM Maximo Visual Inspection Code pattern
303 Automate visual recognition model training Code pattern
304 Load IBM Maximo Visual Inspection inference results in a dashboard Code pattern
305 Build an object detection model to identify license plates from images of cars Code pattern
306 Glean insights with AI on live camera streams and videos Code pattern

Introduction

AI is everywhere! If you were to have a 10-minute conversation with anyone about AI or machine learning, you would no doubt uncover hundreds of use cases, all of which would transform our way of working or make a difference to everyday lives.

But how do you transform that conversation into reality? The world of machine learning can be a daunting place. There are so many resources available that it can be difficult to know where to start. How easy is it to go from that initial “wouldn’t it be cool if” to “look, we did it”? The answer is VERY.

In fact, it’s this accessibility and speed of deployment that have been a focus for IBM for some time. Putting data science back in the hands of subject matter experts is essential if machine learning and AI technology are to become ubiquitous. IBM Maximo Visual Inspection is a software product that aims to do exactly that. This easy-to-use tool takes advantage of the PowerAI platform and IBM Power Systems to make the task of managing, training, and deploying machine learning models easy. Learn how in this tutorial.

Learning objectives

In this tutorial, learn how to:

  1. Create a data set
  2. Train a model for image classification
  3. Deploy it to a web API
  4. Integrate the API into an iOS app

Prerequisites

This tutorial assumes that you have access to Maximo Visual Inspection. Download and install the Technology Preview. Also, because you’re building an iOS app, you need a Mac with the latest version of Xcode installed.

Estimated time

It should take approximately an hour to walk through this tutorial.

Steps

Creating a data set

First, upload your training data to Maximo Visual Inspection (the screenshots and URLs in this tutorial still use the product’s earlier AI Vision and PowerAI Vision branding). This data set determines what your AI understands: which problem you’re trying to solve and which categories you want to be able to classify images into.

For this example, you’ll use a data set of bird images from Kaggle; download it from the Data tab on the Kaggle page. However, you can also use your own data set.

Select My Data Sets in the upper left of AI Vision.

Data sets

Click Add Dataset, and then For Image Classification.

Adding dataset

Name your data set and select the most appropriate category from the list. If none of the categories seem to fit your use case, then select Other. When you’re finished, click Add Dataset.

Classification options

Now you’re taken to the data set management page, which lets you add, edit, and manage the images in the data set. Select + Add Category and enter the name for that category. In this tutorial, we choose to classify different types of birds, so the first category is Larus. Finish by clicking Add Category.

Add category

Next, you need to add your training images for the category, so upload your local files.

Pictures

Repeat these steps for all categories that you intend to support. That’s it! You’ve built the training data set.

All categories

Train a model for image classification

The next step in the process is building the model. Don’t worry if you don’t have a data science background or if you’ve never built a model before. Maximo Visual Inspection is going to take care of everything for you. Select My DL Tasks from the menu in the upper left, then select Create New Task. In this example, you’re building a model for image classification, so on the next screen select Classification.

Service list

Select the training data you created earlier to form the basis of the model. Make sure that the data set appears in the Select Dataset field and that the Training Strategy is Precise First. Name the model, and click Build Model.

Classification

Maximo Visual Inspection processes the training images and builds a model of the data. You’ll be able to see the training progress and accuracy improve on the following screen.

Progress

Deploy to a Web API

After the training process is complete, go to the My Trained Models tab to view the finished product.

Deploy

From here you can view the model’s accuracy, which gives you a sense of how well it will perform on new images once it’s in your iOS app. In this example, the accuracy is 76%, which is pretty good, so let’s deploy it. Select Deploy, then Deploy API to confirm your choice.

Confirm

That’s it! You’ve finished building your model and deployed it to the web where it is accessible through an API, all without writing a single line of code! Now let’s integrate it into an iOS app so that you can put it in the hands of your users.

Integrate the API into an iOS App

We’re going to wrap up this tutorial by calling the API from an iOS app. A sample GitHub repo is provided as a starting point, so only a small amount of code is needed.

Calling the API

The following code shows the main function used to classify images. It builds a POST request to the API URL that Maximo Visual Inspection provided in the previous step and attaches the local image from the camera as a multipart payload.

func classifyImage(image: UIImage) {
    // URL for your AI Vision instance and deployed model
    let urlString = "AI Vision API URL"

    // Set up the HTTP request object
    var request = URLRequest(url: URL(string: urlString)!)
    request.httpMethod = "POST"
    let boundary = "Boundary-\(UUID().uuidString)"
    request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
    request.setValue("gzip, deflate", forHTTPHeaderField: "Accept-Encoding")

    // Encode the image as PNG data (pngData() replaces the deprecated UIImagePNGRepresentation)
    let imageData = image.pngData()!
    let fileName = "upload.png"
    // photoDataToFormData is a helper in the sample repo that wraps the image
    // bytes in a multipart/form-data body using the boundary defined above
    let fullData = photoDataToFormData(data: imageData, boundary: boundary, fileName: fileName)

    request.setValue(String(fullData.count), forHTTPHeaderField: "Content-Length")
    request.httpBody = fullData
    request.httpShouldHandleCookies = false
    // The rest of the function, which sends the request and handles the response,
    // is omitted here; see ClassificationViewController.swift in the sample repo.
The format of the URL depends on the version of AI Vision that you’re running. There’s a full API reference available, but as a guide, mine looked like this:

http://reallyawesomepoweraitutorial.com:9080/powerai-vision/api/dlapis/API_CODE_FROM_AI_VISION

Here, reallyawesomepoweraitutorial.com is your PowerAI host and API_CODE_FROM_AI_VISION is the unique ID of your deployed API, something along the lines of 1d361a45-ebde-44c5-b086-65b3a8b32e14.
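
The last step, which the snippet above stops short of, is actually sending the request and reading the result. The following is a rough, self-contained sketch of that step; the sendClassificationRequest name and the classified response key are assumptions for illustration, and the sample repo’s actual handling may differ.

import Foundation

// Hypothetical helper: sends the multipart request built in classifyImage(image:)
// and prints the classification result. The "classified" key is an assumption
// about the JSON response shape; check the API reference for your release.
func sendClassificationRequest(_ request: URLRequest) {
    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any] else {
            print("Classification request failed: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // A classification response contains the predicted labels and confidence
        // scores (assumed here to live under the "classified" key).
        print(json["classified"] ?? json)
    }.resume()
}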

More info on the reference app

As mentioned earlier, we provide a sample GitHub repo that you can use. This simple app lets you upload an image from the camera or camera roll and classify it against the AI that you just built. The file that’s of most interest is ClassificationViewController.swift, which contains the call to the API referenced above. You can also view and edit the app layout in the Main.storyboard file. This is what the app would look like in Xcode.

Template
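
If you’re building your own app rather than starting from the repo, the image to classify typically comes from a UIImagePickerController. Here’s a rough sketch of that flow, assuming the picked image is handed straight to the classifyImage(image:) function shown earlier; the class and action names are illustrative and not taken from the repo.

import UIKit

// Sketch of the standard UIImagePickerController flow for getting an image from
// the camera roll and handing it to classifyImage(image:). The sample repo's
// ClassificationViewController may wire this up differently.
class PickerSketchViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBAction func choosePhotoTapped(_ sender: Any) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary   // use .camera on a physical device
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        if let image = info[.originalImage] as? UIImage {
            classifyImage(image: image)     // the function shown earlier
        }
    }

    // Placeholder so this sketch compiles on its own; in the app this is the
    // classifyImage(image:) function shown earlier in the tutorial.
    func classifyImage(image: UIImage) {}
}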

Running the simulator

Now let’s test that everything is working as expected. Select Play at the upper left of Xcode to run your app in the iOS Simulator.

Xcode

After it launches, choose an image from the camera roll to test the app and your AI.

App

Summary

That’s it! You’ve built a custom image classification model, deployed it to a Web API, and integrated it into an iOS app with only a bit of code. We encourage you to continue to iterate on the provided GitHub repo and create your own interesting iOS app. This tutorial is part of the Getting started with IBM Maximo Visual Inspection learning path, which helps you quickly get up to speed on what Maximo Visual Inspection offers and how to use it. To continue, look at the next code pattern, Locate and count items with object detection.