Deploy a Core ML model with Watson Visual Recognition

Summary

With Core ML, developers can integrate a trained machine learning model into an application. Watson Visual Recognition now supports Core ML models. This code pattern shows you how to create a Core ML model using Watson Visual Recognition and then deploy it into an iOS application.

Description

Imagine that you’re a technician for an aircraft company and you want to identify one of the thousands of parts in front of you. Perhaps you don’t even have internet connectivity. So how do you do it? Where do you start? If only there were an app for that. Well, now you can build one!

Most visual recognition offerings rely on API calls made to a server over HTTP. With Core ML, you can deploy a trained model with your app instead. Using Watson Visual Recognition, you can train that model without any code: simply upload your images with the Watson Studio tool, and then deploy the resulting Core ML model to your iOS application.

In this code pattern, you’ll train a custom model. With just a few clicks, you can test and export that model to be used in your iOS application. The pattern includes an example data set to help you build an application that can detect different types of cables (for example, HDMI and USB), but you can also use your own data.

When you have completed this code pattern, you will know how to:

  • Create a data set with Watson Studio
  • Train a Watson Visual Recognition classifier based on the data set
  • Deploy the classifier as a Core ML model to an iOS application
  • Use the Watson Swift SDK to download, manage, and execute the trained model

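The last step in the list above — using the Watson Swift SDK on the device — looks roughly like the sketch below. Treat it as a hedged illustration, not the pattern’s exact source: the class and method names (`VisualRecognition`, `updateLocalModel`, `classifyWithLocalModel`) follow the Watson Swift SDK of this pattern’s era, and the API key, version date, classifier ID, and image name are all placeholders. Check the README and the SDK documentation for the exact signatures in the version you install.

```swift
import UIKit
import VisualRecognitionV3  // Watson Swift SDK module, added via Carthage or CocoaPods

// Placeholder credentials -- substitute the values from your own service instance.
let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: "your-api-key")
let classifierID = "your-classifier-id"

// 1. Download (or refresh) the trained Core ML model onto the device.
//    After this completes, classification works with no network connection.
visualRecognition.updateLocalModel(classifierID: classifierID) { _, error in
    if let error = error {
        print("Model update failed: \(error)")
    }
}

// 2. Classify an image entirely on-device using the local Core ML model.
if let image = UIImage(named: "usb-cable") {
    visualRecognition.classifyWithLocalModel(
        image: image,
        classifierIDs: [classifierID],
        threshold: 0.5
    ) { classifiedImages, _ in
        // Walk the response down to the per-class scores.
        if let classes = classifiedImages?.images.first?.classifiers.first?.classes {
            for result in classes {
                print(result.className, result.score ?? 0)
            }
        }
    }
}
```

The key design point is that `updateLocalModel` only needs connectivity once per model revision; every subsequent call to `classifyWithLocalModel` runs against the cached Core ML model on the device.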
This pattern will get you started with Core ML and Watson Visual Recognition. And when you’re ready to deploy something in production? Try the IBM Cloud Developer Console for Apple to quickly create production-ready applications with Core ML.

Flow

(Architecture flow diagram)

  1. Import and tag images.
  2. Train, test, and deploy a Watson Visual Recognition model for Core ML.
  3. Run the application to classify an image using the Core ML model on the device.
  4. Get feedback from the user/device for iterative training in Watson.
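Step 4’s feedback loop hinges on a confidence check: when no class clears a score threshold, the image is a good candidate to show the user for confirmation and to feed back into Watson for iterative retraining. The sketch below illustrates that check in plain Swift; the `ClassResult` type, the helper name, and the 0.5 default threshold are assumptions for illustration, not part of the Watson SDK.

```swift
import Foundation

/// One class score, as an image classifier might return it.
struct ClassResult {
    let label: String
    let score: Double
}

/// Return the highest-scoring class that clears the threshold, or nil
/// when nothing does. A nil result is the trigger for asking the user
/// to label the image so it can be fed back into Watson for retraining.
func bestClass(from results: [ClassResult], threshold: Double = 0.5) -> ClassResult? {
    return results
        .filter { $0.score >= threshold }
        .max { $0.score < $1.score }
}
```

For example, scores of 0.91 for `hdmi_male` and 0.42 for `usb_male` yield `hdmi_male` at the default threshold, but no answer at a 0.95 threshold, which would route the image into the feedback path.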

Instructions

Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.