This post is co-authored by Steve Martinelli and Sridhar Sudarsan

We’re excited to announce that the Watson Visual Recognition service will now support Core ML. This is the latest addition to the Apple and IBM partnership that was forged nearly four years ago. The announcement is taking center stage at Think 2018, showcasing the value this new functionality brings to iOS developers worldwide. This post introduces you to Core ML and Watson Visual Recognition, and explains how to use the various assets available to you.


Core ML

Core ML, first released in iOS 11, is an Apple framework for running machine learning models locally on iOS devices, so inference works even when the device is offline. Existing models written in Caffe, Keras, scikit-learn, and other frameworks can be converted to the Core ML format.
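Because the model lives on the device, a common pattern (and the one the code pattern below follows) is to classify with the local Core ML model when one is available and fall back to the cloud service otherwise. Here's a minimal sketch of that decision in plain Swift; the `Classifier` protocol and type names are illustrative, not part of any real SDK:

```swift
import Foundation

// Hypothetical sketch of local-first classification; these names are
// illustrative and not part of the Watson SDK or Core ML.
protocol Classifier {
    func classify(_ imageData: Data) -> [String]
}

struct LocalCoreMLClassifier: Classifier {
    // Stands in for an on-device Core ML model; works offline.
    func classify(_ imageData: Data) -> [String] { ["local-label"] }
}

struct RemoteClassifier: Classifier {
    // Stands in for a call to the cloud service; requires connectivity.
    func classify(_ imageData: Data) -> [String] { ["remote-label"] }
}

// Prefer the on-device model when one has been downloaded;
// otherwise fall back to the service.
func pickClassifier(localModelAvailable: Bool) -> Classifier {
    localModelAvailable ? LocalCoreMLClassifier() : RemoteClassifier()
}
```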


Watson Visual Recognition

Watson Visual Recognition is a service on IBM Cloud that uses machine learning to quickly and accurately tag and classify visual content. It has built-in classifiers for objects such as faces and food. To get familiar with these classifiers, you can try the online demo. In addition, there's support for custom models: to create one, you upload several images to the service for training, and once the custom model is trained and tested, you can use it in API calls.
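Classification results come back as class names with confidence scores, and you typically keep only the classes above a threshold you choose. A minimal sketch of that filtering step, using hypothetical types loosely modeled on a classifier response (not the Watson SDK's actual types):

```swift
import Foundation

// Hypothetical shape of a single classification result; illustrative only.
struct ClassResult {
    let className: String
    let score: Double   // confidence in the range [0, 1]
}

// Keep only classes whose confidence meets the caller's threshold,
// ordered from most to least confident.
func topClasses(_ results: [ClassResult], threshold: Double) -> [ClassResult] {
    results.filter { $0.score >= threshold }
           .sorted { $0.score > $1.score }
}
```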

Developers, get hands-on experience

We’ve created a code pattern, Deploy a Core ML model with Watson Visual Recognition, that will help you dive into this new functionality. It includes an overview of the code, an architecture diagram and steps, a demo video, and related links, all in one spot. (There are also plenty of other code patterns to check out on IBM Code.)

The folks who create the various Watson SDKs are really smart. They extended the existing Watson Swift SDK so that taking advantage of the Core ML features in Watson Visual Recognition is a breeze. They've created a GitHub repo to walk users through using the SDK for this scenario.

You can start making use of the Watson Swift SDK Core ML features in just a few lines of code. Here’s an example of how to classify an image with a local model:

visualRecognition.classifyWithLocalModel(image: image, classifierIDs: [classifierId], threshold: localThreshold, failure: failure, success: success)
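The call above follows the SDK's failure/success closure style: one closure receives an error, the other receives the classification results. Here's a minimal, self-contained sketch of that callback shape in plain Swift; `classifyStub` and `ClassificationResult` are hypothetical stand-ins, not the SDK's actual API:

```swift
import Foundation

// Hypothetical result type standing in for the SDK's classification response.
struct ClassificationResult {
    let labels: [String]
}

// A stub written in the same failure/success closure style as the SDK call:
// exactly one of the two closures is invoked.
func classifyStub(imageData: Data,
                  failure: (Error) -> Void,
                  success: (ClassificationResult) -> Void) {
    guard !imageData.isEmpty else {
        failure(NSError(domain: "Classify", code: 400))
        return
    }
    success(ClassificationResult(labels: ["example-label"]))
}
```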

and here’s an example of how to update a local model:

func invokeModelUpdate() {
    let failure = { (error: Error) in
        let descriptError = error as NSError
        DispatchQueue.main.async {
            self.currentModelLabel.text = descriptError.code == 401 ? "Error updating model: Invalid Credentials" : "Error updating model"
            SwiftSpinner.hide()
        }
    }

    let success = {
        DispatchQueue.main.async {
            self.currentModelLabel.text = "Current Model: \(self.classifierId)"
            SwiftSpinner.hide()
        }
    }

    visualRecognition.updateLocalModel(classifierID: classifierId, failure: failure, success: success)
}

Build a production-ready iOS app

At Think 2018, IBM is also announcing a new IBM Cloud Developer Console for Apple. Within the console are several iOS Starter Kits, which are designed to be production-ready in minutes. The Custom Vision Model for Core ML with Watson starter kit is designed to get you up and running fast!

Should I use it in my application?

Yes, it’s a no-brainer! Core ML is optimized for iOS devices, and Watson Visual Recognition provides a way to create custom models from the image data you provide. Together, they’ll quickly have your app using images more intelligently.


