by Iain McCown, Ajiemar (Taj) Santiago, Sridhar Sudarsan, Devin Conley, Laksh Krishnamurthy, Glenn Fisher | Published March 13, 2018
With Core ML, developers can integrate a trained machine learning model into an application. Watson Visual Recognition now supports Core ML models. This code pattern shows you how to create a Core ML model using Watson Visual Recognition, which is then deployed into an iOS application.
Imagine that you’re a technician for an aircraft company and you want to identify one of the thousands of parts in front of you. Perhaps you don’t even have internet connectivity. So how do you do it? Where do you start? If only there were an app for that. Well, now you can build one!
Most visual recognition offerings rely on API calls made to a server over HTTP. With Core ML, you can instead deploy a trained model with your app and run it on the device. Using Watson Visual Recognition, you can train a model without writing any code: simply upload your images with the Watson Studio tool, and then deploy a trained Core ML model to your iOS application.
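Once the exported Core ML model is added to your Xcode project, on-device classification can be sketched with Apple's Vision framework roughly as follows. The model class name `ConnectorClassifier` is a hypothetical placeholder for whatever name your exported `.mlmodel` file produces; the label strings depend entirely on the classes you trained.

```swift
import CoreML
import UIKit
import Vision

// Sketch: classify an image with a Core ML model exported from
// Watson Visual Recognition. "ConnectorClassifier" is a placeholder
// for the class Xcode generates from your .mlmodel file.
func classifyCable(in image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: ConnectorClassifier().model) else {
        return
    }
    // Vision handles scaling/cropping the image to the model's input size.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Top prediction, e.g. an "hdmi" or "usb" label with a confidence score.
        print("\(top.identifier): \(top.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Because the model ships inside the app bundle, this classification runs entirely on the device, which is what makes the offline scenario described above possible.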
In this code pattern, you’ll train a custom model. With just a few clicks, you can test and export that model for use in your iOS application. The pattern includes an example data set to help you build an application that can detect different types of cables (for example, HDMI and USB), but you can also use your own data.
When you have completed this code pattern, you will know how to train a custom model with Watson Visual Recognition, export it as a Core ML model, and integrate it into an iOS application.
This pattern will get you started with Core ML and Watson Visual Recognition. And when you’re ready to deploy something in production? Try the IBM Cloud Developer Console for Apple to quickly create production-ready applications with Core ML.
Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.