Sam Couch | Published October 8, 2018
Tags: API Management, Artificial Intelligence, Data Science, Deep Learning, Machine Learning, Object Storage
Watson on the IBM Cloud lets you integrate artificial intelligence (AI) into your applications and store, train, and manage your data in a secure cloud. With Watson Visual Recognition, you can quickly and accurately tag, classify, and train visual content using machine learning.
In this tutorial, you’ll learn how to integrate Watson services in your iOS application and how to use Core ML APIs to enable artificial intelligence features directly on a device.
To use this tutorial, you’ll need:
An IBM Cloud account
A Mac with Xcode installed, to build the iOS app
An Apple ID, to sign the app for deployment to a device
Note: If you don’t want to deploy the model to an iOS device, the only requirement is an IBM Cloud account.
It should take you approximately 30 minutes to complete this tutorial.
To create the model used in this tutorial, training images must be provided to the classifier. To save time, we’ve prepared the data for you.
Each class that you create should have at least 10 images. However, for optimal training time and accuracy, it’s best to have approximately 200 images sized at 224 x 224 pixels. It’s also good practice to use images from an environment comparable to what you expect to classify. For example, because this is an iOS app, photos from your smartphone camera are probably better suited than professional photos or image search results. Also, try to vary the background, lighting, and any other variables you can think of!
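If you take your own photos at a larger size, the sips tool built into macOS can resize them from Terminal. A minimal example, assuming JPEG images in the current folder (adjust the extension to match your files):

    # Resample every JPEG in this folder, in place, to 224 x 224 pixels
    sips -z 224 224 *.jpg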
Download the training image data set and unzip the contents to your desktop.
If you’d like to take your own training photos, feel free to do so. Again, make sure you have at least 10 photos for each classification. Then, prepare your data by classifying each image:
Create an empty folder for each type of cable.
Sort through your images, putting each in the correct folder based on the type of cable.
After all of your images are in the correct folder, archive each folder to create individual .zip files.
Note: On macOS (OS X), right-click a folder and choose Compress to create a .zip file.
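You can also create the archives from Terminal. For example, assuming a class folder named hdmi (substitute your own folder names):

    # Archive the folder into hdmi.zip, preserving its contents
    zip -r hdmi.zip hdmi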
To register your account on Watson Studio:
Note: Select United States as your region.
To register your account with an Apple ID:
After you’ve created an account, navigate to the Starter Kit landing page. We have a large catalog of starter kits that come pre-configured with Watson services; they’re a simple way to start any project! To get started, click Create app.
From here, you can name your app and select the pricing plan for your Visual Recognition service; select Lite (the free tier) for this app.
Now, you want to create a new project, so click Create.
On the back end, the starter kit allocates the proper services and prepares the workspace. Now you’re ready to navigate to the Visual Recognition dashboard. Because the starter kit already allocated the service, you can click Launch Tool. Launching the tool opens a new tab; make sure to keep both tabs open.
Now you’ll begin creating your custom model. Start by clicking Create Model within the box labeled Classify Images.
The first time you create a custom classifier, you are prompted to create a new project. Custom models are built in Watson Studio, so you must first create the project where your models and training images will live.
Name your project and click Add to add a Cloud Object Storage instance to your project. Object Storage is used to store all of your training images.
The next part of the process is creating your own custom visual recognition model. First, you can rename the model by clicking its name and typing your own. Next, upload your training images: on the right side, click Browse, navigate to the location where you downloaded the training images, then select the images and click Open.
Next, we’ll create a new class for each type of connector. Drag the ZIP files from the side panel onto the Create a class box; a new class is created using the name of each ZIP file. Alternatively, we can click Create a class, define a name for the class, and then drag the correct files to each defined class.
After all your classes are complete, you are ready to train your model. Click Train Model.
The training time can vary depending on the amount of training data. A good approximation is a few seconds per image, so with approximately 200 images you can expect the model to take 5-10 minutes to train.
After the training is complete, navigate back to the tab with the app starter kit page where you began.
In the top-right corner, you’ll see a blue button labeled Download Code. Clicking it generates an iOS app with starter code, and it even grabs your Visual Recognition credentials as well as your custom classifier information.
After the code is generated, save the zip file to your computer and unarchive it.
Note: Because I named the app My First Custom Model, the directory is named accordingly. If you named yours something else, adjust the following steps to reflect the name you chose.
From Launchpad, search for terminal and click the icon to open the application.
Change to the directory with the following command.
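For example, assuming you unzipped the archive to your Downloads folder and the directory matches the app name (adjust the path for your setup):

    cd ~/Downloads/MyFirstCustomModel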
Get the Watson SDK by running the following command.
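The starter code manages the Watson SDK dependency with CocoaPods, so a single command from the project directory installs it:

    pod install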
After CocoaPods builds the dependencies, no further action is needed. The unique credentials for your Watson Visual Recognition service were injected into the application during code generation.
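For reference, the generated code connects to the service along these lines. This is a minimal sketch assuming the Watson Swift SDK’s VisualRecognitionV3 module; initializer names can vary across SDK versions, and the values shown are placeholders for the credentials the starter kit injects:

    import VisualRecognitionV3

    // Placeholder values; the starter kit injects the real ones
    let apiKey = "your-api-key"
    let version = "2018-03-19"  // Visual Recognition API version date
    let classifierID = "your-custom-classifier-id"

    // Create a client for the Visual Recognition service
    let visualRecognition = VisualRecognition(version: version, apiKey: apiKey)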
Now you’re ready to test your application. First, make sure the app builds on your computer. To do this, open the MyFirstCustomModel.xcworkspace file (it’s important not to open the .xcodeproj file).
This opens the Xcode project. You’ll need to change the Bundle Identifier to something unique; try appending your last name to the end (for example, .lastname).
The app should look like this if all goes well.
If there are any issues with the build, check the General tab for errors. You might have to sign in with your Apple ID to sign the app. Depending on the simulators installed on your computer, you might also have to change the Deployment Target.
To deploy the app:
Select the project editor (the name of the project with a blue icon).
Under the Signing section, click Add Account.
Log in with your Apple ID and password.
You should see a new personal team created.
Close the Preferences window.
Now you must create a certificate to sign your app with.
Now you’re ready to run the app!
You should see something like the following image after you’ve run the app and tested it by taking a picture.
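Under the hood, the model you trained is a standard Core ML model, so classification can run entirely on the device. Here’s a minimal sketch of that step using Apple’s Vision and Core ML APIs; the function name and callback are illustrative, and the starter code itself wraps this logic through the Watson SDK:

    import UIKit
    import Vision
    import CoreML

    // Classify a photo on-device with a compiled Core ML model.
    // `model` would be the custom model downloaded from Watson.
    func classifyCable(_ image: UIImage, model: MLModel,
                       completion: @escaping (String?) -> Void) {
        guard let cgImage = image.cgImage,
              let vnModel = try? VNCoreMLModel(for: model) else {
            completion(nil)
            return
        }
        // Vision scales the image to the model's expected input size
        let request = VNCoreMLRequest(model: vnModel) { request, _ in
            let best = (request.results as? [VNClassificationObservation])?.first
            completion(best?.identifier)  // class name with the top confidence
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }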
You now have the tools necessary to build your own application using Watson Visual Recognition and Core ML. To extend this example, you could create multiple custom models, each identifying another aspect of the items that you want to classify. For example, you could build a model to identify “electronic accessories,” then another to identify the “type of cable” or “type of hardware.” Have fun!