By Sam Couch | Published October 8, 2018 - Updated October 8, 2018
Tags: API Management, Artificial Intelligence, Data Science, Deep Learning, Machine Learning
Watson on the IBM Cloud lets you integrate artificial intelligence (AI) into your applications and store, train, and manage your data in a secure cloud. With Watson Visual Recognition, you can quickly and accurately tag, classify, and train visual content using machine learning.
In this tutorial, you’ll learn how to integrate Watson services in your iOS application and how to use Core ML APIs to enable artificial intelligence features directly on a device.
To use this tutorial, you’ll need:
An IBM Cloud account
A Mac with Xcode installed
Carthage, to fetch the Watson Swift SDK
An iOS device, if you want to test the camera-based classifier
Note: If you don’t want to deploy the model to an iOS device, the only requirement is an IBM Cloud account.
It should take you approximately 30 minutes to complete this tutorial.
To create the model used in this tutorial, training images must be provided to the classifier. To save time, we prepared the data for you.
Each class that you create should have at least 10 images. However, for optimal training time and accuracy, it’s best to have approximately 200 images sized at 224 x 224 pixels. It’s also good practice to use images from an environment comparable to the one you expect to classify in. For example, because this is an iOS app, photos from your smartphone camera are probably ideal versus professional photos or image search results. Also, try to vary the background, lighting, and any other variables that you can think of!
Download the training image data set and unzip the contents to your desktop.
If you’d like to take your own training photos feel free to do so. Again, make sure to have at least 10 photos for each classification. Then, prepare your data by classifying each image.
Create an empty folder for each type of cable
Sort through your images, putting each in the correct folder based on the type of cable.
After all of your images are in the correct folder, archive each folder to create individual .zip files.
Note: On macOS, right-click a folder and choose Compress to create a .zip file.
To register your account on Watson Studio:
Note: Select United States as your region.
To register your account with an Apple ID:
After you’ve created an account, navigate to Watson Studio. You should see a welcome screen; if not, click Get started and you should see something like the following image.
This image shows the Watson Studio home page. From here, you can create or open new projects. To return to this screen from anywhere within Watson Studio, click IBM Watson in the upper left corner.
Now, you want to create a new project, so click New project.
A window opens with a grid of choices. Select Basic and click OK.
Name your project and click Add to add a Cloud Object Storage instance to your project. Object Storage is used to store all of your training images.
Note: If you already have an Object Storage instance it is automatically selected.
Choose a pricing plan and click Create. Then click Confirm.
You should be redirected back to your project setup page. Click Refresh, and your Object Storage instance should be found and attached.
Note: If your Object Storage instance doesn’t show up, keep refreshing the page until it does.
After your project is created, you are directed to your new project’s page. If you need to find your way back to this page, you can get to it by clicking Projects > My First Project.
The most important tabs are Assets and Settings.
The settings page is where you finish setting up your project. You can also change things like the name and description of your project. However, what’s important here is to attach a new Visual Recognition service to your project.
First, ensure that Visual Recognition is checked under Tools. This gives you different abilities such as creating a new Visual Recognition model. After you’ve selected this, make sure you save the change.
To be able to create a model, you must start a Watson service called Visual Recognition. You do this by clicking Add service under Associated services.
Select Visual Recognition.
Choose a pricing plan like you did for Object Storage, then click Create and Confirm.
Note: If you already have a Watson Visual Recognition service, you can choose it by clicking the Existing tab.
The Assets page is where all of your training data will live (It’s also where your visual recognition models will live).
You can upload the training files that you created earlier by clicking browse or by dragging them onto the drop zone.
You should see a list of your training data.
The next part of the process is creating your own custom visual recognition model. You can create a model by either going to Assets and clicking New visual recognition model under Visual recognition models or by clicking Add to project from anywhere in your project and choosing Visual recognition model.
When you create a new model, you should automatically be directed to the model’s training area. If you need to navigate back to this page, select Projects > My First Project, then select your model under the Visual recognition models section of the Assets tab.
Create a new class for each type of connector.
Then drag the ZIP files from the side panel onto the corresponding class.
After all your classes are complete, you are ready to train your model. Press Train Model.
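If you prefer the command line, the same training request can be made directly against the Visual Recognition v3 REST API, where each `<class>_positive_examples` form field carries one of your .zip files. The apikey placeholder and class names below are hypothetical, and the service URL may differ by region; this sketch only calls the service once real credentials are set.

```shell
# Hypothetical placeholder – substitute your service apikey
APIKEY="${APIKEY:-your-apikey}"

URL="https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers?version=2018-03-19"

if [ "$APIKEY" != "your-apikey" ]; then
  # One <class>_positive_examples field per training .zip
  curl -s -u "apikey:${APIKEY}" \
    -F "name=connectors" \
    -F "usb_positive_examples=@usb.zip" \
    -F "hdmi_positive_examples=@hdmi.zip" \
    "$URL"
else
  echo "Set APIKEY, then POST your .zip files to $URL"
fi
```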
Training time can vary depending on the amount of training data. A good approximation is a few seconds per image. Because you have around 200 images, you can expect the model to take approximately 5-10 minutes to train.
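While you wait, you can also poll the training status from the terminal with the Visual Recognition v3 REST API. The apikey and ModelID placeholders below are hypothetical; substitute the values from your own service. The JSON response typically includes a status field that moves from "training" to "ready".

```shell
# Hypothetical placeholders – substitute your own apikey and ModelID
APIKEY="${APIKEY:-your-apikey}"
MODEL_ID="${MODEL_ID:-your-model-id}"

URL="https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/${MODEL_ID}?version=2018-03-19"

# Only call the service once real credentials are set
if [ "$APIKEY" != "your-apikey" ]; then
  curl -s -u "apikey:${APIKEY}" "$URL"
else
  echo "Set APIKEY and MODEL_ID, then GET $URL"
fi
```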
In the meantime, you can start preparing the iOS app.
From Launchpad, search for terminal and click the icon to open the application.
Clone the project with the following command.
git clone https://github.com/bourdakos1/visual-recognition-with-coreml.git
Change to the project directory with the following command.
cd visual-recognition-with-coreml
Get the Watson SDK by running the following command.
carthage bootstrap --platform iOS
Open the project directory in Finder. You can do this with the following command.
open .
Double-click the Core ML Vision.xcodeproj file to open the project in Xcode.
In Watson Studio, navigate to your project’s Assets tab.
Open your model and copy your ModelID. Keep it handy to use later.
Open the associated visual recognition service.
Navigate to the Credentials tab.
Copy your “apikey”. Keep it handy to use later.
Open the CameraViewController.swift file and add your ModelID.
Open the Credentials.plist file and add your apikey.
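For reference, a filled-in Credentials.plist might look roughly like the following. The apikey key name matches the credential you copied above; the value shown is a hypothetical placeholder to replace with your own.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>apikey</key>
    <string>YOUR_APIKEY_HERE</string>
</dict>
</plist>
```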
Now you’re ready to test your application. First, make sure the app builds on your computer: the simulator should open and display the app. Because the simulator does not have access to a camera, and the app relies on the camera to test the classifier, you’ll deploy the app to an iPhone in the next step.
To run in the simulator, choose a phone option from the drop-down menu and click Run.
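The simulator build can also be kicked off from the terminal with xcodebuild. The scheme and simulator names below are assumptions about this project's Xcode setup; adjust them to match what Xcode shows you, and note that this only works on a Mac with Xcode installed.

```shell
# Hypothetical scheme and simulator names – adjust to match your Xcode setup
PROJECT="Core ML Vision.xcodeproj"
SCHEME="Core ML Vision"
DEST="platform=iOS Simulator,name=iPhone 8"

if command -v xcodebuild >/dev/null 2>&1; then
  xcodebuild -project "$PROJECT" -scheme "$SCHEME" -destination "$DEST" build
else
  echo "xcodebuild not found; open $PROJECT in Xcode and press Run"
fi
```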
To deploy the app:
Select the project editor (the name of the project with a blue icon).
Under the Signing section, click Add Account.
Log in with your Apple ID and password.
You should see a new personal team created.
Close the Preferences window.
Now you must create a certificate to sign your app with.
Change the bundle identifier to com.ibm.watson.<YOUR_LAST_NAME>.coreML-demo.
Select the personal team that was just created from the Team menu.
Now you’re ready to run the app!
You should see something like the following image after you’ve run the app and tested it by taking a picture.
Now you have the tools necessary to build your own application using Watson Visual Recognition and Core ML. To extend this example, you could also create multiple custom models, each to identify another aspect of the items that you want to classify. For example, you could build a model to identify “electronic accessories,” then another to identify the “type of cable” or “type of hardware.” Have fun!