Watson on the IBM Cloud lets you integrate artificial intelligence (AI) into your applications and store, train, and manage your data in a secure cloud. With Watson Visual Recognition, you can quickly and accurately tag, classify, and train visual content using machine learning.

Learning objectives

In this tutorial, you’ll learn how to integrate Watson services in your iOS application and how to use Core ML APIs to enable artificial intelligence features directly on a device.

Prerequisites

To use this tutorial, you’ll need:

  • macOS 10.11 El Capitan or later
  • The latest version of Xcode
  • iOS 11 or later (on your iPhone or iPad if you want the application to be on your device)
  • Carthage 0.29 or later
  • An IBM Cloud account

Note: If you don’t want to deploy the model to an iOS device, the only requirement is an IBM Cloud account.

Estimated time

It should take you approximately 30 minutes to complete this tutorial.

Steps

Prepare your data

To create the model used in this tutorial, you must provide training images to the classifier. To save time, we’ve prepared the data for you.

Each class that you create should have at least 10 images. However, for optimal training time and accuracy, it’s best to have approximately 200 images sized at 224 x 224 pixels. It’s also good practice to use images from an environment comparable to the one you expect to classify in. For example, because this is an iOS app, photos from a smartphone camera are probably better suited than professional photos or image search results. Also, try to vary the background, lighting, and any other variables you can think of.
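
If you take your own photos, you may want to resize them before training. The following is a minimal Swift sketch, assuming UIKit; `resizeForTraining` is a hypothetical helper name, not part of the sample project.

    import UIKit

    /// Scale a training photo to the suggested 224 x 224 pixels.
    /// A sketch only: this stretches the image, so crop to a square
    /// first if you need to preserve the aspect ratio.
    func resizeForTraining(_ image: UIImage, side: CGFloat = 224) -> UIImage {
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = 1 // render at exactly side x side pixels, not points
        let renderer = UIGraphicsImageRenderer(
            size: CGSize(width: side, height: side),
            format: format
        )
        return renderer.image { _ in
            image.draw(in: CGRect(x: 0, y: 0, width: side, height: side))
        }
    }

For example, `resizeForTraining(photo)` returns a 224 x 224 copy of `photo` that’s ready to drop into a class folder.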

Get the training data

Download the training image data set and unzip the contents to your desktop.

If you’d like to take your own training photos, feel free to do so. Again, make sure that you have at least 10 photos for each class. Then, prepare your data by classifying each image:

  1. Create an empty folder for each type of cable.

  2. Sort through your images, putting each in the correct folder based on the type of cable.

  3. After all of your images are in the correct folder, archive each folder to create individual .zip files.

    Note: On macOS, right-click a folder and choose Compress to create a .zip file. If you’d rather script this step, see the sketch after this list.
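
As noted above, you can also script the archiving step. The sketch below shells out to the system zip tool from a command-line Swift program on macOS; the folder names are hypothetical placeholders, so substitute your own class folders.

    import Foundation

    // Archive each class folder into its own .zip file by invoking
    // /usr/bin/zip. Run from the directory containing the folders.
    // The folder names below are placeholders for your cable classes.
    let classFolders = ["usb", "hdmi", "thunderbolt"]

    for folder in classFolders {
        let task = Process()
        task.executableURL = URL(fileURLWithPath: "/usr/bin/zip")
        task.arguments = ["-r", "\(folder).zip", folder]
        do {
            try task.run()
            task.waitUntilExit()
        } catch {
            print("Failed to zip \(folder): \(error)")
        }
    }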

Register your accounts

Watson Studio

To register your account on Watson Studio:

  1. Go to IBM Cloud and sign up for a free account or log in.

Note: Select United States as your region.

Apple ID

To register your account with an Apple ID:

  1. Go to https://appleid.apple.com.
  2. Follow the steps to log in with your username and password or create a new account.
    • If you already know your Apple ID username and password, you can skip this step.
    • If you don’t want to deploy the model to an iOS device, you can skip creating an Apple ID; the only requirement is an IBM Cloud account.

Create a new project

After you’ve created an account, navigate to Watson Studio. You should see a welcome screen; if you don’t, click Get started. You should then see something like the following image.

This image shows the Watson Studio home page. From here, you can create or open new projects. To return to this screen from anywhere within Watson Studio, click IBM Watson in the upper left corner.

Now, you want to create a new project, so click New project.

A window opens with a grid of choices. Select Basic and click OK.

Name your project and click Add to add a Cloud Object Storage instance to your project. Object Storage is used to store all of your training images.

Note: If you already have an Object Storage instance it is automatically selected.

Choose a pricing plan and click Create. Then click Confirm.

You should be redirected back to your project setup page. Click Refresh, and your Object Storage instance is found and attached.

Note: If your Object Storage instance doesn’t show up, keep refreshing the page until it does.

Click Create.

Understand your project

After your project is created, you are directed to your new project’s page. If you need to find your way back to this page, you can get to it by clicking Projects > My First Project.

The most important tabs are Assets and Settings.

Settings

The Settings page is where you finish setting up your project. You can also change things like the name and description of your project. However, what’s important here is to attach a new Visual Recognition service to your project.

First, ensure that Visual Recognition is checked under Tools. This enables capabilities such as creating a new Visual Recognition model. After you’ve selected it, make sure that you save the change.

To be able to create a model, you must start a Watson service called Visual Recognition. You do this by clicking Add service under Associated services.

Select Watson.

Select Visual Recognition.

Choose a pricing plan like you did for Object Storage, then click Create and Confirm.

Note: If you already have a Watson Visual Recognition service, you can choose it by clicking the Existing tab.

Assets

The Assets page is where all of your training data will live. (It’s also where your visual recognition models will live.)

You can upload the training files that you created earlier by clicking browse or by dragging them onto the drop zone.

You should see a list of your training data.

Create a model

The next part of the process is creating your own custom visual recognition model. You can create a model by either going to Assets and clicking New visual recognition model under Visual recognition models or by clicking Add to project from anywhere in your project and choosing Visual recognition model.

When you create a new model, you should automatically be directed to the model’s training area. If you need to navigate back to this page, select Projects > My First Project, then select your model under the Visual recognition models section on the Assets tab.

Create a new class for each type of connector.

Then drag the ZIP files from the side panel onto the corresponding class.

After all your classes are complete, you are ready to train your model. Press Train Model.

Training time can vary depending on the amount of training data. A good approximation is a few seconds per image. Because you have around 200 images, you can expect the model to take approximately 5-10 minutes to train.
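
Training status also appears on the model’s page in Watson Studio, but you can poll it programmatically with the Watson Swift SDK once you’ve set the SDK up (see “Get the code” below). This is a rough sketch: it assumes a 1.x-style SDK initializer with an IAM apikey, and the exact initializer and callback shapes vary between SDK versions. The credential placeholders are the values you’ll copy in the configuration steps below.

    import VisualRecognitionV3

    // Poll the classifier's training status (for example, "training"
    // or "ready"). "YOUR_API_KEY" and "YOUR_MODEL_ID" are placeholders
    // for the values copied from Watson Studio later in this tutorial.
    let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: "YOUR_API_KEY")

    visualRecognition.getClassifier(classifierID: "YOUR_MODEL_ID") { response, error in
        if let error = error {
            print("Error: \(error)")
        } else if let classifier = response?.result {
            print("Status: \(classifier.status ?? "unknown")")
        }
    }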

In the meantime, you can start preparing the iOS app.

Get the code

  1. From Launchpad, search for terminal and click the icon to open the application.

  2. Clone the project with the following command.

     git clone https://github.com/bourdakos1/visual-recognition-with-coreml.git
    

  3. Change to the project directory with the following command.

     cd visual-recognition-with-coreml
    

  4. Get the Watson SDK by running the following command.

      carthage bootstrap --platform iOS
    

Configure the application

  1. Open the project directory in Finder. You can do this with the following command.

    open .
    
  2. Double-click the Core ML Vision.xcodeproj file to open the project in Xcode.

  3. In Watson Studio, navigate to your project’s Assets tab.

  4. Open your model and copy your ModelID. Keep it handy to use later.

  5. Open the associated visual recognition service.

  6. Navigate to the Credentials tab.

  7. Copy your “apikey”. Keep it handy to use later.

  8. Open the CameraViewController.swift file and add your ModelID.

  9. Open the Credentials.plist file and add your apikey. (A sketch of how these two values are used at run time follows this list.)
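
For context, here’s roughly how those two values are used at run time. This is a hedged sketch of the Watson Swift SDK’s Core ML calls: `updateLocalModel` downloads the trained Core ML model to the device, and `classifyWithLocalModel` runs it offline. The project’s actual code in CameraViewController.swift may differ, and exact signatures vary between SDK versions.

    import VisualRecognitionV3

    // Placeholders for the values copied from Watson Studio.
    let apiKey = "YOUR_API_KEY"
    let modelID = "YOUR_MODEL_ID"

    let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: apiKey)

    // Download (or refresh) the trained Core ML model on the device.
    visualRecognition.updateLocalModel(classifierID: modelID) { _, error in
        if let error = error { print("Model update failed: \(error)") }
    }

    // Classify a captured photo entirely on-device.
    func classify(_ imageData: Data) {
        visualRecognition.classifyWithLocalModel(
            imageData: imageData,
            classifierIDs: [modelID],
            threshold: 0.0
        ) { response, error in
            if let error = error {
                print("Classification failed: \(error)")
            } else if let result = response?.result {
                print(result) // top classes with confidence scores
            }
        }
    }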

Test the application

Now you’re ready to test your application. First, make sure that the app builds on your computer: the simulator should open and the app should display. Because the simulator does not have access to a camera, and the app relies on the camera to test the classifier, you’ll then deploy the app to an iPhone in the next step.

  1. To run in the simulator, choose a simulated device from the device menu in the Xcode toolbar (next to the Run button), and click Run.

![Selecting a simulator in Xcode](https://wdc.objectstorage.softlayer.net/v1/AUTH_7046a6f4-79b7-4c6c-bdb7-6f68e920f6e5/Code-Tutorials/watson-visual-recognition-with-core-ml-single-model/images/xcode_main_select_sim.png)

Deploy the app to an iOS device

To deploy the app:

  1. Select the project editor (the name of the project with a blue icon).

  2. Under the Signing section, click Add Account.

  3. Log in with your Apple ID and password.

    You should see a new personal team created.

  4. Close the Preferences window.

Now you must create a certificate to sign your app with.

  1. Select General.

  2. Change the bundle identifier to com.ibm.watson.<YOUR_LAST_NAME>.coreML-demo.

  3. Select the personal team that was just created from the Team menu.

  4. Plug in your iOS device.
  5. Select your device from the device menu to the right of the Build and run icon.
  6. Click Build and run.
  7. On your device, you should see the app appear as an installed app.
  8. The first time you run the app, you are prompted to approve the developer.
  9. In your iOS settings, navigate to General > Device Management.
  10. Tap your email address, and then tap Trust.

Now you’re ready to run the app!

You should see something like the following image after you’ve run the app and tested it by taking a picture.

Summary

Now you have the tools necessary to build your own application using Watson Visual Recognition and Core ML. To extend this example, you could create multiple custom models, each identifying a different aspect of the items that you want to classify. For example, you could build a model to identify “electronic accessories,” and then another to identify the “type of cable” or “type of hardware.” Have fun!