IBM Developer Blog


Learn how an IBM summer intern created a visual recognition app powered by AI

My name is Bryan Escatel, and I’m a senior at Menlo Atherton High School and an intern on the Cognitive Applications team at IBM. Ever since I started at IBM, I’ve wanted to learn how to develop my own app. In the process of building this one, I had many ups and downs, and I struggled immensely at first. However, thanks to Upkar Lidder’s help, I created a visual recognition app that uses the IBM Watson Visual Recognition service to analyze images, objects, and other content.


Install the following:

  • Xcode
  • Carthage

The project uses Carthage to manage and configure the IBM Cloud Mobile services SDK and its other dependencies.

Step 1. Open Terminal on your macOS computer.

Search for “Terminal” using the search bar (Spotlight) on your computer.

Search Terminal image

Step 2. Clone the repo

Clone the repository and cd into it by running the following command (substitute the repository’s clone URL for <REPO_URL>):

git clone <REPO_URL> && cd watson-visual-recognition-ios

Step 3. Install dependencies with Carthage.

Run the following command to build the dependencies and frameworks:

carthage update --platform iOS

Note: Carthage can be installed with Homebrew: brew install carthage
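For reference, the Cartfile that Carthage reads would look something like the fragment below. The exact dependency line is an assumption based on the Watson Swift SDK’s repository name; check the Cartfile in your cloned copy of the project.

```
github "watson-developer-cloud/swift-sdk"
```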

Step 4. Create an IBM Cloud service

Create a Watson Visual Recognition service instance in IBM Cloud. Copy the API key from the service credentials and add it to Credentials.plist:

<key>apiKey</key>
<string>YOUR_API_KEY</string>
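For context, a complete Credentials.plist, assuming it holds only the API key, would look like this (the surrounding boilerplate is the standard Apple property-list format):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>apiKey</key>
    <string>YOUR_API_KEY</string>
</dict>
</plist>
```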

Step 5. Run the app with Xcode.

Launch Xcode from the terminal: open "Watson Vision.xcodeproj"

Step 6. Test the app in the simulator.

To run the app in the simulator, select an iOS device from the scheme dropdown and click the ► (Run) button.

Test app image

You can now drag photos into the simulator’s photo gallery and select them from within the app.

Step 7. Run the app on an iOS device.

Since the simulator does not have access to a camera, and the app relies on the camera to test the classifier, you should run it on a real device. To do this, you’ll need to sign the application and authenticate with your Apple ID:

  • Switch to the General tab in the project editor (the blue icon at the top left).

  • Under the Signing section, click Add Account.

Sign in image

After signing in with your Apple ID and password, you’ll need to create a certificate to sign your app (in the General tab). Then follow these steps:

  • In the General tab of the project editor, change the bundle identifier to: com.<YOUR_LAST_NAME>.Core-ML-Vision.

Bundle identifier image

  1. Select the personal team that was just created from the team dropdown.
  2. Plug in your iOS device.
  3. Select your device from the device menu to the right of the build and run icon.
  4. Click build and run.
  5. On your device, you should see the app appear among your installed apps.
  6. The first time you run the app, it will prompt you to approve the developer.
  7. In your iOS Settings, navigate to General > Device Management.
  8. Tap your email address, then tap Trust.

You’re now ready to use the app.

Term details:

  • General: Watson Visual Recognition’s default classification. It returns confidence scores for an image across thousands of classes.
  • Explicit: Returns the percent confidence that an image is inappropriate for general use.
  • Food: A classifier intended for images of food items.
  • Custom classifier(s): Lets you create and train your own classifier.
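As a rough sketch of how these classifiers are invoked with the Watson Swift SDK: the version date, module name, and exact property names below are assumptions based on the SDK’s general shape and may differ in your release, so treat this as an outline rather than copy-paste code.

```swift
import UIKit
import VisualRecognition  // module name may vary by SDK version

func classify(_ image: UIImage, apiKey: String) {
    // The version date is an assumption; use the one your SDK release documents.
    let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: apiKey)

    // "default" selects the general classifier; pass ["food"], ["explicit"],
    // or your custom classifier's ID to use the other classifiers listed above.
    visualRecognition.classify(image: image, classifierIDs: ["default"]) { response, error in
        guard let classes = response?.result?.images.first?.classifiers.first?.classes else {
            print(error?.localizedDescription ?? "No classification result")
            return
        }
        // Each result pairs a class name with a confidence score between 0 and 1.
        for c in classes {
            print("\(c.className): \(c.score)")
        }
    }
}
```

The confidence scores printed here are what the app surfaces in the examples below (for instance, a 0.67 score displayed as 67 percent).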


Example 1: The object scanned was a can. As you can see, IBM Watson detected a can with 67 percent confidence.

Can image

Example 2: The object scanned was a shoe.

Shoe image

As you can see, IBM Watson detected a shoe with 81 percent confidence.

Result image

Updating the app is fairly simple: all you need to do is teach and train your model. The app and model update together when you press the train button, so it’s ready to go.