Build your own visual recognition app powered by AI
Learn how an IBM summer intern created a visual recognition app powered by AI
My name is Bryan Escatel, and I’m a senior at Menlo Atherton High School and an intern working with the Cognitive Applications team at IBM. Ever since I started at IBM, I’ve wanted to learn how to make and develop my own app. In the process of developing this app, I’ve had many ups and downs, and I struggled immensely at first. However, thanks to Upkar Lidder’s help, I created a visual recognition app that uses the IBM Watson Visual Recognition service to analyze images, objects, and other content.
Install the following:
- iOS 8.0+
- Xcode 10 (required to develop for iOS; available on the Mac App Store)
- Swift 3.0+
- IBM Cloud account
This project uses Carthage to manage and configure its dependencies.
Step 1. Open Terminal on your macOS computer.
Search for “Terminal” in the search bar on your computer.
Step 2. Clone the Repo
Clone the repo and cd into it by running the following command:
git clone https://github.com/IBM/watson-visual-recognition-ios.git && cd watson-visual-recognition-ios
Step 3. Install dependencies with Carthage.
Run the following command to build the dependencies and frameworks:
carthage update --platform iOS
Note: Carthage can be installed with Homebrew: brew install carthage
Step 4. Create an IBM Cloud service
Create a Watson Visual Recognition service. Copy the API Key from the service credentials; you’ll add it to the app.
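Before adding the key to the app, you can sanity-check it against the service’s REST API. This is a minimal sketch based on the Visual Recognition v3 classify endpoint; the URL shown is the default regional endpoint (yours may differ, so check your service credentials), and my-photo.jpg is a stand-in image name. The snippet prints the command so you can paste in your real key and run it:

```shell
# Sanity-check the API key by building a classify request against the
# Visual Recognition v3 REST endpoint. my-photo.jpg is a stand-in file.
API_KEY="${WATSON_API_KEY:-paste-your-api-key}"
ENDPOINT="https://gateway.watsonplatform.net/visual-recognition/api/v3/classify"

# -u apikey:... authenticates with the key; images_file is the photo to classify.
CMD="curl -u apikey:${API_KEY} -F images_file=@my-photo.jpg ${ENDPOINT}?version=2018-03-19"
echo "$CMD"
```

A successful call returns JSON with the detected classes and their confidence scores.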
Step 5. Run the app with Xcode.
Launch Xcode from the terminal: open "Watson Vision.xcodeproj"
Step 6. Test app in simulator.
To run the simulator, select an iOS device from the dropdown and click the ► button.
Now you’re able to click and drag photos into the photo gallery and select those photos from the app.
Step 7. Run the app on an iOS device.
Since the simulator does not have access to a camera, and the app relies on the camera to test the classifier, you should run it on a real device. To do this, you’ll need to sign the application and authenticate with your Apple ID:
Switch to the General tab in the project editor (the blue icon at the top left).
Under the Signing section, click Add Account.
After signing in with your Apple ID and password, you’ll need to create a certificate to sign your app (in the General tab) and follow the next few steps:
- In the General tab of the project editor, change the bundle identifier to a unique value.
- Select the personal team that was just created from the team dropdown.
- Plug in your iOS device.
- Select your device from the device menu to the right of the build and run icon.
- Click build and run.
- On your device, you should see the app appear as an installed app.
- When you try to run the app the first time, it will prompt you to approve the developer.
- In your iOS settings navigate to General > Device Management.
- Tap your email address, then tap Trust.
You’re now ready to use the app. It offers four classification modes:
- General: Watson Visual Recognition’s default classifier. It returns confidence scores for an image across thousands of classes.
- Explicit: Returns percent confidence of whether an image is inappropriate for general use.
- Food: A classifier intended for images of food items.
- Custom classifier(s): Gives the user the ability to create their own classifier.
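The first three modes correspond to the service’s built-in classifiers, which the REST API selects through the classifier_ids parameter. A sketch along the same lines as before, with a stand-in image name and key, printed rather than executed so you can fill in your own values:

```shell
# Score an image with the built-in "food" classifier instead of the
# general default. classifier_ids accepts a comma-separated list.
API_KEY="${WATSON_API_KEY:-paste-your-api-key}"
ENDPOINT="https://gateway.watsonplatform.net/visual-recognition/api/v3/classify"

CMD="curl -u apikey:${API_KEY} -F images_file=@lunch.jpg -F classifier_ids=food ${ENDPOINT}?version=2018-03-19"
echo "$CMD"
```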
Example 1: The object scanned was a can. As you can see, IBM Watson detected a 67 percent probability of a can.
Example 2: The object scanned was a shoe. As you can see, IBM Watson detected an 81 percent probability of a shoe.
Updating your app is fairly simple: all you need to do is teach and train your model. When you press the Train button, the app and model update together, ready to go.
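Behind the Train button, creating a custom classifier boils down to uploading zipped example images to the service. Below is a rough sketch of the equivalent v3 REST call; the classifier name and zip filenames are made up for illustration, and the command is printed so you can substitute your own files and key:

```shell
# Train a custom classifier named "dogs". Each *_positive_examples
# part defines one class; negative_examples holds counter-examples
# that match none of the classes.
API_KEY="${WATSON_API_KEY:-paste-your-api-key}"
ENDPOINT="https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers"

CMD="curl -u apikey:${API_KEY} -F name=dogs -F beagle_positive_examples=@beagle.zip -F negative_examples=@non_dogs.zip ${ENDPOINT}?version=2018-03-19"
echo "$CMD"
```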