This code pattern is part of the Watson Visual Recognition learning path.
| Level | Title | Type |
| --- | --- | --- |
| 100A | Introduction to computer vision | Article |
| 100B | Introduction to Watson Visual Recognition | Article |
| 101 | Create an iOS app that uses built-in and custom classifiers | Code pattern |
| 201 | Build a custom visual recognition model and deploy to an iOS app | Tutorial |
| 202 | Best practices for using custom classifiers in Watson Visual Recognition | Article |
| 301 | Build an iOS game powered by Core ML and Watson Visual Recognition | Code pattern |
In this developer code pattern, you use IBM Watson™ Visual Recognition to showcase various built-in and custom classifiers on IBM Cloud through an iOS app written in Swift. A user opens the app on an iOS device and chooses among the available classifiers (faces, explicit, food, and so on), including custom classifiers. The Watson Visual Recognition service on IBM Cloud classifies the image and returns the results to the app.
The app in this code pattern has support for the following features of Watson Visual Recognition:
- General — Watson Visual Recognition's default classification returns confidence scores for an image across thousands of classes.
- Food — A classifier intended for images of food items.
- Explicit — Returns the confidence that an image is inappropriate for general use.
- Custom classifier(s) — Gives users the ability to create their own classifiers.
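Whichever classifier the user picks, the app ends up handling the same kind of response: a list of classes with confidence scores. The sketch below decodes a classify-style JSON response and picks the top class with `Codable`; the field names (`classifier_id`, `class`, `score`) follow the Visual Recognition v3 response shape, but treat the exact schema as an assumption and check the service's API reference.

```swift
import Foundation

// Minimal model of a Visual Recognition classify response.
// Field names here are assumptions based on the v3 API shape.
struct ClassResult: Codable {
    let className: String
    let score: Double
    enum CodingKeys: String, CodingKey {
        case className = "class"   // "class" is a Swift keyword, so remap it
        case score
    }
}

struct ClassifierResult: Codable {
    let classifierID: String
    let classes: [ClassResult]
    enum CodingKeys: String, CodingKey {
        case classifierID = "classifier_id"
        case classes
    }
}

struct ImageResult: Codable {
    let classifiers: [ClassifierResult]
}

struct ClassifyResponse: Codable {
    let images: [ImageResult]
}

// A sample response such as the food classifier might return.
let json = """
{"images": [{"classifiers": [{"classifier_id": "food",
  "classes": [{"class": "pizza", "score": 0.94},
              {"class": "bread", "score": 0.61}]}]}]}
""".data(using: .utf8)!

let response = try JSONDecoder().decode(ClassifyResponse.self, from: json)

// Pick the highest-confidence class across all classifiers.
let top = response.images
    .flatMap { $0.classifiers }
    .flatMap { $0.classes }
    .max { $0.score < $1.score }

print(top?.className ?? "none")  // prints "pizza"
```

The same decode-then-rank pattern works for the general, food, and explicit classifiers, since they all return class/score pairs.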
To try the pattern yourself:
- Clone the repo.
- Install dependencies with Carthage.
- Set up Watson Visual Recognition credentials.
- Run the app with Xcode.
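The steps above can be sketched as the following commands. `<repo-url>` and `<repo-name>` are placeholders for the pattern's GitHub repository, and the exact spot for your Visual Recognition credentials is documented in the repo's README.

```shell
# Clone the code pattern's repository (placeholder URL)
git clone <repo-url>
cd <repo-name>

# Carthage builds the dependencies declared in the Cartfile,
# including the Watson Swift SDK
carthage bootstrap --platform iOS

# Add your Visual Recognition credentials where the README indicates,
# then open the project in Xcode and run it on a device or simulator
open *.xcodeproj
```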
Ready to check it out? See all the detailed info on GitHub.
This code pattern showcased various built-in and custom classifiers on IBM Cloud using an iOS app written in Swift. The code pattern is part of the Watson Visual Recognition learning path. To continue with the learning path, look at the next step, Build a custom visual recognition model and deploy to an iOS app.