Archived | Build an iOS game powered by Core ML and Watson Visual Recognition
Use Watson Visual Recognition and Core ML to create a Kitura-based iOS game that challenges users to find objects from a predetermined list
This code pattern is part of the Watson Visual Recognition learning path.
| Level | Title | Type |
| --- | --- | --- |
| 100A | Introduction to computer vision | Article |
| 100B | Introduction to Watson Visual Recognition | Article |
| 101 | Create an iOS app that uses built-in and custom classifiers | Code pattern |
| 201 | Build a custom visual recognition model and deploy to an iOS app | Tutorial |
| 202 | Best practices for using custom classifiers in Watson Visual Recognition | Article |
| 301 | Build an iOS game powered by Core ML and Watson Visual Recognition | Code pattern |
Whether you are identifying pieces of art in a museum or creating a game, there are many use cases for computer vision on a mobile device. With Core ML, detecting objects has never been faster, and with Watson Visual Recognition and Watson Studio, creating a model couldn’t be easier. This code pattern shows you how to create your own iOS game to challenge players to find a variety of predetermined objects as fast as they can.
In this code pattern, you will create a timed iOS game that challenges users to find items from a customizable list of objects, which is used with Watson Visual Recognition to train a Core ML model. The Core ML model is deployed to the iOS device when the user initializes the app. The beauty of Core ML is that recognition runs on the device rather than over an HTTP call, making it much faster. The code pattern also uses Kitura to power a leaderboard, Cloudant to persist user records and best times, and push notifications to let users know when they have been removed from the top of the leaderboard.
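The leaderboard piece can be sketched with Kitura's Codable routing. The `Score` type and in-memory array below are illustrative stand-ins (the real pattern persists records in Cloudant), and the route names are assumptions:

```swift
import Kitura
import KituraContracts

// Illustrative score record; the real app persists these in Cloudant.
struct Score: Codable {
    let username: String
    let time: Double  // seconds taken to find all objects
}

var scores: [Score] = []  // in-memory stand-in for the Cloudant store

let router = Router()

// GET /leaderboard — return the best (lowest) times first
router.get("/leaderboard") { (respondWith: ([Score], RequestError?) -> Void) in
    respondWith(scores.sorted { $0.time < $1.time }, nil)
}

// POST /leaderboard — record a finished game
router.post("/leaderboard") { (score: Score, respondWith: (Score?, RequestError?) -> Void) in
    scores.append(score)
    respondWith(score, nil)
}

Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()
```

Codable routing lets the server accept and return `Score` values directly, with Kitura handling the JSON encoding and decoding.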
The application has been published to the App Store under the name “WatsonML,” and we’d like you to try it out. It comes with a built-in model for identifying six objects: shirts, jeans, apples, plants, notebooks, and a plush bee. Also included are instructions on how to modify the application to fit your own needs. Feel free to fork the code and modify it to create your own conference swag game, scavenger hunt, guided tour, or team building or training event.
When you have completed this code pattern, you should understand how to:
- Create a custom visual recognition model in Watson Studio
- Develop a Swift-based iOS application
- Deploy a Kitura-based leaderboard
- Detect objects with Core ML and Lumina
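Lumina streams camera frames through a Core ML model for you; the same classification step written directly against Apple's Vision framework might look like the sketch below. The `WatsonML` class name is a placeholder for the model class Xcode generates from the `.mlmodel` file you download from Watson:

```swift
import CoreGraphics
import CoreML
import Vision

// Classify one captured frame with the downloaded Core ML model.
// `WatsonML` is a placeholder for the class Xcode generates from
// the .mlmodel file produced by Watson Visual Recognition.
func classify(_ image: CGImage) throws {
    let coreMLModel = try WatsonML(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // e.g. check whether top.identifier matches the item being hunted
        print("Saw \(top.identifier) (confidence \(top.confidence))")
    }

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```

Because the model lives on the device, this classification runs per frame with no network round trip.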
1. Generate a Core ML model using Watson Visual Recognition and Watson Studio.
2. The user runs the iOS application for the first time.
3. The iOS application calls out to the Avatar microservice to generate a random user name.
4. The iOS application makes a call to Cloudant to create a user record.
5. The iOS application notifies the Kitura service that the game has started.
6. The user points the phone’s camera as they search for items, using Core ML to identify them.
7. The user receives a push notification if they are bumped from the leaderboard.
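Creating the user record in Cloudant (the fourth step above) boils down to one authenticated POST against the database's CouchDB-style REST API. The account host, database name, credentials, and field names below are placeholders:

```swift
import Foundation

// Hypothetical Cloudant endpoint and credentials — substitute your own.
let dbURL = URL(string: "https://ACCOUNT.cloudantnosqldb.appdomain.cloud/users")!

var request = URLRequest(url: dbURL)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue("Basic BASE64_CREDENTIALS", forHTTPHeaderField: "Authorization")

// Minimal user record: the generated avatar name and a starting best time.
let record: [String: Any] = ["username": "generated-avatar-name", "bestTime": 0]
request.httpBody = try? JSONSerialization.data(withJSONObject: record)

URLSession.shared.dataTask(with: request) { data, _, _ in
    // On success, Cloudant answers with the new document's _id and _rev.
    if let data = data, let body = String(data: data, encoding: .utf8) {
        print(body)
    }
}.resume()
```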
This code pattern showed how to create your own iOS game to challenge players to find a variety of predetermined objects as fast as they can. The code pattern is the final part of the Watson Visual Recognition learning path. You should now have a greater understanding of the Watson Visual Recognition service. If you’d like more information about the service, see the Visual Recognition product page.