Build an iOS game powered by Core ML and Watson Visual Recognition  

Use Watson Visual Recognition and Core ML to create an iOS game, backed by a Kitura server, that challenges players to find a predetermined list of objects

By David Okun, Sanjeev Ghimire, Anton McConville

Description

Whether you are identifying pieces of art in a museum or creating a game, there are many use cases for computer vision on a mobile device. With Core ML, detecting objects has never been faster, and with Watson Visual Recognition and Watson Studio, creating a model couldn’t be easier. This code pattern shows you how to create your own iOS game to challenge players to find a variety of predetermined objects as fast as they can.

Overview

In this code pattern, you will create a timed iOS game that challenges users to find items from a customizable list of objects. Watson Visual Recognition trains a Core ML model on that list, and the model is deployed to the iOS device the first time the user launches the app. The beauty of Core ML is that recognition runs on the device rather than over an HTTP call, so it is much faster and works without a network connection. The code pattern also uses Kitura to power a leaderboard, Cloudant to persist user records and best times, and push notifications to let users know when they have been bumped from the top of the leaderboard.
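
Because recognition happens locally, the core of the game loop is a single Vision framework call against the compiled model. The sketch below is a minimal illustration, assuming a Watson-trained model compiled into the app as a hypothetical generated class named FoundObjects:

```swift
import CoreML
import Vision

/// A minimal on-device classification sketch. `FoundObjects` is a
/// hypothetical stand-in for the class Xcode generates from the
/// .mlmodel file that Watson Visual Recognition produces.
func classify(_ image: CGImage) {
    do {
        // Wrap the compiled Core ML model for use with the Vision framework.
        let model = try VNCoreMLModel(for: FoundObjects(configuration: MLModelConfiguration()).model)
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let best = results.first else { return }
            // No network round trip: the label is available immediately.
            print("Saw \(best.identifier) (confidence \(best.confidence))")
        }
        try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    } catch {
        print("Classification failed: \(error)")
    }
}
```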

The application has been published to the App Store under the name “WatsonML,” and we’d like you to try it out. It comes with a built-in model for identifying six objects: shirts, jeans, apples, plants, notebooks, and a plush bee. Also included are instructions on how to modify the application to fit your own needs. Feel free to fork the code and modify it to create your own conference swag game, scavenger hunt, guided tour, or team-building or training event.

When you have completed this code pattern, you should understand how to:

  • Create a custom visual recognition model in Watson Studio
  • Develop a Swift-based iOS application
  • Deploy a Kitura-based leaderboard (a minimal server sketch follows this list)
  • Detect objects with Core ML and Lumina
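
To give a feel for the leaderboard piece, here is a minimal Kitura sketch using Codable routing. The Score type, routes, port, and in-memory storage are assumptions for illustration; the code pattern’s actual service persists records in Cloudant:

```swift
import Kitura
import KituraContracts

// Illustrative score record; the real service persists documents in Cloudant.
struct Score: Codable {
    let username: String
    let seconds: Double
}

var scores = [Score]()

let router = Router()

// POST /scores records a finished game.
router.post("/scores") { (score: Score, completion: (Score?, RequestError?) -> Void) in
    scores.append(score)
    completion(score, nil)
}

// GET /scores returns the leaderboard, fastest times first.
router.get("/scores") { (completion: ([Score]?, RequestError?) -> Void) in
    completion(scores.sorted { $0.seconds < $1.seconds }, nil)
}

Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()
```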

Flow

  1. Generate a Core ML model using Watson Visual Recognition and Watson Studio.
  2. User runs the iOS application for the first time.
  3. The iOS application calls out to the Avatar microservice to generate a random user name.
  4. The iOS application makes a call to Cloudant to create a user record.
  5. The iOS application notifies the Kitura service that the game has started (a hypothetical client call is sketched after this list).
  6. The user points the phone’s camera at objects as they search for items, and Core ML identifies them on the device.
  7. The user receives a push notification if they are bumped from the leaderboard.
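
To make steps 4 and 5 concrete, a client call to the Kitura service can be as simple as a JSON POST. The endpoint path, port, and payload below are assumptions for illustration, not the code pattern’s actual contract:

```swift
import Foundation

// Hypothetical payload; the code pattern's actual JSON shape may differ.
struct GameStart: Codable {
    let username: String
}

func notifyGameStarted(for username: String) {
    // Assumed local Kitura host and endpoint path.
    guard let url = URL(string: "http://localhost:8080/game/start") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(GameStart(username: username))

    URLSession.shared.dataTask(with: request) { _, response, error in
        if let error = error {
            print("Could not start game: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Server acknowledged game start with status \(http.statusCode)")
        }
    }.resume()
}
```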

Related Links

Lumina

A camera framework designed in Swift for easily integrating Core ML models – as well as image streaming, QR/barcode detection, and many other features.