
Create an augmented reality application with facial detection

Summary

Augmented reality provides an enhanced version of reality by superimposing virtual objects over a user’s view of the real world. ARKit blends digital objects and information with the environment around you, taking apps far beyond the screen and freeing them to interact with the real world in entirely new ways. This code pattern combines ARKit with Watson Visual Recognition and a Cloudant database to give you a complete augmented reality experience.

Description

The easiest way to find and connect with people around the world is through social media apps like Facebook, Twitter, and LinkedIn. However, these apps offer only text-based search. With Apple's ARKit framework for iOS, you can now search using facial recognition instead. By combining face detection from the iOS Vision framework, image classification with IBM Watson Visual Recognition, and person identification based on the classified image and stored data, you can build an app that finds faces and identifies the people behind them. One use case is an augmented reality-based résumé built on visual recognition.
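
As an illustration of the face-detection step, here is a minimal sketch using the Vision framework's VNDetectFaceRectanglesRequest. The function name is our own, and the camera pipeline that supplies the image is omitted:

    import UIKit
    import Vision

    // A minimal sketch of face detection with the iOS Vision framework.
    // The camera pipeline that supplies the CGImage is omitted.
    func detectFaces(in image: CGImage,
                     completion: @escaping ([VNFaceObservation]) -> Void) {
        let request = VNDetectFaceRectanglesRequest { request, error in
            guard error == nil,
                  let faces = request.results as? [VNFaceObservation] else {
                completion([])
                return
            }
            completion(faces)
        }
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        // perform(_:) runs synchronously; dispatch off the main thread in a real app.
        try? handler.perform([request])
    }

Each VNFaceObservation's boundingBox gives a normalized face rectangle that the app can crop and pass on for classification.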

This code pattern explains how to create these augmented reality-based résumés with Visual Recognition. The iOS app detects a face and presents an AR view that displays the résumé of the person in the camera view. The app uses Watson Visual Recognition and Core ML to classify the image and then uses that classification to retrieve details about the person from data stored in an IBM Cloudant NoSQL database. The images are classified offline using a deep neural network trained by Visual Recognition.
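
The Watson Swift SDK handles downloading and running the trained model; purely as an illustration of the underlying offline Core ML step, here is a hedged sketch using Vision's VNCoreMLRequest. The FaceClassifier model class is hypothetical and stands in for the classifier that Watson Visual Recognition trains:

    import CoreML
    import Vision

    // A hedged sketch of offline classification with a Core ML model.
    // FaceClassifier is a hypothetical auto-generated model class standing
    // in for the classifier that Watson Visual Recognition trains.
    func classifyFace(_ face: CGImage,
                      completion: @escaping (String?) -> Void) {
        guard let mlModel = try? FaceClassifier(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: mlModel) else {
            completion(nil)
            return
        }
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            let best = (request.results as? [VNClassificationObservation])?.first
            completion(best?.identifier)  // the classification used as the lookup key
        }
        try? VNImageRequestHandler(cgImage: face, options: [:]).perform([request])
    }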

After completing this code pattern, you should know how to:

  • Configure ARKit (a configuration sketch follows this list)
  • Use the iOS Vision module
  • Create a Swift iOS application that uses the Watson Swift SDK
  • Classify images with Watson Visual Recognition and Core ML
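
For the first item, configuring ARKit mostly amounts to running a session. A minimal sketch, assuming an ARSCNView wired up in a storyboard; the class name is hypothetical:

    import ARKit
    import SceneKit
    import UIKit

    // A minimal sketch of the ARKit setup: run a world-tracking session
    // while the view is visible and pause it when it is not.
    class ARResumeViewController: UIViewController {
        @IBOutlet var sceneView: ARSCNView!

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            sceneView.session.run(ARWorldTrackingConfiguration())
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }
    }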

Flow

  1. Open the app on the mobile device.
  2. The iOS Vision module detects a face.
  3. Watson Visual Recognition receives an image of the face to be classified.
  4. The app retrieves additional information about the person from a Cloudant database, using the classification from Watson Visual Recognition as the lookup key (see the sketch after this list).
  5. The app overlays the information from the database on the person’s face in the mobile device’s camera view.
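
Step 4 boils down to a keyed document read. Here is a hedged sketch using Cloudant's CouchDB-style REST API over URLSession; the account, database name, and credentials are placeholders, and the real app reads them from its configuration:

    import Foundation

    // A hedged sketch of step 4: fetch the person document whose ID matches
    // the classification result. ACCOUNT, resume-db, APIKEY, and PASSWORD
    // are placeholders.
    func fetchPerson(withID personID: String,
                     completion: @escaping ([String: Any]?) -> Void) {
        guard let url = URL(string: "https://ACCOUNT.cloudant.com/resume-db/\(personID)") else {
            completion(nil)
            return
        }
        var request = URLRequest(url: url)
        let credentials = Data("APIKEY:PASSWORD".utf8).base64EncodedString()
        request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")

        URLSession.shared.dataTask(with: request) { data, _, _ in
            guard let data = data,
                  let doc = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any] else {
                completion(nil)
                return
            }
            completion(doc)
        }.resume()
    }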

Instructions

Find the detailed steps for this pattern in the README. Those steps will show you how to:

  • Clone the ar-resume-with-visual-recognition GitHub repo.
  • Log in to IBM Cloud and create a Visual Recognition service.
  • Create an IBM Cloudant NoSQL database (a programmatic sketch follows this list).
  • Install the dependencies.
  • Run the app.
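
The README creates the database from the Cloudant dashboard; if you prefer to script that step, a database is created with a single PUT against Cloudant's CouchDB-style REST API. A hedged sketch with placeholder credentials:

    import Foundation

    // A hedged sketch of creating a Cloudant database programmatically.
    // PUT /{db} creates the database; ACCOUNT, APIKEY, and PASSWORD are
    // placeholders for the service credentials.
    func createDatabase(named name: String,
                        completion: @escaping (Bool) -> Void) {
        guard let url = URL(string: "https://ACCOUNT.cloudant.com/\(name)") else {
            completion(false)
            return
        }
        var request = URLRequest(url: url)
        request.httpMethod = "PUT"
        let credentials = Data("APIKEY:PASSWORD".utf8).base64EncodedString()
        request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")

        URLSession.shared.dataTask(with: request) { _, response, _ in
            let status = (response as? HTTPURLResponse)?.statusCode ?? 0
            completion(status == 201 || status == 202)  // 201 Created / 202 Accepted
        }.resume()
    }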