Create a mobile app with visual recognition capabilities


Now that smartphones are everywhere, most of us carry a high-end camera in a back pocket or purse. You can take your app even further by interpreting images uploaded by your users. This code pattern gives you the foundation you need to start creating an app that uses Watson Visual Recognition immediately. Or, you can copy and paste the code into an existing application. The code pattern lets you select a photo and then presents tags that relate to that photo, along with a confidence score for each tag.


This code pattern makes it easy to follow a cloud-native programming model that uses IBM's best practices for app development. If you click "Build on IBM Cloud" at the top of the code pattern, you can dynamically provision cloud services, which are then automatically initialized in your generated application. For example, you can add a managed MongoDB service or an additional Watson service.

With this code pattern, you will learn how to:

  • Customize Watson Visual Recognition for your unique use case
  • View the tags related to a picture and the confidence score for each tag
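To give a feel for what "tags and confidence scores" look like in practice, here is a minimal sketch of decoding a classification result in Swift. The JSON shape follows the Visual Recognition v3 `classify` response format; treat the exact field names as assumptions and verify them against your service's API reference.

```swift
import Foundation

// Sample JSON in the shape of a Visual Recognition v3 classify response
// (field names based on the v3 API; verify against your service).
let sample = """
{
  "images": [{
    "classifiers": [{
      "classifier_id": "default",
      "name": "default",
      "classes": [
        { "class": "golden retriever", "score": 0.94 },
        { "class": "dog", "score": 0.89 }
      ]
    }]
  }]
}
""".data(using: .utf8)!

struct ClassifyResponse: Codable {
    struct Image: Codable { let classifiers: [Classifier] }
    struct Classifier: Codable { let classes: [ClassResult] }
    struct ClassResult: Codable {
        let className: String
        let score: Double
        // "class" is a Swift keyword, so map it to className
        enum CodingKeys: String, CodingKey {
            case className = "class"
            case score
        }
    }
    let images: [Image]
}

let response = try! JSONDecoder().decode(ClassifyResponse.self, from: sample)
for tag in response.images[0].classifiers[0].classes {
    print("\(tag.className): \(Int((tag.score * 100).rounded()))% confidence")
}
```

In the app, each decoded `ClassResult` becomes one label row: the tag text plus its score.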


Visual Recognition for iOS architecture diagram

  1. The user selects images in the mobile app, which sends them to the Watson Visual Recognition service.
  2. Watson Visual Recognition analyzes the content, using classification to identify scenes, objects, faces, and more. The service's analysis is then returned to the user's mobile app.
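Step 1 boils down to an authenticated HTTP POST from the device to the service. As a sketch, the request can be assembled with Foundation's `URLComponents` and `URLRequest`; the endpoint path, version date, and service URL below are assumptions based on the Visual Recognition v3 REST API, and the credentials are placeholders:

```swift
import Foundation

// Placeholder credentials and endpoint; substitute the values from
// your own service instance (these are assumptions, not real values).
let apiKey = "YOUR_API_KEY"
let serviceURL = "https://gateway.watsonplatform.net/visual-recognition/api"
let version = "2018-03-19"  // an API version date; check the service docs

// Build the classify URL with the required version query parameter
var components = URLComponents(string: "\(serviceURL)/v3/classify")!
components.queryItems = [URLQueryItem(name: "version", value: version)]

var request = URLRequest(url: components.url!)
request.httpMethod = "POST"

// IAM API keys are sent as HTTP basic auth with the username "apikey"
let auth = Data("apikey:\(apiKey)".utf8).base64EncodedString()
request.setValue("Basic \(auth)", forHTTPHeaderField: "Authorization")

// The selected image bytes would go in a multipart/form-data body
// (omitted here); URLSession.shared.dataTask(with: request) sends it,
// and the completion handler receives the JSON analysis from step 2.
```

The Watson Swift SDK wraps this request construction for you; the raw form above is only meant to show what travels over the wire.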


See the detailed instructions in the README file.