In this how-to, you will learn how to integrate IBM Cloud Object Storage and Watson Visual Recognition into an application. Using these services, you can store and analyze images with just a few lines of code.

What is IBM Cloud Object Storage?

Store, manage and access your data via our self-service portal and RESTful APIs. Connect applications directly to Object Storage and integrate IBM Cloud services.

What is the Watson Visual Recognition service?

An IBM Cloud service that can quickly and accurately tag, classify and train visual content using machine learning.

Learning objectives

This guide explains how to:

  • Create a Node.js web app quickly and update it easily.
  • Store and retrieve images in IBM Cloud Object Storage using Node.js.
  • Use the Watson Node SDK to call Visual Recognition APIs.


In order to complete this how-to, you will need an IBM Cloud account.

Estimated time

Allow at least 30 minutes to complete this how-to.


Open a Node.js Web App in IBM Cloud

  1. From the IBM Cloud catalog, go to Cloud Foundry apps and choose SDK for Node.js.


  2. After choosing Node.js, give your app a unique name on the following page, then click the Create button.


Create IBM Cloud services

  1. Create the Watson Visual Recognition service by choosing it from the catalog.



  2. Create the Object Storage service by choosing the storage service from the catalog; select the OpenStack Swift version.



Save credentials

  1. Keep the Watson Visual Recognition and Object Storage credentials handy; we’ll need them in our Node application.
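On Cloud Foundry, bound service credentials are also exposed to the app through the VCAP_SERVICES environment variable, so instead of hard-coding keys you can read them at startup. A minimal sketch (the service labels in the comment are the usual catalog names, but check your own VCAP_SERVICES for the exact keys):

```javascript
// Returns the credentials object of the first bound instance of a service,
// or null if no such service is bound.
function getCredentials(serviceLabel) {
  var vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
  var instances = vcap[serviceLabel];
  return (instances && instances.length) ? instances[0].credentials : null;
}

// e.g. getCredentials('watson_vision_combined') for Visual Recognition,
// and getCredentials('Object-Storage') for Object Storage.
```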



Using the Watson Node SDK

Below is a chunk of code that accepts an argument, filename, saves that file in the Object Storage service, and calls Watson Visual Recognition to analyze it (presumably it is an image). Note that in this example we call visual_recognition.detectFaces, part of the Watson Node SDK, which attempts to classify the image with Watson’s Visual Recognition face-analysis model, which estimates age and gender and can recognize certain celebrity faces. You can replace this call with visual_recognition.classify instead, which can use a custom model.

// objectStorage - an ObjectStorage client created from the credentials shown below
// enter your container name in place of 'container-name'
objectStorage.getContainer('container-name')
  .then(function (container) {
    // container - the specified ObjectStorageContainer fetched from the IBM Object Storage service
    console.log('accessed storage');
    container.createObject(filename, './public/images/' + filename)
      .then(function (object) {
        // object - the ObjectStorageObject that was created
        console.log('pushed ' + filename + ' to Object Storage');
        return container.getObject(filename);
      })
      .then(function (object) {
        // object - the specified ObjectStorageObject that was fetched from the container
        console.log('connected to Object Storage again and took ' + filename);
        var params = {
          images_file: fs.createReadStream('./public/images/' + filename)
        };
        visual_recognition.detectFaces(params, function (err, response) {
          if (err) {
            console.log(err);
          } else {
            var result = JSON.parse(JSON.stringify(response, null, 2)),
                age = result.images[0].faces[0].age,
                gender = result.images[0].faces[0].gender,
                identity = result.images[0].faces[0].identity;

            res.render('results', {age: age, gender: gender, identity: identity, filename: filename});
            //res.send(JSON.stringify(response, null, 2));
          }
        });
      })
      .catch(function (err) {
        // ResourceNotFoundError if the specified object does not exist
        // AuthTokenError if there was a problem refreshing the authentication token
        // ServerError if any unexpected status codes were returned from the request
        console.log(err);
      });
  })
  .catch(function (err) {
    // ResourceNotFoundError if the specified container does not exist
    // AuthTokenError if there was a problem refreshing the authentication token
    // ServerError if any unexpected status codes were returned from the request
    console.log(err);
  });
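To make the response handling above concrete, this is the shape of the JSON that detectFaces returns and how the age, gender, and identity fields are pulled out of it. The values here are illustrative placeholders, not real API output, and identity is only present when Watson recognizes a known face:

```javascript
// Illustrative payload mirroring the fields the handler reads; a real
// detectFaces response contains additional fields such as face_location.
var response = {
  images: [{
    faces: [{
      age: { min: 35, max: 44, score: 0.4 },
      gender: { gender: 'MALE', score: 0.99 },
      identity: { name: 'Some Celebrity', score: 0.9 }
    }]
  }]
};

var face = response.images[0].faces[0];
console.log('age range: ' + face.age.min + '-' + face.age.max);
console.log('gender: ' + face.gender.gender);
console.log('identity: ' + face.identity.name);
```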

We’ll also need to add our credentials to the application, so put the following in your application’s init code and use your own keys.

visual_recognition = watson.visual_recognition({
    api_key: 'api-key', // enter your Watson Visual Recognition API key here
    version: 'v3',
    version_date: '2016-05-20'
});

// object storage setup
// enter your object storage credentials into credentials object
var credentials = {
  projectId: 'project-id',
  userId: 'user-id',
  password: 'password',
  region: ObjectStorage.Region.DALLAS
};

Push your app to IBM Cloud

To push your application to IBM Cloud, do not forget to configure the manifest.yml file: fill in your domain, name, and host information.

applications:
- path: .
  memory: 128M
  instances: 1
  domain: your-domain-address
  name: your-app-name # your app name
  host: your-host # your host name
  disk_quota: 1024M

Open up your terminal, cd into the project’s root directory, and run:

npm install
bx login --sso
bx target -o -s space
bx app push

Sample output

You can see some sample output below.




In this how-to guide, you saw how to create a Node.js app, use the Object Storage service APIs to store uploaded images, and analyze those images with Watson Visual Recognition.