Build a serverless, event-driven application that classifies images in the cloud

Before cloud technologies came along, installing, configuring, and monitoring the infrastructure needed to build an event-driven visual recognition application was challenging and time consuming. Now, with IBM Cloud Functions and IBM Cloud services, it is quick and easy.

In this tutorial, we will build a serverless, event-driven application that classifies images in the cloud. We’ll use these services:

  • IBM Cloud Functions

  • IBM Event Streams

  • Watson Visual Recognition

In our sample app, whenever a message is produced on a specific topic on the Event Streams service instance (by a producer function), it triggers a visual recognition function on IBM Cloud Functions. The message contains the image URL, and the function binds to a Visual Recognition service on IBM Cloud to process the image and return the result. Our visual recognition app incurs no server cost while idle and loses no data, because messages are queued on the Event Streams service and processed asynchronously.

Simple architecture of serverless visual recognition app
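
In brief, the pipeline looks like this:

    producer action → Event Streams topic → Event Streams trigger → classify action → Visual Recognition service → JSON result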

In this architecture:

  1. An image URL is produced to a specific topic on an IBM Cloud Event Streams service instance via a producer function.

  2. Whenever a message arrives on the topic, an Event Streams trigger fires the Cloud Functions classify action.

  3. The classify action sends the image to the Visual Recognition service.

  4. The Visual Recognition service classifies the image and returns the result in JSON format.

Prerequisites

An IBM Cloud account. (Sign up for an IBM Cloud Lite account, a free account that never expires.)

You also need to install these command line tools:

  • The IBM Cloud CLI (the ibmcloud command)

  • The IBM Cloud Functions plug-in (ibmcloud plugin install cloud-functions)

Estimated time

You can complete this tutorial in about 30 minutes.

Steps

Step 1. Create your first serverless function

  1. Log in to your IBM Cloud account.

  2. Go to Manage > Account. Find Cloud Foundry orgs under Account resources. When you click your organization name, you will see your default space and default region. (If you already have multiple spaces, be careful about which space you are working in.) Select this default region for the service instances that we will create in the next steps.

  3. On the IBM Cloud dashboard, from the hamburger menu on the left, select Functions.

  4. Click Actions, and create your first action. You can give your function any name. You can leave the Package and Runtime as they are.

  5. From the code page that opens for your action, click the Invoke button to run the Node.js “Hello World” demo. The results and logs from running the action are displayed on the right side of the page.
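
The generated Node.js action looks roughly like this (the exact template may differ slightly depending on the runtime version):

    // A Cloud Functions action exports a main() function; the object
    // it returns becomes the JSON result of the invocation.
    function main(params) {
        return { message: 'Hello World' };
    }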

Step 2. Log in to your IBM Cloud Account using the command line

Open a terminal window or command prompt, and issue this command:

ibmcloud login

Then, issue this command:

ibmcloud target --cf

To verify that you are logged in to the right organization and namespace, which should contain the demo function that you just created, issue this command:

ibmcloud fn action list

If you have previously invoked Cloud Functions, you can verify that you can see your past activations and any of the activation results by issuing these commands:

ibmcloud fn activation list

ibmcloud fn activation result <Activation ID>
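
You can also invoke your demo action directly from the CLI. Assuming you named the action hello in Step 1 (substitute your own action name), this command blocks until the action completes and prints only its JSON result:

    ibmcloud fn action invoke hello --result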

Step 3. Create the Visual Recognition service instance and bind it to Cloud Functions

  1. In the IBM Cloud Catalog, create a Visual Recognition service. Select the Lite plan and accept the defaults. Take note of the auto-generated service credentials that are created when you create the service instance.

  2. To install the Visual Recognition package into your namespace, use these IBM Cloud Functions commands:

    • Clone the Visual Recognition package repo: git clone https://github.com/watson-developer-cloud/openwhisk-sdk

    • Deploy the package: ibmcloud fn deploy -m openwhisk-sdk/packages/visual-recognition-v3/manifest.yaml

    • Verify that the package is added to your package list: ibmcloud fn package list

  3. To bind the Visual Recognition service credentials to the Cloud Functions actions, you must target the same resource group that contains your Visual Recognition service instance. Use a command like this one:

    ibmcloud fn service bind watson-vision-combined visual-recognition-v3

    The output will be something like this: Credentials 'Auto-generated service credentials' from 'watson-vision-combined' service instance 'VisualRecognition-3m' bound to 'visual-recognition-v3'

  4. Verify that the package is configured with your Visual Recognition service instance credentials.

    ibmcloud fn package get visual-recognition-v3 parameters

    Example output:

    Credentials 'Credentials-1' from 'watson-vision-combined' service instance 'Watson Visual Recognition' bound to 'visual-recognition-v3'.
    

You can also see that a “__bx_creds” parameter has been added to your actions under the visual-recognition-v3 package. For example, if you select the classify action and open its Parameters page in the UI, you should see the bound credentials.
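
The bound credentials typically look something like the following JSON; the watson_vision_combined key and the placeholder values shown here are illustrative, and yours will reflect your own service instance:

    {
        "__bx_creds": {
            "watson_vision_combined": {
                "apikey": "<your-auto-generated-api-key>",
                "url": "https://gateway.watsonplatform.net/visual-recognition/api"
            }
        }
    }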

Step 4. Classify the images

We will be using the classify action in the Visual Recognition package.

Because all Visual Recognition API calls require a version parameter, we must first update the classify action and add a version parameter whose value is the Last Updated date of your Visual Recognition service, which you can see in your Resources list in IBM Cloud.

Note your service’s Last Updated date in YYYY-MM-DD format.
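
Alternatively, you can set the parameter from the command line; the date below is only an example, so substitute your own service’s Last Updated date:

    ibmcloud fn action update visual-recognition-v3/classify --param version 2018-03-19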

  1. In the IBM Cloud Dashboard, go to the Cloud Functions page and select the classify action from the Actions tab.

  2. From the left navigation, click Parameters.

  3. For the version parameter, add the Last Updated date that you noted above.

  4. Before we can specify an image URL as a parameter, we need to update our classify action and add a line of code. From the left navigation, click Code. Then, add the following line under the service = new VisualRecognitionV3(_params); line:

    _params["url"] = _params["source_url"]

    In this tutorial, we use an image URL, but you can also feed in an image file formatted in base64 or as binary data (see the sketch after these steps). Review the Visual Recognition API documentation for more information.

  5. Back on the Parameters page, add a source_url parameter, and specify an image URL. For example, specify this URL: https://cdn.britannica.com/89/149189-050-68D7613E/Bengal-tiger.jpg

  6. Click the Save button to save the action and its parameters.

  7. Back on the Code page, click the Invoke button. In the Activations pane, you will see the classification result in JSON format, similar to the example output shown in Step 8.
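
As noted in the code step above, you can also classify a local image file instead of a URL. Here is a minimal standalone sketch, assuming the watson-developer-cloud Node.js SDK that the openwhisk-sdk package wraps; the version date, file name, and API key are placeholders:

    // Sketch: classify a local image file with the Watson Visual
    // Recognition v3 API via the watson-developer-cloud Node.js SDK.
    const fs = require('fs');
    const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

    const visualRecognition = new VisualRecognitionV3({
        version: '2018-03-19',         // your service's Last Updated date
        iam_apikey: '<your-api-key>'   // from your service credentials
    });

    // Pass a readable stream as images_file instead of a url parameter.
    visualRecognition.classify(
        { images_file: fs.createReadStream('./my-image.jpg') },
        (err, response) => {
            if (err) {
                console.error(err);
            } else {
                console.log(JSON.stringify(response, null, 2));
            }
        }
    );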

Step 5. Create an Event Streams service instance

  1. In the IBM Cloud Dashboard, create an Event Streams instance.

  2. In the Event Streams service instance dashboard, open the Service credentials tab and click New credentials.

    Make a note of your credentials; we will use them in the next step.

  3. From the Manage tab, create a new topic. Specify a name for your topic and keep the other default values.
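
If you prefer the command line, topics can also be managed with the Event Streams CLI plug-in. This is a rough sketch, and the topic name my-vr-topic is only an example:

    ibmcloud plugin install event-streams
    ibmcloud es init
    ibmcloud es topic-create my-vr-topic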

Step 6. Create an Event Streams trigger

  1. In the IBM Cloud Functions dashboard, from the menu on the left, click Triggers.

  2. Select Create > Event Stream. Name your trigger.

  3. Select Input your own credentials and specify the service credentials from the previous step.

  4. Open the dashboard for your new trigger, and add the classify action from the Select Existing tab.

  5. From the menu for the trigger, select Endpoints. Click API-KEY and make a note of the CF-based API key from the page that pops up. Also make a note of the URL for the endpoint.
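
You can also create the trigger from the CLI with the OpenWhisk Message Hub feed and wire it to the classify action with a rule. This is a sketch; the trigger and rule names are examples, and the placeholder parameter values come from the service credentials you noted in Step 5:

    ibmcloud fn trigger create myEventStreamsTrigger \
        --feed /whisk.system/messaging/messageHubFeed \
        --param kafka_brokers_sasl "<kafka_brokers_sasl from your credentials>" \
        --param user "<user>" \
        --param password "<password>" \
        --param kafka_admin_url "<kafka_admin_url>" \
        --param topic "<your topic name>" \
        --param isJSONData true

    ibmcloud fn rule create myRule myEventStreamsTrigger visual-recognition-v3/classify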

Step 7. Create a message producer for the Event Streams trigger

Above, we fed the visual recognition function manually with the source_url parameter. Now, we need to produce a message to our Event Streams topic that contains an image URL, in order to trigger our visual recognition function. While there are many language options, we will use Node.js.

  1. First, delete the source_url parameter that we added manually to the classify action.

  2. Now we can create the producer function that will feed the Event Streams topic we created above. Go to Cloud Functions and select Actions from the menu on the left. Create a new action, select a Node.js runtime, and paste the following code:

      // Producer action: POSTs a message containing the image URL to the
      // Event Streams trigger endpoint, which feeds our topic.
      function main(params) {
          const fetch = require('node-fetch');
          const headers = {
              'Content-Type': 'application/json',
              'Accept': 'application/json',
              // base64APIKEY is the base64-encoded Cloud Functions API key
              'Authorization': 'Basic ' + params.base64APIKEY
          };
          // es_url is the trigger endpoint URL that you noted in Step 6
          const url = params.es_url;
          // Return the promise so the action waits for the request to finish
          return fetch(url, {
              method: 'POST',
              body: JSON.stringify({ source_url: params.image_url }),
              headers: headers
          })
              .then(res => res.json())
              .then(body => {
                  console.log(body);
                  return body;
              });
      }
    
  3. Add these parameters to your message producer action: base64APIKEY, es_url, and image_url. For the base64APIKEY parameter, encode the Event Streams trigger API-KEY that you noted in the previous step into base64 format (see the encoding example after these steps). For the es_url parameter, specify the URL of the Event Streams trigger endpoint. Finally, you can give any image URL you want for the image_url parameter. For example:

    https://cdn.britannica.com/89/149189-050-68D7613E/Bengal-tiger.jpg

  4. Save the action and its parameters.
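
As mentioned above, the API key must be base64-encoded before it is used in the Basic Authorization header. On Linux or macOS, you can encode it like this; the -n flag keeps a trailing newline out of the encoded value:

    echo -n '<API-KEY>' | base64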

Step 8. Test your serverless visual recognition app

Now it is time to test our project. When we produce a message to the topic, the Event Streams trigger should successfully invoke the classify action.

  1. Go to the producer function that we just created and invoke it.

  2. In your terminal window or command prompt, view your activation list by issuing the following command:

    ibmcloud fn activation list

    Your results should show recent activations, including both the producer action and the classify action.

  3. To check the classify action’s results or logs, copy the Activation ID of the classify activation and then issue the following command:

    ibmcloud fn activation result <Activation ID>

Your output should contain the classification result in JSON format.
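
The exact classes and scores depend on your image and service version, but the response typically has a shape like the following; the values shown here are purely illustrative:

    {
        "images": [
            {
                "classifiers": [
                    {
                        "classifier_id": "default",
                        "name": "default",
                        "classes": [
                            { "class": "tiger", "score": 0.97 },
                            { "class": "big cat", "score": 0.89 }
                        ]
                    }
                ],
                "resolved_url": "https://cdn.britannica.com/89/149189-050-68D7613E/Bengal-tiger.jpg",
                "source_url": "https://cdn.britannica.com/89/149189-050-68D7613E/Bengal-tiger.jpg"
            }
        ],
        "images_processed": 1
    }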

Summary

In this tutorial, you quickly and easily created a simple serverless, event-driven visual recognition application using IBM Cloud Functions, IBM Event Streams, and the Visual Recognition service.

If you would like to explore IBM Cloud Functions further, this tech talk can help you learn how to deploy your first serverless app using IBM Cloud Functions. Or, perhaps you are now ready to explore another tutorial that shows you how to use AI services with IBM Cloud Functions. You can also explore the IBM Cloud Functions documentation and the IBM Event Streams documentation for more ideas on how to combine these capabilities in your own solutions.

Kubilay Ceylan