Skill Level: Any Skill Level

This Recipe shows you how to use a Raspberry Pi, a camera and the Watson Visual Recognition service to take a picture and detect whether an object of interest is in that image. Custom classifier training is included.



  • Raspberry Pi 3
  • Micro SD card (minimum 8GB)
  • Pi Camera
  • HDMI cable, keyboard, mouse and monitor for initial setup
  • Optional:
    • Raspberry Pi Button
    • Second Raspberry Pi with a display or other type of output


  • Bluemix/IBM Cloud Account
  • VNC viewer on personal computer


  1. Raspbian and Watson Services


    If the SD card is not already loaded with the latest version of Raspbian, it can be downloaded from the following URL:




    It can then be written to the SD card using software like Rufus or Win32DiskImager. Connect the Raspberry Pi with the SD card to power, a monitor, a mouse and a keyboard, and when it boots up, run the following command in the Terminal:

    sudo raspi-config


    Select Interfacing Options, VNC and then Enable. You can now connect remotely using the IP address of the Pi. The default username and password are ‘pi’ and ‘raspberry’ respectively.



    To set up the camera on the Pi, first connect it as shown in the following image:


    Then run raspi-config on the terminal again, select Interfacing Options, Camera and Enable.



    Before setting up Node-Red, update the system’s package list and upgrade the installed packages using the following two commands:

    sudo apt-get update

    sudo apt-get dist-upgrade


    This will probably take some time. Node-Red should already be installed in the version of Raspbian that you have, but you need to update it to the latest version. You can do that using the following command:

    sudo update-nodejs-and-nodered


    After a few minutes, when this is completed, you can set Node-Red to start automatically when the Pi boots, using this command:

    sudo systemctl enable nodered.service


    You can start and stop Node-Red using the following commands:

    node-red-start

    node-red-stop

    You can access the Node-Red interface from a browser on the Pi by typing localhost:1880, or from any device’s browser on the same network by typing <Pi’s IP Address>:1880

    To install the Watson Services nodes, you have to access the Node-Red interface, select the menu (3 horizontal lines at the top right of the screen) and select manage palettes. Select the install tab and type “node-red-node-watson”. Click on install next to the result that will show up. Wait until they are successfully installed.

    On the left-hand side, in the Node Palette you can now scroll down to a category labelled “IBM Watson”.

  2. Bluemix / IBM Cloud Account

    If you do not have a Bluemix/Cloud account, follow this link and create one:



    Visual Recognition Service

    Once you have an account, log in and click on Catalog at the top right of the page, and from the list on the left click on Watson (or alternatively search for “Visual Recognition” in the search bar).


    You should see something like this. Select Visual Recognition.

    To create the service, choose a name for it, and select your region, organisation and space.



    Select the free plan and click on Create. Once the Service is created and you open it, select ‘Service Credentials’ from the list on the left. If there are no credentials created, select Create Credentials and then View Credentials. Save the API key that is shown there, as you will need it later.


    With Visual Recognition, you can either use the default classifiers and scan what the service identifies to check whether your class of interest is detected, or you can train the service according to your requirements. If you want to create a custom classifier, follow the next part of this step. Otherwise, skip to the Cloud App:


    Custom Classifier / Visual Recognition Training:

    Creating a custom classifier, or training the visual recognition service, is as easy as dragging and dropping images. You can do that by accessing the service in your Bluemix/Cloud Dashboard and then launching the Visual Recognition Tool.

    Let’s say I want to make a classifier that distinguishes between an empty table and a table full of food. This means I want to have 2 classes, one for Food and one for Empty. To start training, I first need to take at least 10 photos of an empty table and 10 photos of a table full of food. The more photos taken, the better for the service; around 50 pictures per class are recommended. In addition to those two classes, I need to take photos for a Negative class. In this case, that means taking a number of pictures of the table when it is neither empty nor full of food. This can include a table with other objects placed on it, and it is useful to have so that the service does not confuse those objects with food.


    Once the pictures are captured and the Visual Recognition Tool is running, first enter the API key of your service, then select Create Classifier.


    You should then have an interface like the following.


    First, compress the pictures for each class into a separate zip file. Then, in the tool, set the classifier name; I named my classifier ‘Fodderbot’. Drag and drop the zip files into the class boxes and name each box. I named mine ‘Food’, ‘Empty’, and the last one is ‘Negative’. Finally, click on Create.

    You have now trained your custom classifier.



    Cloud App

    Now go to your IBM Cloud Dashboard and select Create Resource. From that list, go to Boilerplates and select Internet of Things Platform Starter, or search for it in the search bar.

    Again, select a unique name for this app and select Create. This might take a few minutes.

    Once the app is successfully created, navigate back to your Dashboard and under Services, select your newly created app’s iotf-service. Then click on Launch, as shown below.



    In order to register your device, follow the instructions from the recipe below:


    Do not forget to save the credentials for your device.





  3. Node-Red on Raspberry Pi

    On the Raspberry Pi with the camera (the Camera Bot), I have included a big red button to initiate the image capturing. You can use any type of button you prefer or have lying around, connected to your Pi’s GPIO pins.



    Below you can find the two methods of using the Visual Recognition service. The first one runs the default classifier and scans the returned classes for the word “Food”; the second runs your custom classifier and returns your own set of classes.

    Run Node-Red from any browser


    The Node-Red flow used on the Pi is the following:


    Follow the next steps to create your own flow.

    If you have a button, connect it to your Raspberry Pi and add an rpi gpio input node and a switch node. Double-click the rpi gpio node and set the number of the pin you connected the button to. Connect the switch node to the rpi gpio node as in the picture above, then edit the switch node so that it looks like this:


    This lets the payload pass through only when the input from the button is 1, so when it is pressed.
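    The switch rule above can be sketched in plain JavaScript. This is only an illustration of the logic (in the flow itself the switch node does this without any code), wrapped in a function so it can run standalone; returning null is how a Node-Red function node drops a message:

```javascript
// Sketch of the switch node's rule: only forward the message
// when the GPIO payload is 1, i.e. the button is pressed.
function buttonSwitch(msg) {
    if (msg.payload === 1) {
        return msg;   // forward to the Capture Command node
    }
    return null;      // in Node-Red, returning null drops the message
}
```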


    If you do not have a button, use an inject node instead. Edit the node, select repeat at a specific interval and tick “inject once at start”.


    Grab an Exec node and double click on it to add the following commands:

    raspistill -o /usr/lib/node_modules/node-red/public/campi/imagecaptured.jpg -q 20

    Untick the ‘append payload’ option and name it ‘Capture Command’. This is the command that takes a photo using the Pi Camera and saves it at a specific path. Connect this node to the switch or inject node.

    Connect a Trigger node to the end of the Capture Command and add the following:


    Then, to access the captured image, use a file node and change the filename to the following:


    Change the Output to “a single Buffer object” and the name to “imagecaptured.jpg”.


    To read the file, add a Function node at the end of the file node and name it “Read the JPEG file”. Then add the following code inside:

    msg.headers = {
        "Content-Type": "image/jpeg"
    };

    return msg;


    If you have trained a Custom Classifier, go to step 4. If you are using the default classifier, skip to step 5.

  4. Node-Red with Custom Classifier

    If you have created a custom classifier, add a change Node at the end of the Function node. Add the following information inside, and replace “Fodderbot” with the name of your classifier:



    This tells the visual recognition service to run the custom classifier. Connect a Visual Recognition node at the end of the change node and set, inside it, the API key that you saved earlier. Then change the Detect option to Classify an Image.
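    If you prefer a function node over a change node, the same setting can be made in JavaScript. This is a sketch, wrapped in a function so it runs standalone (in Node-Red you would paste only the body): ‘Fodderbot’ is my classifier name, so replace it with yours, and classifier_ids is assumed to be the msg.params property that the node-red-node-watson visual recognition node reads; check the node’s documentation if your version differs.

```javascript
// Function node body (sketch): point the downstream Visual
// Recognition node at the custom classifier.
// NOTE: "classifier_ids" is assumed to be the msg.params property
// the node-red-node-watson node reads; verify for your version.
function setCustomClassifier(msg) {
    msg.params = {
        classifier_ids: "Fodderbot"   // replace with your classifier name
    };
    return msg;
}
```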



    The complete results can be seen at msg.result, so you can attach a debug node with its output set to msg.result. In order to get the class of the output, attach this Function node at the end of the visual recognition node:



    var foodclass = 0;

    if (msg.result.images[0].classifiers[0].classes[0].class === "Food") {
        foodclass = 1;
    }

    if (foodclass === 1) {
        msg.food = true;
    } else {
        msg.food = false;
    }

    return msg;

    Then attach a switch node to check if msg.food is set to true or false:


    Finally, attach a Watson IoT output node and add the credentials of your registered device that you saved earlier.


    Your flow should look something like this:


  5. Node-Red with Default Classifier

    If you are using the default classifier, attach at the end of your flow the visual recognition node and edit it so that it looks like this:


    The results can be seen at msg.result, so you can attach a debug node and set the Output to msg.result.
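    To make the parsing in the next step concrete, this sketch shows the shape of msg.result that the classify call returns. The class names and scores here are invented sample values, not real service output:

```javascript
// Illustrative shape of msg.result from the Visual Recognition node.
// Class names and scores below are made-up sample values.
var sampleResult = {
    images: [{
        classifiers: [{
            classifier_id: "default",
            classes: [
                { class: "food",  score: 0.93 },
                { class: "table", score: 0.71 }
            ]
        }]
    }]
};

// Extracting the identified class names, as the function nodes
// in this step do:
var names = sampleResult.images[0].classifiers[0].classes.map(
    function (c) { return c.class; }
);
// names is now ["food", "table"]
```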

    In order to run through the list of classes that have been identified and detect if the one you are interested in is one of them, attach a Function Node with the following code in it:


    var i = 0;
    var foodclass = 0;

    while (msg.result.images[0].classifiers[0].classes[i] && foodclass === 0) {
        if (msg.result.images[0].classifiers[0].classes[i].class === "food") {
            foodclass = 1;
        }
        i++;
    }

    if (foodclass === 1) {
        msg.food = true;
    } else {
        msg.food = false;
    }

    return msg;

    At the end of the function node, attach a switch node that passes the message on only when msg.food is true.


    Then add a final Function node that takes all the identified classes and puts them into a list, in case you want to use it:


    var i = 0;
    var Flist = [];
    var cn = 0;

    while (msg.result.images[0].classifiers[0].classes[i]) {
        Flist[cn] = msg.result.images[0].classifiers[0].classes[i].class;
        cn++;
        i++;
    }

    msg.payload = Flist;

    return msg;

    Finally, attach a Watson IoT output node and add the credentials of your registered device that you saved earlier.


  6. Next Steps

    You have a few options regarding the notification and the output of your data. One of them is to use the Cloud/Bluemix app and get notifications such as tweets, emails or SMS, or through a dashboard. Another option is to use another Raspberry Pi and get a notification by connecting the two devices through Node-Red and the IoT service.


    Cloud / Bluemix

    One of the options you have is to send the result to your Cloud / Bluemix app. From there, you can easily send a tweet, an email or an SMS using Twilio, or create a web interface using the Node-Red Dashboard. You can do that by navigating to your app’s Node-Red interface and adding the following nodes.


    Double-click on the IBM IoT node and add the Device ID of the device you registered earlier, as in the following image.


    In the Twitter node, you can add your Twitter credentials so that it sends a tweet whenever the visual recognition service identifies your object of interest. You can do the same in the Twilio and email nodes to be notified by SMS or email.

    If you have basic HTML or CSS knowledge, you can create an interface using the template node of the node-red dashboard.


    Raspberry Pi

    If you have a second Raspberry Pi kit with a form of output, such as an LED or LCD display, follow the same steps as for the other Raspberry Pi.

    After the setup is completed, run Node-Red on the Pi and add an input Watson IoT Node.

    Add your registered device credentials; you can then control your output device using the Exec node, which runs terminal commands.
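    As a sketch of what that can look like, a function node can translate the incoming message into a terminal command for the Exec node. The script name led.py is a made-up placeholder for whatever drives your output device, and the shape of the incoming payload is an assumption; adjust it to whatever your first Pi actually sends. It is wrapped in a function here so it runs standalone:

```javascript
// Function node body (sketch): build a shell command for the Exec
// node from the incoming Watson IoT message.
// "led.py" is a hypothetical script; replace it with whatever
// controls your LED, LCD or other output.
function buildCommand(msg) {
    if (msg.payload && msg.payload.food === true) {
        msg.payload = "python led.py on";    // object of interest detected
    } else {
        msg.payload = "python led.py off";   // nothing of interest
    }
    return msg;
}
```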


    Useful Links

    Node-Red Dashboard: https://github.com/node-red/node-red-dashboard

    Big Dome Push Button: https://www.endlessmountainsolutions.com/products/big-dome-push-button-3-3-volts-or-5-volts

    Grove RGB LCD Backlight: http://wiki.seeed.cc/Grove-LCD_RGB_Backlight/

    SenseHat: https://www.raspberrypi.org/products/sense-hat/

    SenseHat Recipe with Node-Red: https://developer.ibm.com/recipes/tutorials/connecting-a-sense-hat-to-watson-iot-using-node-red/





  7. Completed Demo Photos

    Camera Bot

    Raspberry Pi, Camera and Big Red Button



    Two versions of how I received notifications

    Version 1 – RGB LED display and Toy Missiles


    Version 2 – Dashboard on an iPad with a Matrix code rain animation.




    If you would like help with any of these, feel free to contact me.

9 comments on "Use a Raspberry Pi Camera and Watson Visual Recognition to determine if object of interest is in the image"

  1. AlexanderGoff December 08, 2017

    Hello, great recipe but I haven’t got a Exec node on my pi. Do I have to install that especially?

    • AlexanderGoff December 08, 2017

      Please ignore the above – I have solved it. It is in the advanced tab of nodes

    • AlexanderGoff December 08, 2017

      Hello again, I am struggling in getting the node-red-node-watson installed on my pi so that I may use the library nodes. Currently I do not have the node for visual recognition (or any watson tools). I have tried installing them using npm in the command line and tried using the pallette with node-red. [ https://www.npmjs.com/package/node-red-node-watson ]
      Could anyone suggest why? My best suggestion is the file size is too large but I have a gigaByte free.

      • AlexanderGoff December 11, 2017

        I am unsure to why it continued to fail and freeze when installing – I suspect because I have an old pi. The manage pallette method did not work without freezing. Using the command line-npm method, it did download but was not an option when in node-red. I found that this was because there were multiple location of the node-red files. The ones it was using was in a hidden directory in my root folder ( root/.node-red ). I then installed the file (again using the command line and npm method) and this worked.

  2. Arturo Bugarin December 08, 2017

    Hi Kpits! Amazing Work! Could you please sent me the code of node red? I have some doubts about how to use the flow, thanks 🙂

  3. Please Can I get the flows of Node-RED ?

  4. Hi, I’m a few years late.. but I am trying to complete a similar project. I have followed your instructions and I am able to get the camera working, photo taken in to my directed path, and it will continue to overwrite the photo as I take more pictures. I am having trouble incorporating the visual recognition aspect. I am not sure I am doing it correctly, is there any way I can get a more detailed view?
