John Walicki, Brian Innes | Published May 21, 2019
In this tutorial, we present the high-level steps from our workshop that shows you how to program a Ryze DJI Tello drone by using the APIs in the Tello SDK and Node-RED. Join the open source community, such as the TelloPilots forums, to learn more about programming the Tello.
To classify the pictures that your drone takes, you’ll need to create a Watson Visual Recognition service instance in IBM Cloud. So, you’ll need to create an IBM Cloud account.
You need to have experience using Node-RED, which is an open source, low-code visual programming environment.
To control the drone, you need to connect your computer to the drone wifi access point. So, you need to install and run Node-RED locally, rather than in the cloud.
After you’ve installed Node-RED, you need to install several additional nodes that we will use in this tutorial.
Remember: Be careful when you fly your drone. Fly your drone indoors at your own risk. Also, be respectful of your neighbors and public property when flying your drone outdoors. When you record video and take pictures with your drone, be mindful of other people’s privacy. Obey all FAA regulatory restrictions posted about UAV flight prohibitions.
Completing this tutorial should take about 60 minutes.
You’ll use the Tello drone’s APIs to send commands to the drone in your Node-RED app.
Open Node-RED and import the commands starter flow.
Add inject and change nodes to enable the following drone commands:
When you need to pass a parameter for a command, make sure that you use the property msg.payload.tellovalue so that the format output message node will generate the correct command to send to the drone.
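As a mental model for what the format output message node does, here is a hedged sketch of an equivalent function node. The `msg.payload.tellovalue` property comes from the workshop flow; the assumption that the command name travels in `msg.topic` is illustrative.

```javascript
// Sketch of a function node that formats a Tello SDK command string.
// Assumes msg.topic carries the command name (e.g., "forward") and
// msg.payload.tellovalue carries its optional numeric parameter.
function formatTelloCommand(msg) {
    const command = msg.topic;
    const value = msg.payload && msg.payload.tellovalue;
    // Commands like "takeoff" and "land" take no parameter;
    // commands like "forward 50" append the distance in centimeters.
    msg.payload = (value === undefined || value === null)
        ? command
        : command + " " + value;
    return msg;
}

console.log(formatTelloCommand({ topic: "takeoff", payload: {} }).payload);                 // "takeoff"
console.log(formatTelloCommand({ topic: "forward", payload: { tellovalue: 50 } }).payload); // "forward 50"
```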
Check your work by importing and reviewing the solution flow.
In Node-RED, import the dashboard starter flow.
Add button nodes for the drone commands in step 1.
Launch the Node-RED Dashboard by selecting the Dashboard tab in the right pane and then clicking the launch button.
In this step, you create a Node-RED Dashboard with gauges and charts to display the telemetry data from the drone.
To receive telemetry data from the Tello drone, you need to open port 8890/udp, which is where the drone sends the telemetry data. If you need detailed steps, see the instructions from the workshop. A change node in our Node-RED flow will parse this data into a JSON object.
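The Tello's state datagram is a single semicolon-delimited string of `key:value` pairs. As a sketch of what the change node accomplishes, the following function parses such a string into a JSON object; the sample string mimics the real state format, but the exact fields your drone reports may vary by firmware.

```javascript
// Sketch: parse a Tello state datagram (received on UDP port 8890)
// into a JSON object. A state string looks roughly like:
//   "pitch:0;roll:0;yaw:12;h:10;bat:87;baro:223.82;time:0;"
function parseTelloTelemetry(stateString) {
    const telemetry = {};
    stateString.trim().split(";").forEach(pair => {
        const [key, value] = pair.split(":");
        if (key && value !== undefined) {
            telemetry[key] = Number(value); // Tello state fields are numeric
        }
    });
    return telemetry;
}

const sample = "pitch:0;roll:0;yaw:12;h:10;bat:87;baro:223.82;time:0;";
console.log(parseTelloTelemetry(sample).bat); // 87
```

The resulting object feeds directly into the gauge, chart, and text dashboard nodes in the next steps.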
In Node-RED, import the telemetry data starter flow.
Add gauge nodes to display this telemetry data:
Add a Chart node to display the Drone Height telemetry data.
Add a Text node to display the Flight Time telemetry data.
The dashboard should look like this:
One of the great features of Node-RED is the capability to create subflows. A subflow acts like a function in other programming languages. You can capture an existing flow, or part of a flow, into a subflow, or you can create a subflow from scratch. Subflows appear in the palette and look like any other nodes. You can then drag the subflow nodes from the palette onto the workspace to run the flow within the subflow.
In Node-RED, import the missions starter flow. This flow provides you with all of the subflow nodes for this workshop.
Create a patrol mission for your drone, and include the following tasks:
Make sure that you add sufficient delays into your solution to allow the drone to complete all the moves.
Add a new group to the dashboard, and then create a button to allow you to start the mission from the dashboard.
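Conceptually, a patrol mission is just a sequence of Tello SDK commands with a pause after each one so the drone finishes the move before the next command arrives. In the flow, delay nodes between the subflow nodes provide those pauses; the sketch below shows the same idea in plain JavaScript. The command names are real Tello SDK commands, but this particular mission and its delay values are illustrative.

```javascript
// Hedged sketch of a patrol mission: each command is followed by a delay
// long enough for the drone to complete the move. The mission and delays
// here are illustrative, not a tested flight plan.
const patrolMission = [
    { command: "takeoff",    delayMs: 5000 },
    { command: "forward 50", delayMs: 4000 },
    { command: "cw 90",      delayMs: 3000 }, // rotate clockwise 90 degrees
    { command: "forward 50", delayMs: 4000 },
    { command: "land",       delayMs: 0 },
];

// Walk the mission, calling send(command) and waiting delayMs between steps.
async function flyMission(mission, send) {
    for (const step of mission) {
        send(step.command);
        await new Promise(resolve => setTimeout(resolve, step.delayMs));
    }
}
```

In the Node-RED version, the dashboard button injects a message that triggers the first subflow node, and the delay nodes stand in for `delayMs`.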
The Tello has a low-level protocol that allows you to take pictures. The solution flow sends the take picture command and receives and reconstructs the picture data from the Tello.
Import and review the solution flow.
Launch the Node-RED Dashboard.
In the previous step, the flow simply took pictures with your drone. In this step, we add a Watson Visual Recognition node to the flow and have it classify what is in the image.
But, before we can work on the flow, we have to create a Watson Visual Recognition service instance.
Use an ethernet cable to hardwire your laptop to your network so that it can still reach the internet while its wifi connection is used for the Tello drone.
Log in to IBM Cloud.
Create a Watson Visual Recognition service.
Return to the IBM Cloud Resources dashboard, and click your Watson Visual Recognition instance.
Copy the Watson Visual Recognition API key to your clipboard.
We will use the previously installed node-red-node-watson prerequisite, which provides a collection of Node-RED nodes for IBM Watson services.
In Node-RED, import the classification starter flow.
Edit the visual recognition node (by double-clicking it).
Paste the API key from the clipboard into the node properties.
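Downstream of the visual recognition node, you typically want just the class labels and scores for the dashboard. The sketch below extracts them from a Visual Recognition v3 classify response; the `images` → `classifiers` → `classes` shape follows the v3 API, while the assumption that the node places the response on `msg.result` is illustrative.

```javascript
// Hedged sketch: pull class labels out of a Watson Visual Recognition v3
// classify response so the dashboard can display them.
function topClasses(result, minScore) {
    const classes = [];
    (result.images || []).forEach(image => {
        (image.classifiers || []).forEach(classifier => {
            (classifier.classes || []).forEach(c => {
                if (c.score >= minScore) {
                    classes.push({ label: c.class, score: c.score });
                }
            });
        });
    });
    // Highest-confidence labels first
    return classes.sort((a, b) => b.score - a.score);
}

// Example response shaped like the v3 API documents:
const sampleResult = {
    images: [{ classifiers: [{ classifier_id: "default", classes: [
        { class: "dog", score: 0.94 },
        { class: "animal", score: 0.88 },
        { class: "carnivore", score: 0.52 },
    ]}]}]
};
console.log(topClasses(sampleResult, 0.6).map(c => c.label)); // [ 'dog', 'animal' ]
```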
Launch the Node-RED Dashboard by clicking the Dashboard tab in the right pane and then clicking the launch button. The dashboard shows the results of the image classification.
Now that your Tello drone is taking pictures and classifying images using the default classifier, let’s have the Tello drone fly, take pictures, and classify images using a custom classifier.
Follow the instructions in this tutorial to train a Watson Visual Recognition custom classifier. You might take drone images of balls in your yard, your dog playing, leaves in your gutters, or bird nests in trees. Use anything your Tello drone might see.
After training is completed, in the IBM Cloud dashboard, go to the Project Asset Visual Recognition Model Overview tab. Copy the Model ID to the clipboard.
Turn on your drone, and connect to the drone wifi.
In Node-RED, import the solution flow.
Manually click the TELLO_LOW_LEVEL_CONNECT inject node in the flow.
In the Tello Camera Node-RED Dashboard, paste the Model ID into the dashboard.
Click the button to start taking pictures every 10 seconds.
Click the Take-off button.
We hope you enjoyed learning how to program your Tello drone using its APIs and Node-RED visual programming. Give us feedback if you have suggestions on how to improve this tutorial.