Get the code
View the demo
Try the app
by Va Barbosa, Sanjeev Ghimire | Published April 22, 2019
Artificial intelligence, Deep learning, Machine learning, Cloud
This developer code pattern demonstrates how you can create your own music based on your arm movements in front of a webcam. It uses the Model Asset eXchange (MAX) Human Pose Estimator model and TensorFlow.js.
This code pattern is based on Veremin, but modified to use the Human Pose Estimator model from the Model Asset eXchange (MAX). The Human Pose Estimator model is converted to the TensorFlow.js web-friendly format. It is a deep learning model trained to detect humans and their poses in a given image.
The web application streams video from your web camera, and the Human Pose Estimator model predicts the location of your wrists within each video frame. The application converts those predictions either to tones played in the browser or to MIDI values, which are sent to a connected MIDI device.
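To illustrate the prediction-to-MIDI step, here is a minimal sketch of how normalized wrist coordinates could be mapped to a MIDI note and velocity. The function names, the [0, 1] coordinate convention, and the note range are assumptions for illustration; Veremin's actual mapping may differ.

```javascript
// Hypothetical sketch: map wrist predictions to MIDI values.
// Assumes wrist positions are normalized to [0, 1], with y = 0 at the
// top of the video frame (an assumption, not Veremin's exact convention).

// Clamp a value into the [0, 1] range.
function clamp01(v) {
  return Math.min(1, Math.max(0, v));
}

// Map the left wrist's height to pitch (higher hand = higher note)
// and the right wrist's height to velocity (higher hand = louder).
function wristsToMidi(leftWrist, rightWrist) {
  // MIDI notes 48 (C3) through 84 (C6).
  const note = Math.round(48 + (1 - clamp01(leftWrist.y)) * 36);
  // MIDI velocity 0 through 127.
  const velocity = Math.round((1 - clamp01(rightWrist.y)) * 127);
  return { note, velocity };
}

// Example: left hand near the top of the frame, right hand at mid-frame.
const msg = wristsToMidi({ x: 0.3, y: 0.1 }, { x: 0.7, y: 0.5 });
```

In the browser, a message like this could then be sent to a synthesizer via the Web Audio API, or to external hardware through the Web MIDI API's `MIDIOutput.send()`.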
Get detailed instructions on using this pattern in the README.