By IBM Developer Staff | Published March 15, 2019
Tags: Artificial intelligence, Deep learning, Visual recognition, Facial recognition, Image classification
This model first detects faces in an input image. Each detected face is then passed to an emotion classification model, which predicts the person's emotional state from a set of eight emotion classes: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt. For each face detected in the image, the model outputs a set of bounding box coordinates and a predicted probability for each of the emotion classes. The bounding box format is [ymin, xmin, ymax, xmax], where each coordinate is normalized by the appropriate image dimension (height for y, width for x), so each coordinate lies in the range [0, 1].
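Because the bounding box coordinates are normalized, they must be scaled by the image dimensions before they can be used as pixel positions. The helper below is a minimal sketch of that conversion; the function and variable names are illustrative, not part of the model's API.

```python
def to_pixel_box(box, image_width, image_height):
    """Scale a normalized [ymin, xmin, ymax, xmax] box to integer pixels.

    y-coordinates are scaled by the image height, x-coordinates by the
    image width, matching the normalization described above.
    """
    ymin, xmin, ymax, xmax = box
    return [
        int(round(ymin * image_height)),
        int(round(xmin * image_width)),
        int(round(ymax * image_height)),
        int(round(xmax * image_width)),
    ]

# Example: a face occupying the center of a 640x480 image
print(to_pixel_box([0.25, 0.25, 0.75, 0.75], 640, 480))
# → [120, 160, 360, 480]
```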
The model is based on the Emotion FER+ ONNX Model Repo.
This model can be deployed using either of the following mechanisms:

Run locally with Docker:

docker run -it -p 5000:5000 codait/max-facial-emotion-classifier

Deploy on Kubernetes:

kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Facial-Emotion-Classifier/master/max-facial-emotion-classifier.yaml
Once deployed, you can test or use the model from the command line. For example:
$ curl -F "image=@assets/happy-baby.jpeg" -XPOST http://localhost:5000/model/predict
You should see a JSON response like the one below:
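The snippet below sketches the general shape of such a response and how to pick the top emotion per face. It is inferred from the output description above (a bounding box plus per-class probabilities for each detected face); the field names and probability values are illustrative assumptions, not the exact API schema.

```python
import json

# Hypothetical response shape, based on the model description above.
# Field names ("predictions", "detection_box", "emotion_predictions")
# are assumptions for illustration.
sample_response = json.loads("""
{
  "status": "ok",
  "predictions": [
    {
      "detection_box": [0.22, 0.33, 0.61, 0.58],
      "emotion_predictions": [
        {"label": "happiness", "probability": 0.97},
        {"label": "neutral", "probability": 0.02}
      ]
    }
  ]
}
""")

# Report the most probable emotion for each detected face
for face in sample_response["predictions"]:
    top = max(face["emotion_predictions"], key=lambda e: e["probability"])
    print(top["label"], face["detection_box"])
# → happiness [0.22, 0.33, 0.61, 0.58]
```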
Complete the node-red-contrib-model-asset-exchange module setup instructions and import the facial-emotion-classifier getting started flow.