Facial Recognizer

The model detects faces in an input image and generates an embedding vector for each face. These embeddings can be used for downstream tasks such as classification, clustering, and verification. Given an image, the model returns the bounding box coordinates, detection probability, and embedding vector for each face found. The model is based on the FaceNet model.
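As a minimal sketch of consuming the prediction output described above, the helper below parses a response body with the fields this page documents (`detection_box`, `probability`, `embedding`). The sample values (box coordinates, embedding dimension) are illustrative assumptions, not output from the model.

```python
def parse_faces(response):
    """Return (detection_box, probability, embedding) for each detected face.

    `response` is the decoded JSON body from the /model/predict endpoint;
    field names follow the example response shown on this page.
    """
    if response.get("status") != "ok":
        raise ValueError("prediction failed with status %r" % response.get("status"))
    return [
        (p["detection_box"], p["probability"], p["embedding"])
        for p in response.get("predictions", [])
    ]


# Illustrative response shaped like the documented output; the box
# coordinates and the 512-d embedding here are made-up placeholder values.
sample = {
    "status": "ok",
    "predictions": [
        {
            "detection_box": [0.21, 0.30, 0.62, 0.58],
            "probability": 0.9959015250205994,
            "embedding": [0.1] * 512,
        }
    ],
}

faces = parse_faces(sample)
print(len(faces))  # 1
```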

Model Metadata

| Domain | Application        | Industry | Framework  | Training Data | Input Data Format |
|--------|--------------------|----------|------------|---------------|-------------------|
| Vision | Facial Recognition | Multi    | TensorFlow | VGGFace2      | Image File        |



| Component               | License    | Link         |
|-------------------------|------------|--------------|
| Model GitHub Repository | Apache 2.0 | LICENSE      |
| Model Weights           | MIT        | LICENSE      |
| Model Code (3rd party)  | MIT        | LICENSE      |
| Test assets             | Various    | Asset README |

Options available for deploying this model

This model can be deployed using the following mechanisms:

  • Deploy from Dockerhub:
    docker run -it -p 5000:5000 codait/max-facial-recognizer
  • Deploy on Kubernetes:
    kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Facial-Recognizer/master/max-facial-recognizer.yaml
  • Locally: follow the instructions in the model README on GitHub

Example Usage

Once deployed, you can test the model from the command line. For example, if running locally:

$ curl -F "image=@assets/Lenna.jpg" -XPOST http://localhost:5000/model/predict

You should see a JSON response like the one below (the contents of the `detection_box` and `embedding` arrays are elided here):

  {
    "status": "ok",
    "predictions": [
      {
        "detection_box": [...],
        "probability": 0.9959015250205994,
        "embedding": [...]
      }
    ]
  }
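The embeddings in the response can be compared directly for face verification. The sketch below uses cosine similarity with a decision threshold; both the choice of metric and the threshold value (0.5 here) are assumptions you would tune on your own data, not values prescribed by the model.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def same_person(emb1, emb2, threshold=0.5):
    """Crude verification rule: similarity above a tunable threshold."""
    return cosine_similarity(emb1, emb2) >= threshold


# Identical embeddings have similarity 1.0:
print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```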