Image-to-Image Translation or Transformation
By IBM Developer Staff | Published December 12, 2018
Tags: Artificial intelligence, Deep learning, Vision, Facial Recognition, Image-to-Image Translation or Transformation
This model fills in missing or corrupted parts of an image. It uses a Deep Convolutional Generative Adversarial Network (DCGAN) to fill the missing regions and is trained on the CelebA dataset, so it works best for completing corrupted portions of human faces. The input is an image containing a corrupted face. The OpenFace face recognition tool detects and extracts the corrupted face from the input image. The extracted face is then passed to the OpenFace alignment tool, where it is aligned (inner eyes with bottom lip) and resized to 64 × 64 pixels, producing an input the model can use to complete the corrupted portions. The output is a collage of 20 images in a 4 × 5 grid, showing the intermediate results and the final completed image (bottom right). The model is based on the TensorFlow implementation of DCGAN.
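The MAX asset serves the trained model behind a REST endpoint, but the core idea behind DCGAN-based completion can be sketched in a few lines: hold a trained generator fixed, search the latent space for a sample whose output agrees with the uncorrupted pixels, and paste the generated pixels into the masked region. The toy sketch below illustrates that objective only; the linear stand-in generator, latent size, mask, and random search are illustrative assumptions, not the model's actual training or inference code.

# Toy sketch of completion-by-latent-search, the idea behind DCGAN-based
# inpainting: keep the generator fixed and look for a latent vector z whose
# generated image matches the *uncorrupted* pixels, then fill the masked
# region with the generated pixels. The generator here is a random linear
# stand-in, not the trained DCGAN.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64                      # the model works on 64 x 64 aligned faces
Z_DIM = 100                     # typical DCGAN latent size (assumption)

W_gen = rng.normal(scale=0.1, size=(Z_DIM, H * W))  # stand-in "generator" weights

def generate(z):
    """Stand-in generator: maps a latent vector to a 64 x 64 image."""
    return np.tanh(z @ W_gen).reshape(H, W)

# Fake "corrupted" input: an image with its left half treated as missing.
target = generate(rng.normal(size=Z_DIM))
mask = np.ones((H, W))
mask[:, : W // 2] = 0.0          # 0 = missing pixel, 1 = known pixel

def contextual_loss(z):
    """Squared error on the known (unmasked) pixels only."""
    return float(np.sum(((generate(z) - target) * mask) ** 2))

# Crude random search over z; a real implementation would use gradient
# descent on a contextual plus perceptual (discriminator) loss.
best_z, best_loss = rng.normal(size=Z_DIM), np.inf
for _ in range(2000):
    cand = best_z + rng.normal(scale=0.05, size=Z_DIM)
    loss = contextual_loss(cand)
    if loss < best_loss:
        best_z, best_loss = cand, loss

# Completed image: known pixels from the input, missing pixels from G(z).
completed = mask * target + (1 - mask) * generate(best_z)
print(f"contextual loss after search: {best_loss:.3f}")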
This model can be deployed using the following mechanisms:

Deploy from Docker Hub:

docker run -it -p 5000:5000 codait/max-image-completer

Deploy on Kubernetes:

kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Completer/master/max-image-completer.yaml
Once deployed, you can test the model from the command line. For example:
curl -F "file=@assets/input/test_image.jpg" -X POST "http://localhost:5000/model/predict?mask_type=left" -o result.jpg
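The same request can be sent from Python with the requests library. The endpoint, file path, and mask_type value mirror the curl call above; saving the response body to disk follows from the description of the output as an image collage, so treat this as a sketch rather than a definitive client.

# Minimal Python client mirroring the curl example above (assumes the model
# is running locally on port 5000 and that `requests` is installed).
import requests

url = "http://localhost:5000/model/predict"

with open("assets/input/test_image.jpg", "rb") as f:
    response = requests.post(
        url,
        files={"file": ("test_image.jpg", f, "image/jpeg")},
        params={"mask_type": "left"},   # random, center, left, or grid
    )

response.raise_for_status()

# The prediction is returned as an image collage; save it to disk.
with open("result.jpg", "wb") as out:
    out.write(response.content)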
Acceptable image types are PNG, JPG, and JPEG. Four different mask options are provided, and the selected mask is applied to the image before the completion process runs. The available mask_type options are random, center, left, and grid.
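To make the mask_type options concrete, the snippet below builds plausible 64 × 64 binary masks for each option with NumPy. The exact geometry the service uses (for example, how wide the left band is or the grid spacing) is not documented here, so these shapes are illustrative assumptions.

# Illustrative 64 x 64 binary masks for the four mask_type options
# (1 = keep pixel, 0 = drop pixel). Exact geometries are assumptions.
import numpy as np

SIZE = 64
rng = np.random.default_rng(0)

def make_mask(mask_type: str) -> np.ndarray:
    mask = np.ones((SIZE, SIZE))
    if mask_type == "random":
        mask = (rng.random((SIZE, SIZE)) > 0.8).astype(float)  # keep ~20% of pixels
    elif mask_type == "center":
        lo, hi = SIZE // 4, 3 * SIZE // 4
        mask[lo:hi, lo:hi] = 0.0            # square hole in the middle
    elif mask_type == "left":
        mask[:, : SIZE // 2] = 0.0          # left half removed
    elif mask_type == "grid":
        mask[::4, :] = 0.0                  # regular grid of missing rows and columns
        mask[:, ::4] = 0.0
    else:
        raise ValueError(f"unknown mask_type: {mask_type}")
    return mask

for name in ("random", "center", "left", "grid"):
    m = make_mask(name)
    print(f"{name:>6}: {int(m.sum())} of {SIZE * SIZE} pixels kept")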