Image-to-Image Translation or Transformation
By IBM Developer Staff | Updated September 21, 2018 - Published March 20, 2018
This model generates a new image that mixes the content of an input image with the style of another image. The model is a deep feed-forward convolutional network with a ResNet-based architecture, trained with a perceptual loss function between a dataset of content images and a given style image. It was trained on the COCO 2014 dataset and four different style images. The input to the model is an image, and the output is a stylized image. The model is based on the PyTorch Fast Neural Style Transfer example.
This model can be deployed using the following mechanisms:

Deploy from Docker Hub:
docker run -it -p 5000:5000 codait/max-fast-neural-style-transfer

Deploy on Kubernetes:
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Fast-Neural-Style-Transfer/master/max-fast-neural-style-transfer.yaml
Once deployed, you can test the model from the command line. For example:
curl -F "image=@assets/bridge.jpg" -XPOST "http://localhost:5000/model/predict?model=udnie" > result.jpg && open result.jpg
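The same request can also be made programmatically. Below is a minimal Python sketch using only the standard library; the /model/predict route and the model query parameter are taken from the curl example above, and a server is assumed to be running locally on port 5000:

```python
import urllib.request
import uuid


def predict_url(host="http://localhost:5000", style="udnie"):
    # Endpoint path and query parameter as used in the curl example
    return f"{host}/model/predict?model={style}"


def stylize(image_path, style="udnie", host="http://localhost:5000"):
    """POST an image as multipart/form-data and return the stylized image bytes."""
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    # Build the multipart body by hand to avoid third-party dependencies
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="input.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        predict_url(host, style),
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Example usage (requires a running container):
# with open("result.jpg", "wb") as out:
#     out.write(stylize("assets/bridge.jpg", style="udnie"))
```

The function names and defaults here are illustrative, not part of the model's own tooling; only the endpoint, the image form field, and the udnie style value come from the example above.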