
Fast Neural Style Transfer


This model generates a new image that blends the content of one input image with the style of another. It is a deep feed-forward convolutional network with a ResNet-based architecture, trained with a perceptual loss function computed between a dataset of content images and a given style image. The model was trained on the COCO 2014 dataset and 4 different style images. The input to the model is an image, and the output is a stylized image. The model is based on the PyTorch Fast Neural Style Transfer example.
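The style portion of a perceptual loss is commonly computed from Gram matrices of CNN feature maps, which capture correlations between channels rather than raw pixel values. A minimal NumPy sketch of that computation, using a toy feature map in place of real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width).

    Entry G[i, j] is the inner product of channel i and channel j,
    normalized by the number of spatial positions.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)

# Toy "feature map": 2 channels over a 2x2 spatial grid
feats = np.array([[[1.0, 0.0], [0.0, 1.0]],
                  [[2.0, 2.0], [2.0, 2.0]]])
G = gram_matrix(feats)
print(G)  # [[0.5 1. ] [1.  4. ]]
```

In training, the style term of the loss penalizes the difference between the Gram matrices of the generated image's features and the style image's features at several network layers.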

Model Metadata

Domain | Application    | Industry | Framework | Training Data | Input Data Format
Vision | Style Transfer | General  | PyTorch   | COCO 2014     | Image (RGB/HWC)



Component               | License      | Link
Model GitHub Repository | Apache 2.0   | LICENSE
Model Weights           | BSD-3-Clause | PyTorch Examples LICENSE
Model Code (3rd party)  | BSD-3-Clause | PyTorch Examples LICENSE
Test Assets             | CC0          | Samples README

Options available for deploying this model

This model can be deployed using the following mechanisms:

  • Deploy from Dockerhub:

    docker run -it -p 5000:5000 codait/max-fast-neural-style-transfer
  • Deploy on Red Hat OpenShift:

    Follow the instructions for the OpenShift web console or the OpenShift Container Platform CLI in this tutorial and specify codait/max-fast-neural-style-transfer as the image name.

  • Deploy on Kubernetes:

    kubectl apply -f <deployment .yaml>

    (substitute the Kubernetes deployment .yaml file provided in the model's GitHub repository)

    A more elaborate tutorial on how to deploy this MAX model to production on IBM Cloud can be found here.

  • Locally: follow the instructions in the model README on GitHub

Example Usage

Once deployed, you can test the model from the command line. For example:

curl -F "image=@samples/bridge.jpg" -XPOST "http://localhost:5000/model/predict?model=udnie" > result.jpg && open result.jpg
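The same request can also be issued programmatically. A minimal sketch using the Python requests library, assuming the server is reachable at http://localhost:5000 as in the curl example (build_predict_request is an illustrative helper, not part of the model API):

```python
# Illustrative Python client for the deployed REST endpoint.
# Assumes the server runs at http://localhost:5000, as in the curl example.
import requests

def build_predict_request(image_bytes, filename,
                          model_name="udnie",
                          base_url="http://localhost:5000"):
    """Prepare a multipart POST to /model/predict, mirroring the curl call."""
    req = requests.Request(
        "POST",
        f"{base_url}/model/predict",
        params={"model": model_name},                      # style to apply
        files={"image": (filename, image_bytes, "image/jpeg")},
    )
    return req.prepare()

prepared = build_predict_request(b"<jpeg bytes>", "bridge.jpg")
print(prepared.method, prepared.url)
# To actually send it and save the stylized result:
#   resp = requests.Session().send(prepared)
#   open("result.jpg", "wb").write(resp.content)
```

The response body of a successful request contains the stylized image bytes, matching what the curl example writes to result.jpg.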

Example Result

Resources and Contributions

If you are interested in contributing to the Model Asset Exchange project or have any queries, please follow the instructions here.