Fast Neural Style Transfer

Overview

This model generates a new image that blends the content of an input image with the style of a second, fixed style image. It is a deep feed-forward convolutional network with a ResNet-based architecture, trained with a perceptual loss computed between a dataset of content images and a given style image. The model was trained on the COCO 2014 dataset and 4 different style images. The input to the model is an image, and the output is a stylized image. The model is based on the PyTorch Fast Neural Style Transfer example.
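The perceptual loss mentioned above compares feature statistics from a fixed network rather than raw pixels; its style term matches Gram matrices of feature maps between the stylized output and the style image. A minimal NumPy sketch of that style term (the feature arrays and shapes here are illustrative stand-ins, not the model's actual VGG activations):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map, used as the style target
    in the perceptual loss. Channel-to-channel correlations capture
    texture/style while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

# The style loss averages the squared difference between Gram matrices
# of the output and the style image at several layers of a fixed network
# (random arrays stand in for real feature maps here).
style_feat = np.random.rand(8, 16, 16)
output_feat = np.random.rand(8, 16, 16)
style_loss = np.mean((gram_matrix(output_feat) - gram_matrix(style_feat)) ** 2)
```

At inference time none of this is needed: the trained feed-forward network stylizes an image in a single pass, which is what makes this approach "fast" compared to optimization-based style transfer.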

Model Metadata

Domain | Application    | Industry | Framework | Training Data | Input Data Format
Vision | Style Transfer | General  | PyTorch   | COCO 2014     | Image (RGB/HWC)

Licenses

Component               | License      | Link
Model GitHub Repository | Apache 2.0   | LICENSE
Model Weights           | BSD-3-Clause | PyTorch Examples LICENSE
Model Code (3rd party)  | BSD-3-Clause | PyTorch Examples LICENSE
Test Assets             | CC0          | Asset README

Options available for deploying this model

This model can be deployed using the following mechanisms:

  • Deploy from Docker Hub:
    docker run -it -p 5000:5000 codait/max-fast-neural-style-transfer
    
  • Deploy on Kubernetes:
    kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Fast-Neural-Style-Transfer/master/max-fast-neural-style-transfer.yaml
    
  • Locally: follow the instructions in the model README on GitHub

Example Usage

Once deployed, you can test the model from the command line, selecting one of the trained styles with the model query parameter. For example:

curl -F "image=@assets/bridge.jpg" -XPOST "http://localhost:5000/model/predict?model=udnie" > result.jpg && open result.jpg

(The open command displays the result on macOS; on Linux, use xdg-open instead.)
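The same request can be made from Python using only the standard library. This is a sketch, not part of the official model package: the stylize helper and the assumption that the four trained styles match the PyTorch example's candy, mosaic, rain_princess, and udnie weights are both illustrative.

```python
import urllib.request
import uuid

def predict_url(host="http://localhost:5000", model="udnie"):
    """Build the prediction endpoint URL; `model` selects the style
    (assumed here to be one of candy, mosaic, rain_princess, udnie,
    the styles shipped with the PyTorch example this model is based on)."""
    return f"{host}/model/predict?model={model}"

def stylize(image_path, model="udnie"):
    """POST an image as multipart/form-data (like curl -F "image=@...")
    and return the stylized JPEG bytes. Hypothetical helper; requires a
    running deployment from the Docker step above."""
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as f:
        payload = f.read()
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="image"; filename="{image_path}"\r\n'
        f"Content-Type: image/jpeg\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        predict_url(model=model),
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # e.g. write to result.jpg
```

Usage would mirror the curl command: open("result.jpg", "wb").write(stylize("assets/bridge.jpg", model="udnie")).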

Example Result