Review Text Generator


Overview

This model generates English-language text similar to the text in the Yelp® review data set. The model consists of a recurrent neural network with two LSTM layers trained on the Yelp® review data. The input to the model is a piece of text used to seed the generative model, and the output is a piece of generated text. The model is based on the IBM Code Pattern: Training a Deep Learning Language Model Using Keras and Tensorflow.
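Generation proceeds character by character: at each step the network outputs a probability distribution over its character vocabulary, and the next character is drawn by random sampling rather than by always picking the most likely one. A minimal sketch of that sampling step (the `sample_char` helper and the example distribution are illustrative, not the deployed code):

```python
import random

def sample_char(probs):
    """Draw the next character from a model's output distribution,
    given as a mapping of character -> probability. Because this is a
    random draw rather than an argmax, repeated generations from the
    same seed text produce different continuations."""
    chars, weights = zip(*probs.items())
    return random.choices(chars, weights=weights, k=1)[0]

# A degenerate distribution always yields its single character:
print(sample_char({"a": 1.0}))  # -> a
```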

Model Metadata

| Domain   | Application       | Industry | Framework | Training Data               | Input Data Format |
| -------- | ----------------- | -------- | --------- | --------------------------- | ----------------- |
| Text/NLP | Language Modeling | General  | Keras     | Kaggle Yelp Reviews Dataset | Text              |

Licenses

| Component               | License    | Link         |
| ----------------------- | ---------- | ------------ |
| Model GitHub Repository | Apache 2.0 | LICENSE      |
| Model Weights           | Apache 2.0 | LICENSE      |
| Test Assets             | Custom     | Asset README |

Options available for deploying this model

This model can be deployed using the following mechanisms:

  • Deploy from Docker Hub:

    docker run -it -p 5000:5000 codait/max-review-text-generator
    
  • Deploy on Red Hat OpenShift:

    Follow the instructions for the OpenShift web console or the OpenShift Container Platform CLI in this tutorial and specify codait/max-review-text-generator as the image name.

  • Deploy on Kubernetes:

    kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Review-Text-Generator/master/max-review-text-generator.yaml
    
  • Deploy locally: follow the instructions in the model README on GitHub

Example Usage

Once deployed, you can test the model from the command line. For example:

curl -X POST --header 'Content-Type: application/json' -d '{"seed_text": "heart be still i loved this place. way better than i expected. i had the spicy noodles and they were delicious, flavor great and quality was on point. for desert the sticky rice with mango, i dream about it now. highly recommend if you are in the mood for "}' 'http://localhost:5000/model/predict'
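The same request can also be made from Python using only the standard library. A sketch assuming the default local deployment on port 5000 (the `build_request` and `generate_text` helper names are ours, not part of the API):

```python
import json
from urllib import request

def build_request(seed_text, url="http://localhost:5000/model/predict"):
    """Construct the JSON POST request for the /model/predict endpoint."""
    payload = json.dumps({"seed_text": seed_text}).encode("utf-8")
    return request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate_text(seed_text):
    """Send the seed text to a running deployment and return the
    parsed JSON response."""
    with request.urlopen(build_request(seed_text)) as resp:
        return json.load(resp)

# With a deployment running locally:
# result = generate_text("i had the spicy noodles and ")
# print(result["prediction"]["generated_text"])
```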

You should see a JSON response like the one below. Note, however, that because the character generation step uses random sampling, the generated_text field in your response will differ.

{
  "status": "ok",
  "prediction": {
    "seed_text": "heart be still i loved this place. way better than i expected. i had the spicy noodles and they were delicious, flavor great and quality was on point. for desert the sticky rice with mango, i dream about it now. highly recommend if you are in the mood for ",
    "generated_text": "made to make the coffee is friendly food in breads is delicy dep much to spice good, we went and bee",
    "full_text": "heart be still i loved this place. way better than i expected. i had the spicy noodles and they were delicious, flavor great and quality was on point. for desert the sticky rice with mango, i dream about it now. highly recommend if you are in the mood for made to make the coffee is friendly food in breads is delicy dep much to spice good, we went and bee"
  }
}
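As the response above shows, full_text is simply seed_text followed by generated_text. A client can check the status field and that invariant when unpacking a result; a small sketch (the `extract_generation` helper name is ours):

```python
def extract_generation(response):
    """Return the generated continuation from a /model/predict
    response, checking the status field and the invariant that
    full_text is the seed followed by the generated text."""
    if response.get("status") != "ok":
        raise ValueError("model call failed: %r" % response)
    pred = response["prediction"]
    if pred["full_text"] != pred["seed_text"] + pred["generated_text"]:
        raise ValueError("inconsistent prediction fields")
    return pred["generated_text"]

example = {
    "status": "ok",
    "prediction": {
        "seed_text": "great food ",
        "generated_text": "and service",
        "full_text": "great food and service",
    },
}
print(extract_generation(example))  # -> and service
```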