Audio Sample Generator

Overview

This model generates short audio samples based on an existing dataset of audio clips. It maps the sample space of the input data and generates clips that are “in between” or “combinations of” the dominant features of the sounds. The model architecture is a generative adversarial network (GAN), trained by the IBM CODAIT team on lo-fi instrumental music tracks from the Free Music Archive and short spoken commands from the Speech Commands Dataset. The model can generate 1.5-second audio samples of the words up, down, left, right, stop, and go, as well as lo-fi instrumental music. The model is based on the WaveGAN model.
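
As a conceptual illustration of what “in between” means here: a WaveGAN-style generator maps random latent vectors to waveforms, so interpolating between two latent codes yields sounds that blend the corresponding outputs. The sketch below is illustrative only, since the REST API does not expose latent vectors; generate_waveform is a hypothetical stand-in for the trained generator.

import numpy as np

# Conceptual sketch: WaveGAN-style generators are typically fed latent
# codes drawn uniformly from [-1, 1].
rng = np.random.default_rng(seed=42)

z_a = rng.uniform(-1.0, 1.0, size=100)  # latent code for one sound
z_b = rng.uniform(-1.0, 1.0, size=100)  # latent code for another sound

for alpha in np.linspace(0.0, 1.0, num=5):
    z = (1.0 - alpha) * z_a + alpha * z_b  # linear interpolation in latent space
    # waveform = generate_waveform(z)      # hypothetical call to the trained generator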

Model Metadata

  • Domain: Audio
  • Application: Audio Modeling
  • Industry: General
  • Framework: TensorFlow
  • Training Data: Speech Commands & FMA tracks
  • Input Data Format: WAV Audio Files

References

  • Chris Donahue, Julian McAuley, and Miller Puckette, “Adversarial Audio Synthesis” (the WaveGAN paper), ICLR 2019.

Licenses

  • Model GitHub Repository: Apache 2.0 (LICENSE)
  • Model Weights: Apache 2.0 (LICENSE)
  • Model Code (3rd party): MIT (LICENSE)

Options available for deploying this model

This model can be deployed using the following mechanisms:

  • Deploy from Dockerhub:
    docker run -it -p 5000:5000 codait/max-audio-sample-generator
    
  • Deploy on Kubernetes:
    kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Audio-Sample-Generator/master/max-audio-sample-generator.yaml
    
  • Locally: follow the instructions in the model README on GitHub
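
Whichever deployment route you choose, a quick way to confirm that the server is up is to request the metadata endpoint that MAX model servers expose alongside the prediction endpoint. A minimal sketch in Python, assuming the default local host and port:

import requests

# Fetch the model's metadata (id, name, license, etc.) as a simple health check.
# http://localhost:5000 assumes the default Docker deployment shown above.
resp = requests.get("http://localhost:5000/model/metadata")
resp.raise_for_status()  # raises an exception on a non-2xx status
print(resp.json())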

Example Usage

Once deployed, you can test the model from the command line. For example, the following command will generate a sample from the default model (lo-fi instrumental music):

$ curl -X GET 'http://localhost:5000/model/predict' -H 'accept: audio/wav' > result.wav

This will save the resulting audio file to result.wav, which you can then open in the audio player of your choice.
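
The same request can also be made programmatically. The sketch below assumes a model query parameter for selecting which generator to sample from (a spoken word or the default lo-fi instrumental model); consult the Swagger documentation served at http://localhost:5000 for the exact parameter name and accepted values:

import requests

# Fetch a generated sample and write the raw WAV bytes to disk.
# The "model" query parameter and its value are assumptions based on the
# generators listed in the Overview; omit the parameter for the default model.
resp = requests.get(
    "http://localhost:5000/model/predict",
    params={"model": "up"},
    headers={"accept": "audio/wav"},
)
resp.raise_for_status()

with open("result.wav", "wb") as f:
    f.write(resp.content)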