Audio Sample Generator
By IBM Developer Staff | Published September 21, 2018
This model generates short audio samples based on an existing dataset of audio clips. It maps the sample space of the input data and generates clips that are “in between” or “combinations” of the dominant features of the sounds. The model architecture is a generative adversarial network (GAN), trained by the IBM CODAIT team on lo-fi instrumental music tracks from the Free Music Archive and on short spoken commands from the Speech Commands Dataset. The model can generate 1.5-second audio samples of the words up, down, left, right, stop, and go, as well as lo-fi instrumental music. The architecture is based on the WaveGAN model.
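Conceptually, the generator of a GAN maps a low-dimensional latent vector z to a waveform, and “in between” sounds come from interpolating between latent vectors. The following is a minimal numpy sketch of that idea with a toy stand-in generator; the real WaveGAN generator is a trained convolutional network, the 100-dimensional latent size follows the WaveGAN paper, and the toy tone synthesizer is purely illustrative:

import numpy as np

def G(z):
    # Stand-in for the trained generator network: synthesize 1.5 s of
    # audio whose pitch depends on z, purely for illustration.
    t = np.linspace(0, 1.5, int(1.5 * 16000), endpoint=False)
    freq = 220 + 220 * (z.mean() + 1)  # map latent code to a pitch
    return np.sin(2 * np.pi * freq * t).astype(np.float32)

z_a = np.random.uniform(-1, 1, 100)  # latent code for one sound
z_b = np.random.uniform(-1, 1, 100)  # latent code for another

# Interpolating in latent space yields a smooth morph between the
# two sounds; the endpoints recover the original samples.
for alpha in np.linspace(0, 1, 5):
    z = (1 - alpha) * z_a + alpha * z_b
    wav = G(z)
    print(alpha, wav.shape)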
This model can be deployed using the following mechanisms:

Deploy from Docker Hub:

docker run -it -p 5000:5000 codait/max-audio-sample-generator

Deploy on Kubernetes:

kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Audio-Sample-Generator/master/max-audio-sample-generator.yaml
Once deployed, you can test the model from the command line. For example, the following command will generate a sample from the default model (lo-fi instrumental music):
curl -X GET 'http://localhost:5000/model/predict' -H 'accept: audio/wav' > result.wav
This will save the resulting audio file to result.wav, which you can then open in the audio player of your choice.
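If you would rather call the REST endpoint from code, the Python sketch below does the same thing as the curl command using the requests library. Note that the model query parameter used here to select a generator other than the default is an assumption; consult the Swagger API documentation served by the running container for the exact parameter names and values:

import requests

URL = "http://localhost:5000/model/predict"

# Default model: lo-fi instrumental music
resp = requests.get(URL, headers={"accept": "audio/wav"})
resp.raise_for_status()
with open("result.wav", "wb") as f:
    f.write(resp.content)

# Assumed: a "model" query parameter selects a spoken-word generator
resp = requests.get(URL, params={"model": "up"},
                    headers={"accept": "audio/wav"})
resp.raise_for_status()
with open("up.wav", "wb") as f:
    f.write(resp.content)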