IBM Code Model Asset eXchange (MAX) is a one-stop place for developers to find and use free and open source deep learning models. Since the first release of MAX in early 2018, we have enabled many data scientists and AI developers to easily discover, rate, train, and deploy machine learning and deep learning models in their AI applications. To continue this effort, we are pleased to announce our second batch of model assets along with code patterns that help developers with some handy examples.
These new model assets cover ML/DL areas including audio, image, text recognition, and cancer detection:
- Breast Cancer Mitosis Detector
This model, trained by the IBM CODAIT team on the TUPAC16 auxiliary mitosis dataset, detects the presence of mitoses in breast cancer tumor cells. It is part of a larger model pipeline used to predict tumor proliferation scores on whole-slide images of biopsied tissue.
Possible use case: Automating the diagnosis of cancer severity levels from tumor images to assist with treatment decisions.
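As a rough illustration of how a downstream application might consume this model's output, the sketch below thresholds hypothetical per-patch mitosis probabilities to flag regions for review. The per-patch probability format and the helper name are assumptions for illustration, not the model's documented API:

```python
def flag_suspicious_patches(patch_scores, threshold=0.5):
    """Return the indices of tissue patches whose predicted mitosis
    probability meets or exceeds the decision threshold."""
    return [i for i, score in enumerate(patch_scores) if score >= threshold]

# Hypothetical per-patch probabilities returned for one slide.
scores = [0.02, 0.87, 0.40, 0.93]
print(flag_suspicious_patches(scores, threshold=0.5))  # → [1, 3]
```

In practice the threshold would be tuned against clinical requirements, trading sensitivity against false positives.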
- Audio Embedding Generator
This model extracts features from audio data in the form of embedding vectors, which allow data scientists to build machine learning models that take sounds as input.
Possible use case: The audio embeddings can be used as input for other tasks, such as audio classification or de-duplication.
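For the de-duplication use case, a common approach is to compare embedding vectors with cosine similarity: near-identical clips produce embeddings whose similarity is close to 1.0. A minimal sketch, with made-up embedding values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings for two audio clips; near-duplicates
# score close to 1.0, making them easy to spot.
clip_a = [0.2, 0.8, 0.1]
clip_b = [0.21, 0.79, 0.12]
print(cosine_similarity(clip_a, clip_b) > 0.99)  # → True
```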
- Image Segmenter
This model identifies objects in an image and generates a segmentation map that assigns a predicted object class to each pixel.
Possible use case: Real-time labeling of humans or objects from a video stream.
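A downstream application typically post-processes the per-pixel class map, for example to measure how much of the frame each class occupies. The sketch below counts pixels per class in a tiny hypothetical map; the specific label IDs are assumptions for illustration:

```python
from collections import Counter

def class_pixel_counts(segmentation_map):
    """Count how many pixels were assigned to each predicted class
    in a 2-D segmentation map of class indices."""
    return Counter(pixel for row in segmentation_map for pixel in row)

# Hypothetical 3x3 map: 0 = background, 15 = person (label IDs assumed).
seg_map = [
    [0, 0, 15],
    [0, 15, 15],
    [0, 0, 15],
]
print(class_pixel_counts(seg_map))  # → Counter({0: 5, 15: 4})
```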
- Word Embedding Generator
This model generates embedding vectors for the words in the input text.
Possible use case: The embeddings can be used as input for other NLP tasks, such as sentiment analysis or text summarization.
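One simple way to feed word embeddings into a downstream task such as sentiment analysis is to average them into a single fixed-length sentence vector (a bag-of-embeddings representation). A minimal sketch with made-up 3-dimensional vectors:

```python
def average_embedding(word_vectors):
    """Average a list of word embedding vectors into one fixed-length
    vector -- a simple bag-of-embeddings sentence representation."""
    dim = len(word_vectors[0])
    count = len(word_vectors)
    return [sum(vec[i] for vec in word_vectors) / count for i in range(dim)]

# Hypothetical embeddings for the words of a short sentence.
vectors = [[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]]
print(average_embedding(vectors))  # → [2.0, 1.0, 1.0]
```

The resulting vector can then be passed to any standard classifier; more sophisticated pipelines feed the per-word vectors into a sequence model instead.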
- Image Colorizer
This model, trained by the IBM CODAIT team, converts grayscale images to color.
Possible use case: Adding color to old black-and-white photos.
- Audio Classifier
This model uses the extracted audio features from the Audio Embedding Generator to build a classifier for short sound clips.
Possible use case: A safety application that detects gunshots, break-ins, crashes, and other critical events from their sounds, helping to improve the response time of emergency services.
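A safety application like the one above would typically act on the classifier's most confident predictions. The sketch below selects the top-k labels from a set of hypothetical class probabilities; the label names are assumptions, not the model's actual class list:

```python
def top_k_labels(probabilities, k=3):
    """Return the k class labels with the highest predicted
    probability, most confident first."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:k]]

# Hypothetical class probabilities for one short clip.
preds = {"gunshot": 0.72, "glass_breaking": 0.18, "siren": 0.06, "speech": 0.04}
print(top_k_labels(preds, k=2))  # → ['gunshot', 'glass_breaking']
```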
The IBM CODAIT team also created a code pattern that illustrates how to train and evaluate the audio classifier model, and how to use it to classify audio embeddings on the Deep Learning as a Service platform within IBM Watson Studio.
In addition to the new model assets, we also enhanced some of the existing model assets:
- The following model assets are now trainable on Fabric for Deep Learning (FfDL), an open source deep learning platform that offers frameworks such as TensorFlow, PyTorch, and Caffe as a service on Kubernetes.
- To demonstrate how to use a MAX model in an application, we published a code pattern that shows how simple it can be to create a web app that utilizes a MAX model. The app wraps the Image Caption Generator from MAX in a simple web UI that lets you filter images based on the descriptions given by the model.
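Under the hood, a web app like this simply calls the deployed model's REST prediction endpoint and renders the JSON it returns. The sketch below parses such a response; the exact field names are assumptions modeled on typical MAX prediction output rather than a documented schema:

```python
import json

def extract_captions(response_text):
    """Pull caption strings out of a MAX-style JSON prediction response.
    The response layout assumed here mirrors common MAX models, which
    serve predictions from the deployed container's REST API."""
    payload = json.loads(response_text)
    return [p["caption"] for p in payload.get("predictions", [])]

# Hypothetical response body from a locally running Image Caption Generator.
sample = '{"status": "ok", "predictions": [{"caption": "a dog on grass", "probability": 0.6}]}'
print(extract_captions(sample))  # → ['a dog on grass']
```

The same pattern applies to the other MAX models: post the input to the running container, then unpack the `predictions` array from the JSON response.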
- We’ve also included a mini web app with the MAX Object Detector model that allows users to upload an image, detect objects within the image, and visually interact with the model output. The web app is deployed alongside the model API with no extra steps required from users. We plan to expand on it in the future to create a stand-alone web app demonstrating how to use the model in an application.
Along with these new model assets and code patterns, the IBM CODAIT FfDL team published two new code patterns on using IBM Fabric for Deep Learning:
- Use a Jupyter notebook to integrate the Adversarial Robustness Toolbox into a neural network training pipeline on Fabric for Deep Learning and uncover model vulnerabilities
- Leverage TensorFlow and Fabric for Deep Learning to train and deploy a Fashion MNIST model on Kubernetes
Visit the IBM Code Model Asset eXchange (MAX) site to browse these models and enhancements. We hope you find something that fits your AI development needs, and we welcome your comments and suggestions to help us improve and better serve the ML/DL community.