Expanding the reach of the IBM Model Asset eXchange
Learn about a new batch of models, encompassing audio, natural language processing, and image recognition.
The IBM Model Asset eXchange launched in March 2018 with the goal of enabling developers and data scientists to more easily discover and use state-of-the-art, free and open source deep-learning models in their applications. This is achieved through careful vetting of code and licensing details, together with the creation of standardized microservices that are deployable in a matter of seconds with Docker or Kubernetes (see this blog post for a deep dive into the process behind wrapping models for the Model Asset eXchange).
Year in review
In the past year, the exchange has grown to almost 30 models across a variety of common problem domains ranging from image recognition to natural language processing to time-series analysis. Together with the models themselves, we have focused on creating rich content to help you use these models, whether you’re just getting started with deep-learning capabilities in your applications or are already experienced. This content now includes:
- A series of articles and tutorials that walks through the basics of getting started with the Model Asset eXchange and how to deploy and consume its model microservices
- IBM Developer Code Patterns that illustrate how to integrate the exchange model microservices into an application
- A variety of blog posts ranging from getting started to creating art to teaching a robot new tricks with the Model Asset eXchange
- Conference presentations, booth demos, webcasts, meetups, and workshops related to the Model Asset eXchange
In addition, because everything to do with the Model Asset eXchange is open, we’ve published our base Docker image, Model Asset eXchange starter project, and Model Asset eXchange Framework repos on GitHub for the wider community to use, comment on, and improve upon.
It’s been a busy year!
Now, just as a child often begins to walk at this age and explore a wider world, the Model Asset eXchange is widening its horizons. Together with a batch of new models, the CODAIT team has been hard at work enabling new deployment mechanisms for the exchange at the edge via the new module for Node-RED, which allows you to use Model Asset eXchange model microservices in your Node-RED flows by simply dragging and dropping – no code required! To illustrate this functionality, we have created a detailed tutorial.
We’re always working on adding new and useful models to the exchange. As mentioned, we’re pleased to announce a new batch of models encompassing audio, natural language processing, and image recognition:
- Text Sentiment Classifier — This model detects whether the overall sentiment of short pieces of text is positive or negative. Possible use case: Analysis of sentiment for a given topic on social media.
- Image Resolution Enhancer — Using this model, a blurry low-resolution image can be upscaled by four times, while generating realistic detail that is missing from the original image. Possible use case: Generating higher-quality zoomed-in product images for online shopping sites.
- Facial Emotion Classifier — Predicts the emotion of people in an image. The model detects and locates faces in the image, then predicts the emotion of each face from a set of eight emotional states, including contempt. Possible use case: Automating analysis of audience reactions in various situations.
- Speech to Text Converter — Recognizes speech and converts it to text form (currently in English only). Possible use case: Automated generation of meeting minutes.
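Because every Model Asset eXchange model is wrapped as a standardized REST microservice, consuming any of the models above follows the same pattern: start the container, then POST to its prediction endpoint. The sketch below illustrates this for the Text Sentiment Classifier; the image name, endpoint path, and payload shape are assumptions based on the standardized microservice layout, so check the Swagger documentation that each running microservice serves for the exact contract.

```python
# Minimal sketch of a client for a MAX model microservice REST API.
# Assumes the Text Sentiment Classifier is running locally on port 5000,
# e.g. started with:
#   docker run -p 5000:5000 codait/max-text-sentiment-classifier
import json
from urllib import request


def build_predict_request(texts, base_url="http://localhost:5000"):
    """Build a POST request for the microservice's /model/predict endpoint."""
    body = json.dumps({"text": texts}).encode("utf-8")
    return request.Request(
        base_url + "/model/predict",
        data=body,
        headers={"Content-Type": "application/json"},
    )


def predict_sentiment(texts, base_url="http://localhost:5000"):
    """Send short pieces of text to the running microservice; returns parsed JSON."""
    with request.urlopen(build_predict_request(texts, base_url)) as resp:
        return json.loads(resp.read())


# Example usage (requires the container to be running):
#   predict_sentiment(["I love this!", "This is terrible."])
```

The same `build_predict_request`/`predict_sentiment` pattern works for the other models by swapping the Docker image and, for binary inputs such as images or audio, sending multipart form data instead of JSON.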
In 2019, we’re excited to continue our mission to bring state-of-the-art deep-learning models to the developer community in ways that make this technology easier to deploy and consume. At the same time, for data scientists, we’re working on making these models easier to customize and apply to your own data in a standardized and frictionless manner. We’re also always thinking about ways to further standardize and modularize the way Model Asset eXchange model microservices are put together, in order to make it easier to deploy and consume these models across the platforms most used by developers — edge and IoT devices, the browser, mobile, cloud services, or your own Kubernetes cluster. Finally, the team will be out in the community, physically at events and virtually online, talking about and demonstrating the work we’re doing with the Model Asset eXchange.
Visit the IBM Model Asset eXchange and check out all the latest models and enhancements. We trust you will find the right solution for your AI development use cases — and if you don’t, let us know what you need and let’s work together to bring it to you and other developers! As always, we welcome your comments and suggestions that help us improve and better serve the ML/DL community. Reach out on GitHub or join the Model Asset eXchange Slack channel.