
Use machine learning and deep learning for your CFC submission


Welcome to the third installment in our Call for Code Technology mini-series, where I identify and explore the core technology focus areas within Call for Code. You’ll learn about a technology, how best to use it on IBM Cloud™, and where to find the best resources to fuel your innovation. If you missed any of my other blog posts on building applications for Call for Code, check out my posts on AI and blockchain.

First things first: If you haven’t already done so, accept the Call for Code challenge and join our community.

Here in Part 3, I talk about integrating machine learning into your Call for Code solution. As you’ll find, machine learning overlaps with AI in both content and relevance, but I’ll highlight the similarities and the differences.

Machine learning explained

What is machine learning? I think the IBM machine learning page defines it simply, yet thoroughly: “Machine learning is a form of AI that enables a system to learn from data rather than through explicit programming.” In other words, instead of sitting down and writing a program to handle data, you feed data to a model that learns how best to interpret it. There are four main types of learning: supervised learning, unsupervised learning, reinforcement learning, and deep learning.
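
To make that concrete, here is a minimal sketch of “learning from data rather than explicit programming.” I’m using scikit-learn and made-up feature names purely for illustration; the post itself doesn’t prescribe a framework.

```python
# A minimal sketch of learning from data instead of hand-writing rules.
# scikit-learn and the feature names below are illustrative choices.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sensor readings: [temperature_c, humidity_pct]
readings = [[35, 20], [35, 80], [15, 30], [10, 85], [36, 25], [12, 90]]
# Labels we already know: 1 = high fire risk, 0 = low fire risk
labels = [1, 0, 0, 0, 1, 0]

# Instead of coding if/else rules by hand, we let the model infer them.
model = DecisionTreeClassifier()
model.fit(readings, labels)

# The trained model can now score a reading it has never seen before.
print(model.predict([[33, 22]]))
```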

Supervised learning is when you feed a model data that is already classified or labeled. Because we know what the data being fed to the model represents, we’re just looking for a pattern within it. Unsupervised learning is the opposite: we don’t have labels for the data, but we’re looking to understand what it is and whether we can find any patterns in it.
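
The supervised case looks like the snippet above, where every example arrives with a label. Here is the unsupervised counterpart, a hedged sketch using scikit-learn’s KMeans on made-up, unlabeled records:

```python
# Unsupervised learning sketch: no labels, just structure discovered in the data.
# The feature names and values are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical shelter check-in records: [age, distance_travelled_km]
records = np.array([
    [25, 2], [30, 3], [28, 2],       # one natural group
    [70, 45], [65, 50], [72, 48],    # another natural group
])

# Ask for two clusters -- we never tell the algorithm what they mean.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: a pattern found, not defined
```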

Reinforcement learning is different in that it’s a behavioral learning model. The model improves based on feedback learned during training and through trial and error as it analyzes data. Deep learning is similar to reinforcement learning, but is far more complex. It uses a set of neural networks to understand patterns in complicated, unstructured data. These networks are modeled after the way our brains process information, because it is very hard to train a computer to think and analyze the way humans do.
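
To give a feel for what deep learning looks like in code, here is a tiny neural network built with Keras (one of the frameworks Watson Machine Learning supports, as noted below). The toy data is randomly generated just to keep the sketch short; in practice these networks shine on complicated, unstructured data.

```python
# A tiny deep learning sketch: a stack of neural network layers learns a
# pattern from labeled examples. Keras and the toy data are illustrative.
import numpy as np
from tensorflow import keras

x = np.random.rand(200, 4)                   # 200 samples, 4 features
y = (x.sum(axis=1) > 2.0).astype(int)        # simple binary labels

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

print(model.predict(x[:3]))                  # probabilities between 0 and 1
```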

The power of machine learning on IBM Cloud

Not only can you work with machine learning on IBM Cloud, but we have an entire suite of machine learning/deep learning services that you can use within Watson™ Studio. Watson Studio is a platform for building and training machine learning models as well as preparing and analyzing data — all in a flexible hybrid cloud environment. Watson Studio also provides tools for data scientists, application developers, and subject matter experts to collaborate and easily work with data to build and train models at scale.

Watson Studio provides you with the environment and tools to solve problems by collaboratively working with data. You can choose the tools that you need to analyze and visualize data, to cleanse and shape data, to ingest streaming data, or to create and train machine learning models. For more information on Watson Studio, check out this overview, which includes a video on how to get started.

Get a quick-start intro to machine learning and AI with the full three-part video series on IBM Developer where you’ll learn how to use IBM Watson Studio.

Getting started with machine learning

If you don’t already have an IBM Cloud account, the first step is to sign up, which takes less than two minutes. Just be sure to use a valid email address because you must confirm your email address before you can create any services.

If you want to check out our IBM Watson Machine Learning (WML) offering, you can find it in our IBM Cloud catalog here. WML works with a wide variety of machine learning frameworks, including TensorFlow, Keras, Caffe, PyTorch, and more. You can work with it in numerous ways: via the command line, from a Python application, or through an API. Best of all, it integrates with Watson Studio!
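
If you take the Python route, connecting to WML looks roughly like the sketch below. I’m assuming the ibm-watson-machine-learning client; the URL, API key, and space ID are placeholders, and exact method and field names can differ between client versions, so check the WML documentation for yours.

```python
# A hedged sketch of connecting to Watson Machine Learning from Python.
# Credentials and IDs are placeholders; method names assume the
# ibm-watson-machine-learning client and may vary by version.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # your service region endpoint
    "apikey": "YOUR_IBM_CLOUD_API_KEY",          # from your IBM Cloud account
}

client = APIClient(wml_credentials)
client.set.default_space("YOUR_DEPLOYMENT_SPACE_ID")  # placeholder space ID

# List the models already stored in the space to confirm the connection works.
client.repository.list_models()
```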

One IBM code pattern uses machine learning to help people by analyzing synthesized health data and predicting whether a patient has diabetes. This approach could be adapted to medical data gathered after a natural disaster to determine whether anyone affected might have a disease brought on by that event.
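
In the same spirit, here is a hedged sketch of that kind of tabular prediction task, with synthetic data standing in for the code pattern’s health records:

```python
# Train a classifier on tabular data and check how well it generalizes.
# make_classification stands in for real (or synthesized) health measurements.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 500 synthetic "patients", 8 numeric features, binary outcome (diabetes or not).
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```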

This video on predicting wildfire intensity using NASA satellite data and machine learning is another example of how to help reduce the risks of a natural disaster by alerting people in at-risk areas.

This week, you learned about machine learning and how to implement it with IBM Cloud and the provided code patterns. I’ll be back soon with Part 5 to talk about traffic and weather data that can be incorporated into your Call for Code 2019 submission.

In the meantime, follow me on Twitter or see my work on GitHub.

Tom Markiewicz contributed to this blog.

Additional resources: