This article is part of the Learning path: Get started with Watson Machine Learning Accelerator series.
| Topic | Type |
| --- | --- |
| An introduction to Watson Machine Learning Accelerator | Article |
| Accelerate your deep learning and machine learning | Article + notebook |
| Elastic Distributed Training in Watson Machine Learning Accelerator | Article + notebook |
| Expedite retail price prediction with Watson Machine Learning Accelerator hyperparameter optimization | Tutorial |
| Drive higher GPU utilization and throughput | Tutorial |
| Classify images with Watson Machine Learning Accelerator (Optional) | Article + notebook |
Watson Machine Learning Accelerator is an enterprise AI infrastructure that makes deep learning and machine learning more accessible, bringing the benefits of AI to your business. It covers the complete lifecycle: installation and configuration; data ingest and preparation; building, optimizing, and distributing the training model; and moving the model into production.
In this article, we use a Jupyter Notebook to provide an overview of the data science experience in Watson Machine Learning Accelerator. We'll also show how its robust tooling for data ingestion, hyperparameter tuning, model training, and inference can increase your productivity.
The detailed steps for this article can be found in the associated Jupyter Notebook. Within this notebook, you’ll:
- Upload this notebook to your Watson ML Accelerator environment.
- Download the dataset and model.
- Import the dataset.
- Build the model.
- Tune the hyperparameters.
- Run the training.
- Inspect the training run.
- Create an inference model.
- Test it out.
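To make the flow of these steps concrete, here is a minimal sketch of the same loop (import data, build a model, tune hyperparameters, train, inspect, and run inference) written with scikit-learn. This is an illustration only, not the Watson ML Accelerator API; the actual notebook uses Watson ML Accelerator's own tooling, and the dataset, model, and parameter grid below are placeholders.

```python
# Generic sketch of the notebook's workflow, using scikit-learn as a
# stand-in. The real notebook runs on Watson ML Accelerator tooling;
# the dataset, model, and hyperparameter grid here are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Import the dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Build the model and tune the hyperparameters with a grid search.
search = GridSearchCV(
    SVC(),
    {"C": [1, 10], "gamma": ["scale", 0.001]},
    cv=3)

# Run the training across the grid.
search.fit(X_train, y_train)

# Inspect the training run.
print("best params:", search.best_params_)
print("best cross-validation score:", round(search.best_score_, 3))

# Use the tuned model for inference and test it out on held-out data.
accuracy = search.score(X_test, y_test)
print("test accuracy:", round(accuracy, 3))
```

In Watson ML Accelerator, the hyperparameter search and training are distributed across GPUs for you rather than run serially as in this sketch.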
This article provided an overview of the data science experience in Watson Machine Learning Accelerator and how it helps data scientists accelerate time to results and improve accuracy. The article is part of the Learning path: Get started with Watson Machine Learning Accelerator series. Continue the series with the next article, Elastic Distributed Training in Watson Machine Learning Accelerator.