By Rajesh K. Jeyapaul | Published May 21, 2018 - Updated May 21, 2018
Tags: Artificial Intelligence, Containers, Deep Learning
The flexibility of containers opens up possibilities for any developer, which is why the technology isn’t going anywhere anytime soon. In this how-to, you’ll learn how H2O’s Driverless AI brings artificial intelligence to the enterprise. As an enterprise developer, you can use machine learning to make decisions on your behalf and to automate feature engineering. We’ll use the Driverless AI container image to show you how easy it is to deploy, so that you can experiment with its features. With Driverless AI, you can run thousands of iterations to drive utilization while achieving the best possible performance. Please note that this how-to guide uses Windows as an example.
This guide shows you how to deploy “Driverless AI,” a container image for a Deep Learning (DL) framework, onto the Kubernetes cluster that is provisioned through IBM Cloud. You will perform the following steps.
IBM Cloud-based Kubernetes clusters make it easy to quickly create a registry and publish the image. Developers can also try out various deployment strategies, such as blue-green deployments, and experiment with the cluster architecture.
These steps take approximately 60 minutes in total: about 15 minutes to configure Docker and around 20 minutes to deploy the cluster. I suggest you kick off the Docker download and proceed to cluster deployment in parallel; done that way, both can be completed in 15 to 20 minutes. Publishing the deployed image to the cluster takes another 20 minutes. The rest of the steps are straightforward.
This how-to has four major installation and configuration steps:
Install the container image. (This takes several minutes to complete.)
docker load -i .\driverless-ai-docker-runtime-rel-X.Y.Z.gz
Create the required folder for the driver to function:
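The exact folder-creation command isn’t shown in the original; as a minimal sketch, assuming the host folders mirror the `-v` mounts used in the `docker run` command below:

```shell
# Create the host folders that the docker run command below mounts into
# the container (data, log, license, tmp). Adjust the base path for
# your system; on Windows, create the c:/path_to_* locations instead.
mkdir -p ./data ./log ./license ./tmp
```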
Here, we run Docker via the Windows command. (For other operating systems, please refer back to the Developer Desktops for more instructions.)
docker run --rm -p 12345:12345 -p 54321:54321 -p 9090:9090 -v c:/path_to_data:/data -v c:/path_to_log:/log -v c:/path_to_license:/license -v c:/path_to_tmp:/tmp opsh2oai/h2oai-runtime
For reference, here’s the non-Windows command.
docker run --rm -u $(id -u):$(id -g) -p 12345:12345 -p 9090:9090 -v $(pwd)/data:/data -v $(pwd)/log:/log -v $(pwd)/license:/license -v $(pwd)/tmp:/tmp opsh2oai/h2oai-runtime
Connect to Driverless AI with your browser by opening http://localhost:12345
Load any dataset (for example, one from Kaggle) and validate the H2O.ai driver.
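Before opening the browser, you can optionally confirm from the command line that the server is answering. This is a sketch that assumes the container started above is still running:

```shell
# Optional sanity check: curl the UI port. -f makes curl fail on HTTP
# errors, so "up" prints only when the server actually responds;
# otherwise we report that it is not reachable yet.
curl -fsS -o /dev/null http://localhost:12345 && echo "Driverless AI is up" || echo "not reachable yet"
```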
Log in to IBM Cloud with your IBM ID (set up a new account if you don’t have one).
Go to the catalog page and select Containers.
Create a cluster by clicking the Containers in Kubernetes clusters icon.
NOTE: Deployment takes around 20 minutes. 2 CPUs and 4 GB RAM are automatically provisioned. If you need more than the default 2 CPUs and 4 GB RAM, deploy the Standard cluster.
Access your cluster. (You need to have IBM Cloud CLI and Kubernetes CLI to access the cluster. Instructions are available on the Access page.)
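The Access page walks you through this in detail; as a rough sketch with the 2018-era bx CLI, and assuming the cluster name mycluster used in this guide:

```shell
# Log in to IBM Cloud, then download the kubeconfig for the cluster.
bx login
bx cs cluster-config mycluster
# cluster-config prints an export KUBECONFIG=... line; run that line so
# kubectl targets this cluster, then verify connectivity:
kubectl get nodes
```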
Verify the cluster by using the IBM Cloud CLI command:
bx cs clusters
Name ID State Created Workers Location Version
mycluster 31ad1e21d25742c7af98f70fxxxxyyxy normal 29 minutes ago 1 lon02 1.9.3_1502
Verify the cluster deployment by using the Kubernetes CLI command:
$ kubectl describe nodes
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
default h2oai-6695d7b455-677gz 2 (50%) 2 (50%) 6Gi (40%) 6Gi (40%)
At this stage, your cluster is ready, and you can create a private registry to host a container image. Proceed to the next step of publishing the local Docker image to the cluster.
Here are the steps to publish your local Docker image:
Tag the locally built Docker image for IBM Cloud’s Container Registry by following the steps below:
Create your namespace and tag the image.
Use this naming convention:
docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
In my example, it was:
docker image tag opsh2oai/h2oai-runtime registry.eu-gb.bluemix.net/rajeshyahoo/h2oai
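The namespace-creation command itself isn’t shown above; with the 2018-era bx CLI it looks like the following, where the namespace name is the example one from the tag (pick your own):

```shell
# Create the registry namespace referenced in the image tag.
# "rajeshyahoo" is this guide's example name -- substitute yours.
bx cr namespace-add rajeshyahoo
```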
Validate by using the docker images command.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.eu-gb.bluemix.net/rajeshyahoo/h2oai latest 0c7621568523 3 weeks ago 5.51GB
Now that you created your namespace and tagged the image, it is ready to publish to the cluster registry.
Push the image to the IBM Cloud registry to your namespace.
docker image push registry.eu-gb.bluemix.net/rajeshyahoo/h2oai
NOTE: This command may take more than 30 minutes, since around 2 GB of compressed image data is being pushed to the cluster registry.
NOTE: Run bx cr login if it fails with an authentication error.
bx cr login
Now validate the image push by running bx cr images. (This lists the images.)
bx cr images
Now let’s run, configure, and deploy.
kubectl run h2oai --image=registry.eu-gb.bluemix.net/rajeshyahoo/h2oai
kubectl get pods
At this stage, a pod is created in the Kubernetes cluster.
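Before exec’ing into the pod in the next step, it helps to wait until the pod reports Ready. A sketch, using the deployment name from the kubectl run command above:

```shell
# Block until the h2oai deployment's pod has rolled out and is Ready.
kubectl rollout status deployment/h2oai
# List the pod; with the 2018-era kubectl, `kubectl run` labels the
# pods it creates with run=h2oai.
kubectl get pods -l run=h2oai
```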
kubectl describe nodes
Use the container shell to create a /data folder.
kubectl exec -it h2oai-6695d7b455-677gz -- /bin/bash
And then create the /data folder inside the container:
mkdir /data
Deploy by running the following commands.
kubectl expose deployment/h2oai --type=NodePort --name=h2oai-service --port=12345
You can use the kubectl describe service command to find more information about the service:
kubectl describe service h2oai-service
NOTE: Take note of the NodePort listed in the output of the above command. By using the cluster’s public IP address and the NodePort, you can access the container image.
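Instead of scanning the describe output by eye, you can read the NodePort directly. This is a sketch using jsonpath against the service created above:

```shell
# Extract the NodePort assigned to the service (first port entry).
NODE_PORT=$(kubectl get service h2oai-service -o jsonpath='{.spec.ports[0].nodePort}')
echo "NodePort: $NODE_PORT"
```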
Access the cluster IP address.
$ bx cs workers mycluster
ID Public IP Private IP Machine Type State Status Zone Version
kube-lon02-cr31ad1e21d25742c7af98f70f59423d60-w1 xx.yy.238.155 10.165.58.241 b2c.4x16.encrypted normal Ready lon02 1.9.3_1502
Access the image by using the public IP that is assigned to the cluster and the NodePort. For example, with NodePort 32480, the URL is http://xx.yy.238.155:32480.
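The URL can also be assembled from the CLI output. A sketch, assuming the worker listing format shown above (a single worker, with the public IP in the second column of the first data row):

```shell
# Pull the worker's public IP from the bx output and combine it with
# the NodePort noted earlier to build the Driverless AI URL.
PUBLIC_IP=$(bx cs workers mycluster | awk 'NR==2 {print $2}')
NODE_PORT=$(kubectl get service h2oai-service -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://$PUBLIC_IP:$NODE_PORT"
```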
Congrats! If you are able to access the H2O.ai driver page at this point, then your configuration is complete. You can proceed to evaluate the model.
In this how-to guide, you learned how to run the Driverless AI container image locally with Docker, provision a Kubernetes cluster on IBM Cloud, and publish and deploy the image to that cluster.
After you evaluate your model, you might ask, “What now?” You can explore CODAIT to further enhance your AI capabilities in the enterprise environment. Or, if you’re interested in more AI code, check out our AI Patterns and get coding.