This brief tutorial walks you through how to launch a simple NGINX web server container from a Dockerfile using the Red Hat OpenShift Container Platform (OCP) on the IBM LinuxONE Community Cloud. This tutorial is aimed at folks who may be new to OCP and are interested in a basic demonstration of how it works. In the process, I will point out specific things to note about both LinuxONE and OCP on the LinuxONE Community Cloud.
To complete this tutorial, you need an OCP account on the IBM LinuxONE Community Cloud.
It should take you about 30 minutes to complete this tutorial.
If you haven’t already done so, your first step is to sign up for an account. Also note that the LinuxONE Community Cloud hosts multiple services and is a shared environment with other community members, so resource quotas are in place. Part of this tutorial shows you how to set the quota for your container deployment.
Once you have an account, feel free to log in and look around. When you’re ready, go to the drop-down menu at the top left and select Developer.
Tip: With this environment, you have the use of one project, which is automatically created for you when your account is created. All of your workloads will run within this project.
After you select Developer, you are directed to the Topology page. Since you shouldn't have any workloads running yet, it presents you with several options for adding content to your project.
You will be using From Dockerfile, so select that tile.
Let’s pause for a moment here to talk about architectures. As you may know, LinuxONE is built on the s390x, or “IBM Z,” architecture. This means you can’t just build a container image locally on your laptop and upload it to OCP. You either have to use an s390x environment (like an s390x virtual machine, also available from the LinuxONE Community Cloud) or you have to use OCP itself to build your image. In this tutorial, the “From Dockerfile” option includes the build step, so you will have a custom container image built for you!
Tip: Looking for pre-built containers? Docker Hub has thousands for s390x! When you search for an image, be sure to select the IBM Z checkbox under Architectures. And beware — this is a public repository of images, so make sure you trust the author and do a security review of the image before using it.
Getting back to our tutorial, once you click From Dockerfile you are brought to a screen titled “Import from Dockerfile.” This is where the magic happens!
We have prepared a repository with a simple Dockerfile and some basic configuration files at https://github.com/IBM/nginx-linuxone. Feel free to look around this repo, and if you want to make changes, like updating the text in the index.html that will be deployed, create a public fork of the repository and use your fork in the next step. Note that the configuration has been modified so that NGINX does not run as root, since OCP blocks containers from running as root by default.
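To give a sense of what those non-root changes typically involve, here is a minimal sketch of a Dockerfile adapted for OCP. This is illustrative only; the base image, file names, and paths are assumptions, and the actual repository contents may differ:

```dockerfile
# Illustrative sketch only -- see the repository above for the real Dockerfile.
FROM nginx:latest

# Use a config that listens on an unprivileged port (e.g., 8080), since
# non-root processes cannot bind to ports below 1024.
COPY nginx.conf /etc/nginx/nginx.conf
COPY index.html /usr/share/nginx/html/index.html

# OCP runs containers with an arbitrary non-root UID that belongs to group 0,
# so the directories NGINX writes to must be group-writable.
RUN chgrp -R 0 /var/cache/nginx /var/log/nginx && \
    chmod -R g=u /var/cache/nginx /var/log/nginx

EXPOSE 8080
```

The `chgrp`/`chmod g=u` pattern is a common OpenShift convention: instead of granting ownership to one fixed UID, it makes the writable paths usable by whatever random UID the platform assigns.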
Many of the fields on the “Import from Dockerfile” screen can be left as the default, but we will need to fill in and change a few things:
- Add https://github.com/IBM/nginx-linuxone (or your modified fork of the code) to the Git Repo URL.
- Fill in the Application Name with a name of your choosing. This tutorial will be using “nginx-linuxone-app.”
- Fill in the Name with another name of your choosing. This tutorial will be using “nginx-linuxone.”
Now scroll down to Advanced Options and click on Resource Limits to add the following:
- CPU Request: 200 millicores
- CPU Limit: 300 millicores
- Memory Request: 128 Mi
- Memory Limit: 256 Mi
These are required since you’re working in an environment with quotas.
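Behind the scenes, the values you enter in this form end up in the container's resources stanza of the generated deployment. Roughly, the equivalent YAML looks like this (a sketch, not an exact copy of what OCP generates):

```yaml
resources:
  requests:
    cpu: 200m      # 200 millicores
    memory: 128Mi
  limits:
    cpu: 300m      # 300 millicores
    memory: 256Mi
```

Requests are what the scheduler reserves for your pod; limits are the hard ceiling it is allowed to consume. In a quota-managed environment like this one, every container must declare both.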
Finally, click Create!
Tip: Did you launch any workloads before this tutorial that are still running? If so, you may bump into quota limits. In addition to the resource limits noted above for your deployment, the build step also has resource limits. In this environment, default quotas for builds are set in a global build configuration template, but if you run into trouble, you can edit your build configuration manually to adjust the limits.
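If a build does hit a quota limit, one way to adjust it is to set explicit resource limits on the build itself by editing its BuildConfig (for example, with the command-line tool: `oc edit buildconfig nginx-linuxone`, assuming the Name used earlier in this tutorial). A sketch of the relevant stanza, with illustrative values:

```yaml
spec:
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
```

These values are assumptions for illustration; pick numbers that fit within your project's quota.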
You will be brought back to the Topology screen, but this time you should see your application! Click on it to be brought to a resources page where you can see the status of your build and, ultimately, your deployment.
Once the build is complete and your pod has been launched, you can scroll down to Routes and click on the Location link to see your website up and running!
Congratulations, you have launched your first workload on OpenShift!
From here, I recommend exploring the interface to start learning how everything works. Since this is a small deployment with just a single container and a single application, it's a good way to build your understanding. Look into build and deployment configurations, explore the rules in place to handle network routing, and more. Then do some experiments and see what else you can run!