In Part 2, we showed how to build plugins for Spigot servers in Eclipse and how to test them locally in your Docker installation. Now we’re ready for the next big step—taking the Spigot server that you just developed and deploying it into the cloud in IBM Cloud.

To help you understand how to do that, we need to start with a description of how Docker files are run inside IBM Cloud—and that requires a bit of explanation about the technology that the IBM Cloud Kubernetes Service is built on: Kubernetes.

What is Kubernetes?

In Parts 1 and 2, you saw how Docker provides an environment for running containers. But sometimes (in fact, often), lone containers aren’t enough. When you are implementing more complex applications that are made up of multiple parts—such as web servers, application servers, and databases—you need a way to specify how these parts are related to each other. What’s more, you need a way to deploy them together as a comprehensive application.

The container industry calls this orchestration—the ability to coordinate the deployment and management of several containers together as a unified whole. What's more, there are other issues around running containers as part of a production application that we have not considered. These include questions of scaling (how many copies of a particular server should you run?), load balancing, networking, and more. Kubernetes addresses all of these issues. In order to understand just a bit of what it can do, we need to define some of the terms associated with Kubernetes.

Clusters, pods, services, and deployments

The first Kubernetes concept we need to introduce is the idea of a cluster. A cluster in Kubernetes is a set of machines, or virtual machines, that are all running different parts of Kubernetes. The cluster is split up into a master node that handles the administration of the cluster, and one or more worker nodes that actually run your applications. All of these different nodes run Docker; remember, this is a solution for running Docker across more than one machine!
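
To make this concrete: once you have a cluster and have configured kubectl (the Kubernetes command-line tool, which we set up later in this tutorial), you can list the worker nodes it knows about. This is just an illustrative check; the exact output depends on your cluster:


kubectl get nodes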

The reason we are doing all of this is because we want to create pods. A pod is a set of one or more logically connected containers that you want to run together. (Why is it called a "pod"? A group of whales is referred to as a pod, and Docker's logo image is a whale. All of the whales in a pod stay together and swim in the same direction to the same destination.) For our purposes, the containers in a pod are connected through things like a shared IP address and even shared storage.
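
Once a pod is running (as ours will be by the end of this tutorial), you can see the IP address that its containers share by asking kubectl for the wide form of the pod listing; the pod names and addresses you see will, of course, be your own:


kubectl get pods -o wide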

The last concept you should know before we dive into the code is a deployment. Think of a deployment as a file that describes how to create pods, how to link those pods to services and ReplicaSets, and how to set their desired state. A service is a set of one or more pods that form a cohesive logical unit. A service definition decouples what the client of a particular service (like a Minecraft client!) needs to know from the implementation details of how the pods are put together. Finally, a ReplicaSet is the mechanism that Kubernetes uses to determine how many copies (or replicas) of a pod to run, and what to do if one or more of them fail.
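
As a preview of how these pieces relate, the following command (run after the deployment later in this tutorial has been created) lists the deployment, the ReplicaSet it manages, and the pods that the ReplicaSet keeps running; the resource names in your cluster will differ:


kubectl get deployments,replicasets,pods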

Creating the cluster

So if that’s what Kubernetes gives you, then how do you use it? The first step is to create a new Kubernetes cluster. Now, Kubernetes itself can run on your local desktop (see minikube), but that’s not the approach we’re after here. We want to run our Docker containers in Kubernetes in the IBM Cloud, and to do that we need to use the services provided by the IBM Cloud Kubernetes Service.

The IBM Cloud Kubernetes Service allows you to create your own Kubernetes clusters within IBM Cloud. Now, think back to how we defined the parts of a Kubernetes cluster. A cluster consists of a master node and one or more worker nodes. In the IBM Cloud Kubernetes Service, you can create two types of clusters, which differ in how many worker nodes they can have and in how you are billed for those nodes.

Every account is allowed to create one free lite cluster in the IBM Cloud Kubernetes Service. A lite cluster is a pre-sized, single-worker cluster that is suitable for development and for simple experiments like we're doing with Minecraft. If you want to do more complex development of multi-host applications—or more importantly, deploy applications that are resilient and able to survive the loss of one or more worker nodes—then you need to deploy a standard cluster in the IBM Cloud Kubernetes Service. Doing so is beyond the scope of this tutorial, but it's something that you should definitely learn more about by reading the documentation on the subject. In either case, IBM handles the management side for you by creating and running the master node in the same way. That becomes important a little later in this tutorial.

Let’s begin by creating our free lite cluster. Now, this set of instructions assumes that you followed the installation instructions for the IBM Cloud command-line tools and the IBM Cloud Kubernetes Service plugin that we referenced in Part 1 of this series. It also assumes that you haven’t already created your free Kubernetes lite cluster for development.

To begin, log into Ubuntu OS, open up a terminal, and type the following three commands:

            
bx login -a api.ng.bluemix.net

bx cs init --host https://us-south.containers.bluemix.net

bx cs cluster-create --name Minecraft
                

The first step is to log in to IBM Cloud. You’ll be prompted for your IBM Cloud user ID and password. If you’re using a federated ID (which is common if you have a corporate account for IBM Cloud or if you are an IBM employee), then follow the steps for using a one-time login here.
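
For federated IDs, the usual shortcut (assuming your version of the CLI supports it) is to add the --sso option, which walks you through a one-time passcode in your browser:


bx login -a api.ng.bluemix.net --sso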

The next step initializes the cluster service. This is a one-time step in which you tell IBM Cloud where you want your clusters to physically be created. As of this writing, the options are U.S. South (Dallas), Europe Central (Frankfurt), Europe South (London), and AP-South (Sydney). If you live closer to one of the latter three regions, substitute the appropriate regional endpoint after --host.

The final step creates your cluster. Since we are providing just one option (--name), IBM Cloud creates a lite cluster. You can name the cluster anything you like, but we'll use the name "Minecraft" for the rest of this series.

Note: It can take a while (several minutes or longer) to create a Kubernetes cluster. To determine if your cluster is ready for service, you can issue the following command:


bx cs clusters

The result of running that command on a lite cluster is shown here:


Listing clusters...
OK
Name        ID        State    Created                    Workers   Datacenter
Minecraft   XXXXXXX   normal   2017-07-19T01:36:50+0000   1         hou02

If the "State" result is anything other than "normal" (for instance, "pending"), then you should wait a few minutes and issue the command again. Once your cluster is in the "normal" state, you can move on to the next step, which is to configure your cluster and set up kubectl (the command-line interface for managing Kubernetes clusters) on your local machine to be able to access the cluster.


bx cs cluster-config Minecraft

This command downloads the YML file that allows kubectl on your machine to access the cluster. When you run it, you should see output that looks like this:


Downloading cluster config for Minecraft
OK
The configuration for Minecraft was downloaded successfully. Export environment variables to start using Kubernetes.

export KUBECONFIG=/home/kylebrown/.bluemix/plugins/container-service/clusters/Minecraft/kube-config-hou02-Minecraft.yml

Next, copy the last line of that output (the one starting with export) and execute it at the command line (paste it and press Enter). To verify that you did it right, use the following command:


echo $KUBECONFIG

Then compare what is returned with the value of the KUBECONFIG environment variable shown in the output of the cluster-config command. As a final verification step, run the following command:


kubectl version --short

If you see something like the following—


Client Version: v1.6.4
Server Version: v1.5.6-4+abe34653415733

—then you executed the configuration correctly. If, on the other hand, you see an error message, then try copying and re-exporting the environment variable.

Installing the container registry plugin

The next step is to install the container registry plugin. A container registry is simply a location where you can securely store your images for use by Docker in Kubernetes. The "free" plan for the IBM Cloud container registry has a limit of 512MB, which is just big enough to store the Minecraft image that you are building. To install it, execute the following two commands:


bx plugin install container-registry -r Bluemix
sudo bx cr login

The first command installs the container-registry plugin (just as you installed the Kubernetes cluster service plugin in Part 1) and sets the default registry to the IBM Cloud registry.

The second command logs you into the container registry and allows you to use the space that's set aside in your plan. In this case, you will be using the "free" plan for the container registry. It does have one disadvantage: as noted above, each user is limited to 512MB of storage in the cloud, which is just big enough to store a single small image built on the Ubuntu base image. In fact, you will have to make some changes to your Dockerfile in order to reduce the size of the image to fit. We will cover what those changes are and how they work in a later section.
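
If you want to check how close you are to that 512MB limit before pushing anything, you can look at the size of the image you build later in this tutorial with docker images (the tag shown here matches the one we use below):


sudo docker images registry.ng.bluemix.net/<yourname>/spigot-plugin-bluemix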

Using the container registry

For now, though, the first thing you need to do with the container registry plugin is to create a namespace. A namespace is like a folder for images. In this case, you'll create one that has the same name as your IBM Cloud user name. So if your login to IBM Cloud is "yourname@yourcompany.com," then the name you will use should be just "yourname." Substituting your user name for the angle brackets below, execute the following two commands; the first creates the namespace, and the second lists your namespaces so that you can verify it was created successfully.


bx cr namespace-add <yourname>
bx cr namespaces

Now that you have validated that your namespace was created correctly, you need to build a local image with the right name, or "tag." However, before we do that, let's revisit the modifications we had to make to the Dockerfile in order to reduce its size.

You’re now ready to move on to building and pushing your first example to IBM Cloud. If you haven’t already done so, change directories to the directory spigot-plugin-bluemix.


cd spigot-plugin-bluemix

Now take a look at the Dockerfile that’s found inside that directory by typing:


cat Dockerfile

The contents of the file should match what you see in the listing below. You can also compare it to the Dockerfiles from Part 2 for reference.


#Version 0.0.4
#This version builds a spigot server
#using the recommended build strategy for spigot
#This is advantageous in that it's better for plugin development
#and fits well with the Docker approach
#it also adds a first Minecraft plugin into the bare spigot server
#
FROM ubuntu:16.04
MAINTAINER Kyle Brown "brownkyl@us.ibm.com"
RUN apt-get update && \
    apt-get install -y git && \
    apt-get install -y default-jdk && \
    apt-get install -y wget && \
    mkdir minecraft && \
    wget "https://hub.spigotmc.org//jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar" -O minecraft/BuildTools.jar && \
    git config --global core.autocrlf input && \
    java -jar minecraft/BuildTools.jar --rev 1.12 && \
    rm -r Bukkit && \
    rm -r CraftBukkit && \
    rm -r Spigot && \
    rm -r BuildData && \
    rm -r apache-maven-3.2.5 && \
    rm -r work && \
    rm craftbukkit-1.12.jar && \
    rm -r minecraft && \
    apt-get purge -y --autoremove git wget
RUN echo "eula=true" > eula.txt && \
    mkdir plugins
ADD Tutorial.jar /plugins/Tutorial.jar
CMD java -Xms512m -Xmx1024m -jar spigot-1.12.jar nogui
EXPOSE 25565

The first thing you’ll notice in the file is that we’ve grouped together many of the formerly separate RUN commands in the Dockerfile into one very long RUN command. We’ve done this through the standard UNIX shell script trick of concatenating commands together with the && operator. The second thing you’ll see is that, unlike in Part 2, after we build the spigot jar file we add commands to delete a number of directories with the recursive remove (rm -r) command. Finally, you’ll see that we’ve also removed the git and wget tools that we installed at the beginning of the installation (they were needed by BuildTools.jar).

All of this is done to save space. Combining commands reduces the number of layers in the Docker image. Think of a layer as an immutable virtual file system; you can build another layer on top of it, but you can't change its contents once it is built. A useful analogy is the way a version of source code is stored in a version control system: once you commit a change it's there forever, but you can always make another change on top of it.

With Docker, once a layer is added to the image, any disk space used by that layer is taken up for good. Every Dockerfile command creates a new layer, as you've probably noticed by watching the output of the docker build command. So even if you deleted the working directories used by BuildTools.jar (such as CraftBukkit, Bukkit, and Spigot) in a later layer, you wouldn't save any disk space. Thus, combining the RUN commands into one long command that includes setup, execution, and cleanup is a common Docker best practice for reducing image size.
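
If you want to see this effect for yourself, docker history lists every layer in an image along with the space each one consumes; once you have built the image in the next step, you can run it against the tag we use below:


sudo docker history registry.ng.bluemix.net/<yourname>/spigot-plugin-bluemix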

Now that you’ve seen your new Dockerfile, you’re ready to use it. You can do this by executing a familiar command from the last couple of tutorials, but this time one that uses a longer tag. Note that you’ll preface the tag with the address of the IBM Cloud registry, then add the namespace, and finally the image name that follows the convention we adopted in the previous tutorials.

Execute this command at the command line to proceed—and as always, make sure you don't forget the period (".") at the end!


sudo docker build -t registry.ng.bluemix.net/<yourname>/spigot-plugin-bluemix .

Now that your image is built, you will need to upload or push the image to the IBM Cloud container registry. You do that with the docker push command as shown here:


sudo docker push registry.ng.bluemix.net/<yourname>/spigot-plugin-bluemix

Executing this command can take several minutes, depending on your network speed. To verify that your image was successfully pushed, use the following command and look for your new image name in the list:


bx cr image-list

Understanding configuration files

As we described at the beginning of this tutorial, a deployment is a configuration file that describes how pods are constructed and linked to services. In this case, the file format is YML rather than the Dockerfile format. We won't go into the specifics of exactly how to construct a deployment file; for more information on the file format and the list of elements, refer to the Kubernetes documentation. Instead, read through the file below (named deploy.yml, which is also in the spigot-plugin-bluemix directory), and then we'll highlight a few specific parts of it:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spigot
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spigot
    spec:
      containers:
      - name: spigot
        image: registry.ng.bluemix.net/brownkyl/spigot-plugin-bluemix
---
apiVersion: v1
kind: Service
metadata:
  name: spigot-service
  labels:
    run: spigot
spec:
  selector:
    app: spigot
  type: NodePort
  ports:
   - protocol: TCP
     port: 25565

The most noteworthy thing about this file is that there are two sections, separated by the line "---". The first section describes the deployment itself, which creates a single pod (as shown by replicas: 1) that consists of just our spigot-plugin-bluemix image. Note that the image: line in the listing uses the author's namespace (brownkyl); when you use deploy.yml yourself, edit that line so that it matches the registry path you pushed to, including your own namespace.

The second section describes a service that you are creating to expose the pod to the outside world. In this case, you’re using the absolute simplest type of exposure, called a NodePort. A NodePort is a way of routing requests to a specific port, which in this case is port 25565 (the port that’s exposed by the Docker image).
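
Once the service exists, you can see the node port that Kubernetes has assigned to it with kubectl describe (we use a shorter command, kubectl get svc, later in this tutorial):


kubectl describe service spigot-service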

There are much more resilient and complex ways of routing to the containers in pods. What you are doing here is suitable for your purposes in testing, but for a real production application it has some drawbacks. In particular, if the container fails (as it can easily do!), then there is no mechanism with NodePort for redirecting traffic to another instance of the pod. Likewise, there is no mechanism for routing traffic to two or more instances of the same pod to provide for scalability. These functions are provided by more complex routing options such as an Ingress controller, which is beyond the scope of this series. (If you're interested in learning how to do that, check out the Kubernetes documentation.)

In the meantime, though, you’re finally ready to deploy your first Kubernetes service!

Deploying the pod and service

Now that you’ve seen the contents of the deploy.yml file, you can understand what will be created when you execute that deployment. You can create a deployment by executing the create command in kubectl. Execute the following command at the command line to create your deployment:


kubectl create -f deploy.yml

Now, there are times when creating a deployment may fail for one reason or another. If your deploy.yml file has an error, or if you pushed your image under a name that doesn't match the image name in the deployment file, you may run into issues. If that happens, then you can clean it up with:


kubectl delete -f deploy.yml --all

Then, you can try to create the pod and service again by re-issuing the kubectl create command, as above.
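
If the pod still doesn't come up cleanly, a few standard kubectl commands are useful for diagnosing the problem; substitute the actual pod name reported by kubectl get pods for the placeholder below:


kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>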

Testing the server

You’re almost finished with this experiment! Now that you’ve deployed your pod and created the service to access it, you can find out how to get to it from the outside. NodePort exposes a port on each of the worker nodes in your Kubernetes cluster. In the free tier of the IBM Cloud Kubernetes Service, there is only one worker node, so you should only see one IP address when you issue the following command:


bx cs workers Minecraft

When you run this command, the result should look like this:


Listing cluster workers...
OK
ID                                                 Public IP       Private IP      Machine Type   State    Status
kube-hou02-pa223babd2966d473cb031e7812024ce52-w1   189.172.1.211   10.76.193.152   free           normal   Ready

Make sure to copy down the Public IP address that you see above (for instance, 189.172.1.211). Now that you know the IP address that your Minecraft spigot server is running on, you need to determine what port the NodePort service has redirected to 25565 inside your image. You can find that out by issuing this command at the command line:


kubectl get svc

When you issue this command, you should see a result that looks like this:


NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
kubernetes       10.10.10.1     <none>        443/TCP           14d
spigot-service   10.10.10.217   <nodes>       25565:32225/TCP   8m

As in the previous step, copy down the second PORT (the one after the colon; in this case 32225). Together with the public IP address from the previous step, this gives you the ip:port combination that you will now use inside the Minecraft client.
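
If you'd rather not read the port out of the table by hand, a jsonpath query like the following should print just the node port (assuming the service is named spigot-service, as in our deploy.yml):


kubectl get svc spigot-service -o jsonpath='{.spec.ports[0].nodePort}'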

Testing the client

The process of testing the server is pretty much the same as it was in Parts 1 and 2. You can quickly see if it worked using the “Direct Connect” option in the Minecraft client. Log in to Minecraft, select “Multiplayer,” and then click on the Direct Connect button:

Testing the server with Direct Connect

Once you have done that, Minecraft will pop up a screen asking for your server address. Use the combination of public IP for your worker and the routed port from the previous steps, as shown here:

Providing the server address for Direct Connect

Once you have typed in the address and port, click on Join Server and you should be able to connect to the Minecraft server and see your virtual world!

Conclusion

In this tutorial, you’ve learned the steps necessary to set up a lite Kubernetes cluster using the IBM Cloud Kubernetes Service, push a minimal Minecraft Spigot server to the IBM Container Registry, and then build a deployment that creates a single-instance Kubernetes pod and NodePort service that allows you to connect Minecraft clients to a server running on IBM Cloud.

In Part 4, we will wrap things up by building a more complex plugin for Minecraft that uses the Watson Assistant service to allow you to talk with Watson and gather information about the diseases that are afflicting your Minecraft villagers. Until then, enjoy exploring the things that IBM Cloud gives you for managing and deploying Docker images!