Docker and microservices have become nearly synonymous with the cloud. No conversation about the cloud happens without these terms, and advocates tend to pitch microservices as a remedy for the ailments of large monoliths, as well as a way to catch up with how new software is being developed today. There’s also a lot of confusion and hesitation, especially among enterprises, about moving completely to the cloud with these technologies. In this article, I’ll break down the misconceptions around them: these technologies are very useful in helping both enterprises and individual developers modernize their apps.

First, to understand microservices, we need to understand containers. For the purpose of this article, we’ll use Docker as our container example. So, what exactly are containers? To understand that, we need to understand virtual machines and how containers differ from them.

A brief history of virtual machines

A virtual machine (VM) exhibits the behavior of a separate computer. When we emulate one or more different operating systems on a single physical machine, each emulated system is a virtual machine. And that is what started the cloud movement.

Hypervisors

A hypervisor manages these virtual machines: it creates them and destroys them when we no longer need them. The hypervisor can host multiple guest operating systems (OS) side by side; one can be Linux and another can be Windows, each with its own binaries and libraries for the applications it runs. There are two types of hypervisor: a Type 1 hypervisor runs directly on bare metal, while a Type 2 hypervisor needs a host operating system that provides virtualization services, such as I/O device support and memory management. VMware Workstation and VirtualBox are examples of Type 2 hypervisors.

Disadvantages

VMs do have their disadvantages. Assuming that you’re using a Type 2 hypervisor, you install a full guest OS image, such as Windows, and package your binaries and application on top of it. For every additional guest OS, the whole process repeats. It’s like the movie “Inception” – the layers pile up. Each guest OS adds a lot of overhead, which makes spinning up your VMs a very slow process.

Enter: Containers

To solve these disadvantages, why not make a container that packages only our application and its binaries, while sharing our core, the kernel, with the host? Then you wouldn’t have to install a guest OS at all. The base OS could be Ubuntu, another Linux distribution, or a Mac – the containers on top of it all share its kernel.

I need a container that runs my apps, not a guest OS, and I need to spin those containers up on a container engine. That is the difference between virtual machines and containers.

There are lots of container options to choose from nowadays. One popular option is Docker, and I’ll use it as my example of a container engine in this article. While Docker is the most widely used and recognized container technology, there are other technologies in the same vein, and developers should be able to translate this knowledge to other containers: they all follow a similar concept of images and containers, with some technical differences. Containers are also used by many enterprises that are transitioning towards microservices and DevOps.

Docker architecture

The Docker architecture is broken into three components, all of which are needed to set up your Docker environment:

  1. Docker Client: Communicates with the Docker host by sending it a CLI command that Docker can understand.
  2. Docker Host: This platform executes the request from the CLI Docker Client; the platform can be on your computer or in the cloud.
  3. Docker Registry: Stores Docker images.

Let’s go into more detail

The terminal where you issue commands, telling Docker you need a specific container, is basically the Docker client. When you ask for an image/container, the client goes to the Docker daemon on the Docker host and asks for the relevant image. If the image is available locally, the daemon spins it up; if it is not there, the daemon goes to its registry, finds the image, and downloads it.
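
A quick way to see this client/daemon split for yourself, assuming Docker is installed, is the docker version command, which prints one section for the client CLI and one for the daemon (server) it talks to:

     # Reports a Client section and a Server (daemon) section;
     # if both appear, the CLI can reach the daemon.
     $ docker version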

The Docker registry is where you publish the images you build. Each image is tagged according to its version. (latest is the tag applied by default when you push an image without specifying a version.) It is possible to have multiple versions of, for example, Python on Docker. If you go to hub.docker.com, you’ll find multiple versions of the same image tagged in Docker Hub.
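
For example, tags let you pin an exact version instead of taking whatever latest currently points to. A small sketch using the official Python images on Docker Hub:

     # Pull a specific tagged version of Python...
     $ docker pull python:3.9
     # ...or whatever the repository currently tags as "latest".
     $ docker pull python:latest
     # Both tags now show up as separate local images.
     $ docker images python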

The registry is where the host pulls images from. Images are a frozen copy of your application; when you spin an image up and it gets memory and starts coordinating with the kernel, it becomes a container. Running images are containers. You can make multiple copies of an image and spin them up, which we’ll cover later in the article. The registry is like GitHub for images, and you can set up local registries as well.

Docker host

Now let’s talk about the Docker host: why do we need one? When cloud deployments get complex, it’s not uncommon to run, say, 15 daemons engaging 15 registries. At that point it’s beneficial to swap your single Docker host for Docker Swarm (we will talk about this later), so you can run your containers as a cluster. Multiple Docker hosts can then run and coordinate through Docker Hub, making it easy for you to scale up and down. Note: The client, the Docker daemon, and the registry can all be on the same machine.
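
As a rough sketch of what that clustering looks like with Docker Swarm (the placeholders in angle brackets come from your own setup):

     # Turn this Docker host into a swarm manager.
     $ docker swarm init
     # On each additional host, join the cluster using the token
     # printed by the init command.
     $ docker swarm join --token <worker-token> <manager-ip>:2377
     # Run nginx as a service with three replicas spread across the cluster.
     $ docker service create --name web --replicas 3 -p 80:80 nginx
     # Scale up or down as the load changes.
     $ docker service scale web=5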

Public registries

A common issue with public registries is that there is no guarantee that an image is free of malicious code. The recommended practice is to use images tagged as Docker Official Images. A Node repository with signed code from Docker Official Images, for example, means that no random person can pull the image and push their own version of it.

Private registries

That’s why most companies use private registries: you set up your own repository, be it on your own machine or on a server behind a firewall. For instance, a corporation that wants a registry but doesn’t want it exposed to the public, or even just a group of people, can set up a private registry on a shared server somewhere. In theory, you could host all of this on your own machine, but generally these pieces are kept separate.

Docker does provide open source tooling for running a local registry: you can host a Docker registry on your own hardware using the easy-to-deploy open source code Docker provides for this task.
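
The open source registry is itself shipped as an image, so a minimal local setup is a single docker run; the my-nginx name below is just an example:

     # Run the open source registry image, exposed on port 5000.
     $ docker run -d -p 5000:5000 --name registry registry:2
     # Re-tag a local image so its name points at the local registry...
     $ docker tag nginx localhost:5000/my-nginx
     # ...and push it there instead of to the public Docker Hub.
     $ docker push localhost:5000/my-nginx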

It is fairly easy to configure, but working with a third-party tool can get complex, and the real challenge is that it becomes hard to maintain. People generally assume that all the dependencies they might need will be available on the public Docker Hub; if your private repository also needs to pull from the public Docker Hub, expect some friction that gets resolved over time.

Docker Hub’s free accounts allow users to make a private registry and allow collaborators.

IBM generally recommends using Helm charts for this purpose, and it’s part of our offering. (For more information, visit our introductory labs on using Helm.) We package our software into Helm charts, which are available with all relevant versions to be used with containers.
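
As a representative sketch of the Helm workflow (the repository URL, repo name, and chart name below are placeholders, not a specific IBM offering):

     # Add a chart repository.
     $ helm repo add myrepo https://example.com/charts
     # Install a packaged application from it; the chart pins the
     # container image versions that get deployed.
     $ helm install my-release myrepo/my-app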

Understanding microservices

So now that we’ve covered Docker as a container example, let’s jump into monolithic apps and how they relate to microservices and containers. Monoliths are usually hard to maintain over time: you can’t fix an error in a monolithic app without overhauling the entire application. For example, say you have a very basic airline application with a reservation service, a payment service, and a cancellation service. An error occurs in the payment service and the backend needs to be tested. It can’t be tested unless you stop the database management software and the runtime – that’s because you have everything in one package.

Multi-task with microservices

Microservices are meant to do one thing only, without requiring a lot of context, so you can assign each microservice a particular task. You should be able to shut its containers down and spin them back up easily, without disrupting other workstreams or actions. Microservices are also language agnostic: they don’t tie you to one stack, which saves you a lot of dependency issues.

This cannot be done in a monolithic application: you can’t run a single application that requires five or six different libraries at conflicting versions. Microservices solve multiple dependencies in a complex application very easily, letting you orchestrate multiple runtimes and libraries on one machine. Each component is independently updated and upgraded, so you don’t have to change the overall application, just the pieces you want.
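
As a minimal illustration of that dependency isolation, here are two containers running conflicting runtime versions side by side on one machine (the commands are just examples):

     # Python 3 and Python 2 serving side by side on the same host;
     # each container carries its own runtime and libraries.
     $ docker run -d --name svc-py3 python:3.9 python -m http.server 8000
     $ docker run -d --name svc-py2 python:2.7 python -m SimpleHTTPServer 8000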

This is very important to enterprises because it enables horizontal scaling. And that is good economics: it saves money and makes the most of your hardware.

In vertical scaling, if the load increases, you might have to upgrade machines and servers. Vertical scaling is hard to implement, it eventually fails when the load grows beyond what the hardware can handle, and it is not very agile compared to horizontal scaling.

As an example of horizontal scaling, when you open an Amazon page, a few hundred microservices are invoked at the same time: catalog browsing is one microservice, product availability is another, and so on. When the application detects that load has increased or a particular item is in higher demand, it can spin up more containers and scale horizontally based on that information. An example of vertical scaling, by contrast, would be adding RAM to a machine or upgrading it.
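
In Docker terms, horizontal scaling just means running more container replicas of the busy service, for example:

     # Absorb extra load by spinning up two more copies of the same image;
     # -P maps each container's exposed port to a free host port.
     $ docker run -d -P nginx
     $ docker run -d -P nginx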

Microservices fit nicely with the Docker paradigm: Docker lets you build those microservices and helps you move towards a microservices architecture. With its rising popularity and ease of use, everyone is trying to convert outdated apps to a microservices-based architecture. It’s an important skill to have in your toolkit, and it’s worth learning about now (it’s never too late to start!).

Microservices orchestration

Docker has three tools for microservices orchestration:

  1. Docker Machine: You can ask any cloud vendor for a Docker Machine instance. You provide them a file that says which containers you need, they help you set up a Docker daemon, and you run the command line as if you were somewhere on the cloud.
  2. Docker Compose: You need it when you have one or more containers that support a single use case. It lets you define all your images and containers, along with the ports and aliases through which containers talk to each other (see the sketch after this list). And using Docker Machine, you can run this on the cloud.
  3. Docker Swarm: Allows you to orchestrate on a much larger scale.
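
To make the Compose idea concrete, here is a minimal sketch of a docker-compose.yml for two cooperating services; the service names, image names, and ports are illustrative assumptions, not a prescribed layout:

     # docker-compose.yml (illustrative)
     version: "3"
     services:
       reservations:
         image: myorg/reservations:1.0   # hypothetical image
         ports:
           - "8080:8080"                 # exposed to the host
       payments:
         image: myorg/payments:1.0       # hypothetical image
         depends_on:
           - reservations
         # Compose puts both services on one network, so "payments"
         # can reach the other service at the hostname "reservations".

You could then bring the whole use case up with docker-compose up -d and tear it down with docker-compose down.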

In this many-to-many (m:n) relationship between microservices and containers, a good rule of thumb is one microservice per container. The concept of dynamic scaling is better addressed if you host one microservice per container.

By definition, a microservice needs to have three components:

  • User interface
  • Business logic
  • Data connection

Microservices should also be autonomous with all the libraries available within the container.
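
This self-containment is what a Dockerfile expresses: everything the service needs is baked into its image. Here is a minimal sketch for a hypothetical Python microservice (the file names are assumptions):

     # Pin the runtime the service needs.
     FROM python:3.9-slim
     WORKDIR /app
     # Bake the service's own library dependencies into the image...
     COPY requirements.txt .
     RUN pip install -r requirements.txt
     # ...along with the business logic itself.
     COPY app.py .
     EXPOSE 8000
     CMD ["python", "app.py"]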

Steps to implement microservices with Docker

Each microservice comes with an associated overhead, and this complexity must be dealt with: deployments are more complex, they are sometimes hard to define up front, and there has to be a protocol for the microservices to communicate. This is why DevOps skills are in high demand. Check out our DevOps page for tutorials.

Getting started with Docker

Docker can help ease these issues and is a great companion to microservices, so let’s learn how to get started with Docker and images using CLI.

  1. Download Docker

    As you know, images are stored on a registry.

    On a Mac, use the command: brew install docker

    For Windows, download Docker and double-click InstallDocker.msi to run the installer.

    For Linux, use the commands:

     $ sudo apt-get update
     $ sudo apt-get install docker-ce docker-ce-cli containerd.io

    Once Docker is up and running, let’s go to step 2.

  2. Check the status of any container(s)

    Use command: docker ps

     zohwas-mbp:~ zohwakarim$ docker ps
     CONTAINER ID       IMAGE         COMMAND     CREATED      STATUS         PORTS         NAMES
    

    You will see that no images have been spun up and no container is up and running. Let’s go spin up an image.

  3. Spinning a container

    Use the command: docker run nginx

     Zohwas-Macbook-Pro:~ zohwakarim$ docker run nginx
     Unable to find image 'nginx:latest' locally
     latest: Pulling from library/nginx
     f7e2b704e: Pull complete
     08dd01e3f3c: Pull complete
     d9ef3a1eb792: Pull complete
     Digest: sha256:98efe605f6123559fnf0428475fnfhw98408072b208hdwru49863gufg
     Status: Downloaded newer image for nginx:latest
    

    Nginx is a web server. This command from my client terminal asks the Docker daemon to run nginx on my machine. The Docker client contacted the Docker daemon to check whether the image was already available locally; since it wasn’t, the daemon went to the registry, downloaded the image, and then spun it up.

  4. Show containers running on Docker

    Open another terminal to inspect all containers running on Docker. For each one you’ll see the container ID, image name, command, creation time, status, port, and name (Docker auto-generates a funny name).

    Use command: docker ps -a

     zohwas-mbp:~ zohwakarim$ docker ps -a

     CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS    NAMES
     2182006d1960   nginx   "nginx -g 'daemon off…"  4 minutes ago    Up 4 minutes    80/tcp   cocky_chaplygin
     b421d65ae652   nginx   "nginx -g 'daemon off…"  22 seconds ago   Up 20 seconds   80/tcp   gifted_hermann
    

    Check port 80 in a web browser to see if the image is running. If you get an error, open the Dockerfile in the nginx repository and check which port needs to be exposed. By default, the port is 80, as you can see above.

    The issue is that our OS wasn’t told that the container’s port 80 should be accessible from a specific host port. We have to expose it and map it to a local port.
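
    If you’d rather choose the host port yourself, the -p flag maps it explicitly; the -P flag used in the next step picks a random free host port instead:

     # Map host port 8080 to the container's exposed port 80 explicitly.
     $ docker run -p 8080:80 nginx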

  5. Mapping to a local port

    Kill the container by pressing Ctrl+C.

    Confirm by command: docker ps

    Now issue the command: docker run -P nginx

    This command tells Docker to map the image’s exposed port to any available port on the host machine.

    Confirm by command: docker ps

    The container ID will change because this is a new container, and the host port has changed too. Check localhost on the mapped port and you will see the welcome page of the nginx image. Congratulations on spinning up your first container!

  6. Spin another container by using an existing image

    If you type the command docker run -P nginx again, you can spin up another container.

    Check with the command docker ps again.

    You will notice that the container’s port 80 is mapped to a new available port on the host machine. Containers are self-contained, so it doesn’t matter which port on the host machine they’re running on. This is how you spin up a container using an existing image.

  7. Execute commands in container

    This step is for when you don’t want to see the container in the web browser but you still want to go into the container’s environment and execute commands. This is generally used for debugging, for example to look at logs as they are being generated.

    By default, when you run your container, it issues the image’s default command to start the service. But you can override that and execute a particular command instead: run the bash shell from the bin directory in an interactive terminal. Note that the prompt will change to the container’s root user.

    Run command: docker run -it nginx /bin/bash

    The -it flags are short for interactive terminal: -i keeps input open and -t allocates a terminal, so I can type commands and get responses inside the container’s environment.

    Ask: whoami

    Ask: ls (list all the folders)

    Ask: hostname

    Here’s what it should look like:

     zohwas-mbp:~ zohwakarim$ docker run -it nginx /bin/bash
     root@101d350c1f68:/#
    
     Ask: whoami
     root@101d350c1f68:/# whoami
     root

     Ask: ls
     bin   boot   dev   etc   home   lib   lib64   media   mnt   opt   proc   root   run   sbin   srv
     root@101d350c1f68:/#

     Ask: hostname
     root@101d350c1f68:/# hostname
     101d350c1f68
     root@101d350c1f68:/#
    

    The hostname is the same as the container ID. To go back to your own machine, type exit:

     Do: exit
     root@101d350c1f68:/# exit
     exit
     zohwas-mbp:~ zohwakarim$
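
    Note that docker run -it starts a brand new container just to give you a shell. To open a shell inside a container that is already running, such as the one from step 5, docker exec is the usual tool:

     # Open an interactive shell in an already-running container,
     # identified by its ID or its auto-generated name.
     $ docker exec -it cocky_chaplygin /bin/bash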
    
  8. Stop the container.

    If you want to stop the container, use the command: docker stop (container ID or name)

     zohwas-mbp:~ zohwakarim$ docker stop 101d350c1f68
     101d350c1f68
    

    The container has stopped, but all of its files are still there – they have not been deleted or removed from disk.

    Run command: docker ps

     zohwas-mbp:~ zohwakarim$ docker ps
     CONTAINER ID       IMAGE         COMMAND     CREATED      STATUS         PORTS         NAMES
    

    As you can see, the container has been stopped but not removed.

  9. Delete the container.

    If you want to delete your container, run the command: docker rm (name or ID)

     zohwas-mbp:~ zohwakarim$ docker rm 101d350c1f68
     101d350c1f68
     zohwas-mbp:~ zohwakarim$
    

    Note that removing a container does not remove its image. Images are frozen copies pulled from the repository with their versions tagged; when you spin them up, they become containers.

  10. Delete images.

    When you want to delete an image, first list your images by running the command: docker images

    zohwas-mbp:~ zohwakarim$ docker images
    REPOSITORY       TAG         IMAGE ID           CREATED          SIZE
    nginx            latest      881bd08c0b08       3 weeks ago      109MB
    

    Run command: docker images | grep nginx (to see only images with name nginx)

    As you can see, I only have one image downloaded.

    To remove the image, run the command: docker rmi -f nginx

    zohwas-mbp:~ zohwakarim$ docker rmi -f nginx
    Untagged: nginx:latest
    Untagged: nginx@sha256:98efe605f6123559fnf0428475fnfhw98408072b208hdwru49863gufg
    Deleted: sha256:er4564g23559fnf0428475d5664wru49863gufgert456dhh6eh
    zohwas-mbp:~ zohwakarim$
    

    You have to use the force (-f) flag to remove the image, for example when existing containers still reference it.

Next steps

Congratulations, you successfully learned about Docker, microservices, and how to break down your monolithic apps. For further reading, make sure to check out these resources.