by Marek Sadowski Published January 18, 2019
So you’re interested in securing a container-based system? Then this blog post is for you. Read about some of my techniques to improve the architecture, design, and practices of your containerized application.
First, it’s important to understand the difference between using a container and a Virtual Machine (VM). When you use a VM, you must run an entire Operating System (OS) to get your application operational. A VM therefore typically exposes a larger attack surface, such as open ports or running services like sshd. Before you expose a VM-based application to users (on either the Internet or an intranet), you need to take care of the OS vulnerabilities, close all unnecessarily opened ports, and remove or disable standard services you don’t need (like sshd) to shrink the attack surface. Going forward, you also need to keep the OS updated with the latest security and functionality fixes.
When using containers, you can slim the OS down to only the files, libraries, daemons, and parts of the system that your application actually uses. This limits your system’s exposure compared to the VMs that support your application. In some cases, you can even start from the scratch image (an empty, non-OS image) and add only the libraries that are necessary for your executable.
Let’s use Docker as an example of a container for the purposes of this blog. Most Dockerfiles start from a parent image. If you need to completely control the contents of your image, you might start from scratch by creating a base image.
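As a sketch of that idea, a multi-stage Dockerfile can compile a statically linked binary and copy only that binary into an empty scratch image (the Go build step and the binary name `app` are assumptions for illustration, not from the demo):

```dockerfile
# Build stage: compile a statically linked binary (Go used here as an example)
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: start from the empty scratch image and copy in only the binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image contains no shell, no package manager, and no OS utilities, so there is almost nothing for an attacker to leverage inside the container.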
One more good practice that I suggest: don’t run your services as root and avoid using privileged mode, just in case there’s a security hole. You might want to put limits on the resources that can be used by the container to prevent noisy neighbor effects. Reference Docker best practices on how to do so.
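A minimal sketch of both practices, assuming an Alpine-based image and arbitrary example values for the user name and resource limits:

```dockerfile
FROM alpine:3.9
# Create a dedicated unprivileged user and group for the service
RUN addgroup -S app && adduser -S -G app app
# Drop root: everything after this line runs as the unprivileged user
USER app
CMD ["./service"]
```

At run time you can cap resources with flags such as `docker run --memory=256m --cpus=0.5 myimage`, so a misbehaving container cannot starve its neighbors.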
As you probably realize, storing sensitive data in environment variables is not a good idea for production systems: environment variables are easily discoverable via system process examination. Therefore, I’d suggest that you add a secret to the cluster (a file containing sensitive information such as API keys, tokens, usernames, and passwords) rather than keeping that data in environment variables. Here’s an example of the secret file I’m referring to, from the jpetstore demo:
```shell
kubectl create secret generic mms-secret --from-file=mms-secrets=./mms-secrets.json
```
The mms-secrets.json file holds the API keys, tokens, and user numbers:
```json
{
  "jpetstoreurl": "http://jpetstore.<Ingress Subdomain>",
  "note": "It may take up to 5 minutes for this key to become active",
  ...
}
```
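Once the secret exists in the cluster, the application can consume it as a mounted file instead of an environment variable. A sketch of the relevant deployment fragment (the deployment name, labels, and mount path are assumptions for illustration, not taken from the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jpetstoreweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jpetstoreweb
  template:
    metadata:
      labels:
        app: jpetstoreweb
    spec:
      containers:
        - name: web
          image: jpetstoreweb
          volumeMounts:
            - name: mms-secrets
              mountPath: /etc/secrets   # app reads mms-secrets.json from here
              readOnly: true
      volumes:
        - name: mms-secrets
          secret:
            secretName: mms-secret      # the secret created with kubectl above
```

Because the secret arrives as a read-only file, it never shows up in `docker inspect` output or in the process environment.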
Storing secrets in your cluster goes hand in hand with implementing a repeatable CI/CD pipeline: fixes and security patches get applied quickly and consistently as part of the pipeline, not as one-off hacks, and sensitive information stays in secrets rather than in the code or the image. The above-mentioned jPetStore Modernization Demo implements such a toolchain.
You can refer to this article for more information on the CI/CD-based security implementation for containers.
When you build your own image repository, you should start by performing overall vulnerability scans of your container images. Enterprise-grade registries, like the one in IBM Cloud®, have automated mechanisms for this built in.
Find more information on securing Docker containers with Vulnerability Advisor. The vulnerability scanner gives developers instant feedback on security policy violations and package vulnerabilities. The IBM Cloud registry’s image scan feature lists all of the vulnerabilities that were detected, along with suggested resolutions; the full security evaluation for the MySQL db image above is an example.
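If you use the IBM Cloud CLI with the Container Registry plug-in, you can also pull a scan report from the command line. A sketch (the registry hostname, namespace, and image name here are placeholders):

```shell
# List images in your namespace along with their Vulnerability Advisor status
ibmcloud cr images

# Show the detailed vulnerability assessment for one image
ibmcloud cr vulnerability-assessment us.icr.io/mynamespace/jpetstoreweb:latest
```

Wiring these commands into a pipeline stage lets the build fail early when a scan reports unresolved issues.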
Another thing you can do when dealing with security issues is to sign your images for your containerized applications with IBM Cloud Container Registry.
Then you can enforce the policy to use only signed images from the enterprise registry.
Here’s an example fragment of a policy that enforces signed images (at the time of this blog post, the functionality is in beta):
```yaml
# This enforces that all images deployed to this cluster pass trust and VA
# To override, set an ImagePolicy for a specific Kubernetes namespace or modify this policy
- name: "*"
```
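For context, a fragment like the one above sits inside a ClusterImagePolicy resource from IBM’s Container Image Security Enforcement admission controller. A fuller sketch of such a policy (the field names follow the beta release and may change):

```yaml
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: default
spec:
  repositories:
    # Applies to images from any repository deployed into this cluster
    - name: "*"
      policy:
        trust:
          enabled: true   # only signed (trusted) images may be deployed
        va:
          enabled: true   # images must pass Vulnerability Advisor checks
```

With this in place, the admission controller rejects any deployment whose image is unsigned or fails its vulnerability assessment.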
I also want to mention a useful tool, the IBM Image Scanner Tool, which allows you to scan container images even if they’re not hosted in the IBM enterprise-grade registry. Since the tool exposes a REST API, it might be a good idea to add it to your CI/CD pipeline as well.
Yet another aspect of container security is runtime: monitoring and securing containers that are already in production. For a system of any size, you should make sure all of the connections between the various microservices are actually needed, both to reduce complexity and to reduce the risk of rogue services causing havoc. One of the known systems performing exactly this task is the monitoring system designed by NeuVector. You can see its complex view of the system below:
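Kubernetes itself can also restrict which services are allowed to talk to each other via NetworkPolicy resources (enforced by a network plug-in such as Calico). A sketch, with hypothetical app labels and assuming the database listens on the standard MySQL port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-only-from-web
spec:
  # Select the database pods this policy protects
  podSelector:
    matchLabels:
      app: jpetstoredb
  policyTypes:
    - Ingress
  ingress:
    # Allow inbound traffic only from the web tier, only on the MySQL port
    - from:
        - podSelector:
            matchLabels:
              app: jpetstoreweb
      ports:
        - protocol: TCP
          port: 3306
```

Any other pod that tries to connect to the database is simply dropped at the network layer, so a compromised microservice cannot roam freely through the cluster.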
“I’d recommend that you take a serious look at what’s running inside your container network.” Jon Deeming, VP at Experian
The subject of container security is a vast field, and each section here is only the tip of the iceberg. I will try to dig into the concepts I barely scratched here, plus new ones, in follow-up posts. If you want to hear more, please clap for it! Or, you know, tweet me @blumareks.