Overview

Skill Level: Beginner

Some basic understanding of Docker and Kubernetes

What is Helm? It's a package manager for Kubernetes. When deploying apps to Kubernetes, Helm makes it easy to version, package, and release deployments, which simplifies installing, upgrading, and rolling back apps.

Ingredients

To get started, you should have an elementary understanding of Kubernetes and have Minikube installed locally. Once your environment is set up, download the latest release of Helm. In this tutorial, I demonstrate some basic concepts around deploying Docker containers using Helm Charts, extending what we learned in the previous article, Orchestrate secure etcd deployments with Kubernetes and Operator.

Step-by-step

  1. Background

    This tutorial started as a way for me to learn about Kubernetes Helm Charts. For those not familiar with them, Helm uses a packaging format called charts; a chart is a collection of files that describes a related set of Kubernetes resources. The steps in this recipe are based upon various articles I have read and build upon my etcd Operator project, which demonstrated how to enable transport layer security between my application and etcd. We will leverage some of the standard Helm commands to create the initial project structure, and I reused several YAML files from my etcd Operator article. If you worked through those steps, you can reuse the files, too.

  2. Setup

    Helm has two parts: the helm command-line client that you run locally, and Tiller, the server-side component that runs inside your Kubernetes cluster. I found that the simplest way to get the Helm client installed is to go directly to the kubernetes/helm/releases page on GitHub.

    To find out which cluster Tiller would install to, you can run kubectl config current-context or kubectl cluster-info.

    $ kubectl config current-context

    Once you have Helm ready, you can initialize the local CLI and inject Tiller into your Kubernetes cluster in one step:

    $ helm init

    This will deploy Tiller to the Kubernetes cluster you saw with kubectl config current-context. To review, Tiller runs inside of your Kubernetes cluster and manages releases (installations) of your charts.
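
    To confirm that Tiller came up successfully, you can check for its pod in the kube-system namespace (a quick sanity check; the generated pod name suffix will differ in your cluster):

    $ kubectl get pods --namespace kube-system | grep tiller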

  3. Creating a Chart Template

    Helm provides an option to create a chart template. Templates are often used in DevOps pipelines to inject metadata about your deployments and make that metadata queryable with the Kubernetes CLI after deployment. We will use this to get started, since it provides the necessary directory structure and template files that we will modify to build our chart. From the command line, type:

    $ helm create etcd_chart

    This command will generate a set of stubbed out subdirectories and template files under etcd_chart that look like the following:

    $ ls -R etcd_chart
    Chart.yaml charts templates values.yaml

    etcd_chart/charts:

    etcd_chart/templates:
    NOTES.txt _helpers.tpl deployment.yaml ingress.yaml service.yaml

    Now that we have the structure in place, we are going to delete the stubbed-out template files and incrementally build up our own chart for etcd.

    $ rm -rf etcd_chart/templates/*.*
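
    The chart's top-level Chart.yaml stays in place; for reference, it holds the chart metadata that helm create stubbed out, roughly like the following (the name, description, and version shown here are illustrative):

    Chart.yaml

    apiVersion: v1
    description: A Helm chart for deploying an etcd cluster
    name: etcd_chart
    version: 0.1.0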

     

  4. Installing a Chart

    In this recipe, the goal is to build a Helm Chart based upon a proven cluster definition file, such as the etcd operator included in the prior article. Create the following YAML file in the templates directory of your etcd_chart project.

    example-etcd-cluster.yaml

    apiVersion: "etcd.coreos.com/v1beta1"
    kind: "Cluster"
    metadata:
      name: "example"
    spec:
      size: 5
      version: "3.1.4"
      TLS:
        static:
          member:
            peerSecret: etcd-server-peer-tls
            clientSecret: etcd-server-client-tls
            operatorSecret: operator-etcd-client-tls

    For a simpler use case, you are welcome to use the example-etcd-cluster without TLS that was created in the prior article (sketched below). If you decide to use the TLS version shown above, please verify that the Kubernetes Secrets for the digital certificates have already been created beforehand.
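
    The non-TLS variant is simply the same manifest with the TLS block removed:

    apiVersion: "etcd.coreos.com/v1beta1"
    kind: "Cluster"
    metadata:
      name: "example"
    spec:
      size: 5
      version: "3.1.4"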

    Now that we have the YAML file created, we are ready to deploy the Helm Chart using the install command. This will deploy the etcd cluster to the default Kubernetes cluster that we identified earlier with kubectl config current-context.

    $ helm install ./etcd_chart/

    NAME:   peddling-emu
    LAST DEPLOYED: Wed May 24 09:12:25 2017
    NAMESPACE: default
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1beta1/Cluster
    NAME     KIND
    example  Cluster.v1beta1.etcd.coreos.com
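
    You can review the releases that Tiller is managing at any time with helm list, or check a specific release with helm status and the generated release name (output omitted here):

    $ helm list
    $ helm status peddling-emu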

     

    Helm supports retrieving release details via the get command, using the release name that Helm generates at install time. In the installation above, the name peddling-emu was randomly generated and can be used to retrieve the release that was just deployed.

    $ helm get peddling-emu
    REVISION: 1
    RELEASED: Wed May 24 09:12:25 2017
    CHART: mychart-0.1.0
    USER-SUPPLIED VALUES:
    {}

    COMPUTED VALUES:
    image:
      pullPolicy: IfNotPresent
      repository: nginx
      tag: stable
    ingress:
      annotations: null
      enabled: false
      hosts:
      - chart-example.local
      tls: null
    replicaCount: 1
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    service:
      externalPort: 80
      internalPort: 80
      name: nginx
      type: ClusterIP

    HOOKS:
    MANIFEST:

    ---
    # Source: mychart/templates/example-etcd-cluster.yaml
    apiVersion: "etcd.coreos.com/v1beta1"
    kind: "Cluster"
    metadata:
      name: "example"
    spec:
      size: 5
      version: "3.1.4"
      TLS:
        static:
          member:
            peerSecret: etcd-server-peer-tls
            clientSecret: etcd-server-client-tls
            operatorSecret: operator-etcd-client-tls

    In the output above, you will notice some details that were not part of the original YAML file. These come from the boilerplate values.yaml that helm create generated, which contains a set of pre-configured values that we do not require. In the next step of the recipe, we will create our own values.yaml and remove these settings, since they are outside the scope of this article.

  5. Embracing Helm template language

    Helm provides a rather expansive template guide that gives a great overview of the power of templates. In this recipe, we will demonstrate leveraging the built-in objects as well as some custom values that are generated as part of our delivery pipeline.

    To demonstrate the power of templates, I modified my DevOps pipeline to generate a values.yaml file that contains build information, and I created a new file called configmap.yaml that retrieves these values during deployment. The values file provides static values that can populate properties, such as labels, from the context of the Helm chart. These values are injected into Helm at deployment time and are queryable by Helm after the chart is deployed.

    etcd/values.yaml

    commitRef: "06e798c"
    buildTimestamp: "20170524_23-59-59"
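
    In a delivery pipeline these values would typically be written into values.yaml by the build job, but they can also be supplied or overridden at install time with --set (a sketch; the values shown are placeholders and the chart directory is assumed to be ./etcd_chart):

    $ helm install ./etcd_chart --set commitRef=06e798c,buildTimestamp=20170524_23-59-59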

     

    etcd/templates/configmap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mychart-configmap
      labels:
        name: "etcd-operator-chart"
        releaseName: "{{ .Release.Name }}"
        releaseService: "{{ .Release.Service }}"
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        commitRef: "{{ .Values.commitRef }}"
        buildTimestamp: "{{ .Values.buildTimestamp }}"
    data:
      myvalue: "Hello World"

     

    In Kubernetes, a ConfigMap is a simple way to store configuration data, and other resources, such as pods, can access that data. When we install the Helm Chart this time with the new configmap.yaml template, we can start to record detailed properties about our DevOps pipeline. Tracking information about the build allows us to correlate deployed artifacts with recent commits in our source code repo. In this example, we include the build timestamp and commit reference from the repo as custom properties, and we use the built-in Release object to obtain metadata about the Helm chart installation.
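
    Before installing, you can render the templates locally to verify that the Release, Chart, and Values substitutions come out as expected; nothing is actually deployed when these flags are used (the chart directory is assumed to be ./etcd_chart):

    $ helm install --dry-run --debug ./etcd_chart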

    To obtain these values at runtime, users can query Helm to get the manifest by using the name that was generated during install time. In this example, the name of the deployment was "contrasting-boxer".

    $ helm get manifest contrasting-boxer

    ---
    # Source: etcd/templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mychart-configmap
      labels:
        name: "etcd-operator-chart"
        releaseName: "contrasting-boxer"
        releaseService: "Tiller"
        chart: "etcd-0.2.0"
        commitRef: "06e798c"
    data:
      myvalue: "Hello World"
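
    Because the ConfigMap is a regular Kubernetes object, non-Helm users can read the same metadata directly with the Kubernetes CLI (the name mychart-configmap comes from the template above):

    $ kubectl get configmap mychart-configmap -o yaml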

    Now that we have the templating flow implemented, we can extend it to Kubernetes labels for our deployments as well. In the next section, we will show how to apply the same variables and labels to our pod deployments.

     

     

  6. Attaching metadata to pod deployments

    One of the pain points for large-scale deployments is tracing deployment artifacts back to source code repositories. In this section, we demonstrate how to attach metadata to our deployment YAML file that includes details about our Helm install and source code repo. In our deployment.yaml, we leverage the same context variables that were used in our configmap.yaml, simply replicating the same properties to keep the example focused. This allows non-Helm users to access the same build properties using the Kubernetes CLI, and it also lets me cross-reference the deployment across both the Helm and Kubernetes APIs to verify that my deployment worked.

     

    deployment.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: etcd-operator
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: "etcd-operator-chart"
            releaseName: "{{ .Release.Name }}"
            releaseService: "{{ .Release.Service }}"
            chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
            commitRef: "{{ .Values.commitRef }}"
            buildTimestamp: "{{ .Values.buildTimestamp }}"
        spec:
          containers:
          - name: etcd-operator
            image: quay.io/coreos/etcd-operator:v0.2.6
            env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name

     

    When we install this Helm chart with the deployment change, we can now see in the pod description the same values that we set previously in our Helm chart.
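
    If the chart is already installed, the change can also be rolled out to the existing release with helm upgrade rather than a fresh install (a sketch, assuming the release name contrasting-boxer and the chart directory from earlier):

    $ helm upgrade contrasting-boxer ./etcd_chart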

     

    $ kubectl describe pod etcd-operator-3005415279-6x5pq
    Name:           etcd-operator-3005415279-6x5pq
    Namespace:      default
    Node:           minikube/192.168.99.100
    Start Time:     Wed, 24 May 2017 11:04:23 -0400
    Labels:         buildTimestamp=20170524_23-59-59
                    chart=etcd-0.2.0
                    commitRef=06e798c
                    name=etcd-operator-chart
                    pod-template-hash=3005415279
                    releaseName=contrasting-boxer
                    releaseService=Tiller
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"etcd-operator-3005415279","uid":"4550aef7-4092-11e7-9759-0800277...
    Status:         Running
    IP:             172.17.0.5
    Controllers:    ReplicaSet/etcd-operator-3005415279
    Containers:
      etcd-operator:
        Container ID:   docker://a985c9c6d9274556d9d11d501e960b633ff588dcd74653240e7c6938a4292e7a
        Image:          quay.io/coreos/etcd-operator:v0.2.6
        Image ID:       docker://sha256:4c687be57708708748b7f91f056c472fc9309ab19fc057b8321beb651e290299
        Port:
        State:          Running
          Started:      Wed, 24 May 2017 11:04:23 -0400
        Ready:          True
        Restart Count:  0
        Environment:
          MY_POD_NAMESPACE:  default (v1:metadata.namespace)
          MY_POD_NAME:       etcd-operator-3005415279-6x5pq (v1:metadata.name)
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-mw15j (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      default-token-mw15j:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-mw15j
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     <none>
    Events:
      FirstSeen  LastSeen  Count  From               SubObjectPath                   Type    Reason     Message
      ---------  --------  -----  ----               -------------                   ----    ------     -------
      40m        40m       1      default-scheduler                                  Normal  Scheduled  Successfully assigned etcd-operator-3005415279-6x5pq to minikube
      40m        40m       1      kubelet, minikube  spec.containers{etcd-operator}  Normal  Pulled     Container image "quay.io/coreos/etcd-operator:v0.2.6" already present on machine
      40m        40m       1      kubelet, minikube  spec.containers{etcd-operator}  Normal  Created    Created container with id a985c9c6d9274556d9d11d501e960b633ff588dcd74653240e7c6938a4292e7a
      40m        40m       1      kubelet, minikube  spec.containers{etcd-operator}  Normal  Started    Started container with id a985c9c6d9274556d9d11d501e960b633ff588dcd74653240e7c6938a4292e7a

    When we include these labels with the pod deployments, we link the running deployment of this pod to the source code repo that was used to generate the deployed artifact. In addition, we can leverage the power of Kubernetes around placement and affinity and target specific pods for our deployments based upon this metadata.
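
    For example, any pods deployed from a given build can be selected directly by these labels with the Kubernetes CLI (the label values here are the ones from our example pipeline):

    $ kubectl get pods -l commitRef=06e798c,buildTimestamp=20170524_23-59-59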

  7. Conclusion

    Kubernetes and Helm Charts provide a solid foundation for orchestrating container deployments at scale. In this article, we touched upon some of the most common use cases for Kubernetes and Helm Charts based upon a general etcd Operator deployment. By using Helm's powerful templating language on top of Kubernetes, developers can simplify the management of their deployments by embracing Helm's built-in objects and custom values to increase the transparency of their DevOps pipeline. As DevOps squads move more aggressively towards container-based deployments, the capabilities around these technologies will accelerate deployments and provide highly available, scalable frameworks for developers to leverage.
