Overview

Skill Level: Beginner

Kubernetes does not natively support FUSE mounts as PersistentVolumes. This basic tutorial shows how to deploy Kubernetes pods with a FUSE-based file system backed by an IBM Cloud Object Storage (COS) bucket.

Ingredients

1. Access to an IBM COS bucket.

2. A container registry, such as Docker Hub or IBM Cloud Container Registry.

3. The Kubernetes CLI (kubectl) installed on the client system and configured for your cluster.

 

Step-by-step

  1. Create a bucket on IBM COS

    • Log in to the IBM Cloud console and navigate to Object Storage. If you do not have an Object Storage instance yet, create one first (see the IBM Cloud Object Storage documentation).
    • Create a bucket with the desired name, resiliency, location, and storage class.

     


    • Create a new service credential or use one that already exists.

     


     

    • Note down the access key (access_key_id) and secret access key (secret_access_key) for this credential; these are the HMAC keys that s3fs will use. A quick way to verify them is sketched below.
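
    If you want to sanity-check the bucket and the HMAC keys before wiring them into Kubernetes, one option is the AWS CLI pointed at the COS S3 endpoint. This is a minimal sketch, not part of the original flow: it assumes the AWS CLI is installed locally, that the credential includes HMAC keys, and that the us-geo endpoint (the one used later in this tutorial) matches your bucket's region.

    # export the HMAC keys noted above
    export AWS_ACCESS_KEY_ID=<access_key_id>
    export AWS_SECRET_ACCESS_KEY=<secret_access_key>
    # list the bucket through the COS S3 API endpoint; a listing with no error means the keys work
    aws --endpoint-url https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://<bucket-name>
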
  2. Extend your Docker image to add s3fs-fuse

    In this example we extend the Nginx image to add s3fs to it.

    • Create a Docker image with s3fs and a helper script (launch.sh)

    FROM nginx:latest
    MAINTAINER aaaa@bbb.com

    # Build and install s3fs-fuse from source, then verify the binary
    RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
        apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget \
                           libfuse-dev libssl-dev libxml2-dev make pkg-config && \
        git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
        cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && \
        ldconfig && /usr/local/bin/s3fs --version

    # Remove the build tools again to keep the image small
    RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make && \
        apt-get -y autoremove --purge && apt-get clean && \
        rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

    # Helper script that writes the s3fs credentials file and mounts the bucket
    ADD launch.sh /

     

    • The launch.sh helper script creates the prerequisite credentials file and mount point, then starts s3fs in the container

    #!/bin/bash
    # Strip any stray newline from the access key passed in via the environment
    AK=${AWS_KEY//$'\n'/}

    # s3fs reads its credentials from /etc/passwd-s3fs in the form ACCESS_KEY:SECRET_KEY
    echo "${AK}:${AWS_SECRET_KEY}" > /etc/passwd-s3fs
    chmod 400 /etc/passwd-s3fs

    # Create the mount point and mount the bucket
    mkdir -p ${MNT_POINT}
    echo "Starting s3fs"
    /usr/local/bin/s3fs ${S3_BUCKET} ${MNT_POINT} -o use_path_request_style -o url=${S3_ENDPOINT}

     

    • Build your Docker image

    docker build -t <reponame>/<imagename>:latest -f ./nginx-withs3fs .

     

    • Push your image to your container registry.

    docker push <reponame>/<imagename>:latest
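
    Optionally, you can smoke-test the image locally before deploying it to Kubernetes. This is a rough sketch, not part of the original flow: it assumes Docker runs on your build machine and that the host exposes /dev/fuse; FUSE mounts inside a container need the fuse device and the SYS_ADMIN capability.

    # run the image with the privileges s3fs needs, mount the bucket, and list it
    docker run --rm -it --device /dev/fuse --cap-add SYS_ADMIN \
        -e S3_BUCKET=<bucket-name> -e MNT_POINT=/data \
        -e S3_ENDPOINT=https://s3-api.us-geo.objectstorage.softlayer.net \
        -e AWS_KEY=<access_key_id> -e AWS_SECRET_KEY=<secret_access_key> \
        <reponame>/<imagename>:latest /bin/bash -c "/launch.sh && ls /data"
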

  3. Load your access keys into a Kubernetes Secret

    • Base64 encode your access key and secret access key and create a secrets.yaml file

    apiVersion: v1
    kind: Secret
    metadata:
      name: s3fs-secret
      namespace: default
    type: Opaque
    data:
      # base64 encoded keys, e.g. echo -n "<access_key_id>" | base64
      aws-key: <Base64 encoded access key>
      aws-secret-key: <Base64 encoded secret access key>

     

    • Load your keys using the kubectl command

    kubectl create -f ./secrets.yaml
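
    As an aside, you can generate the base64 values with the shell, or skip secrets.yaml entirely and let kubectl build the same Secret from literals. A small sketch, assuming kubectl is already configured against your cluster:

    # produce the base64 values for secrets.yaml (-n avoids encoding a trailing newline)
    echo -n "<access_key_id>" | base64
    echo -n "<secret_access_key>" | base64

    # or create the equivalent Secret directly; kubectl does the base64 encoding for you
    kubectl create secret generic s3fs-secret \
        --from-literal=aws-key=<access_key_id> \
        --from-literal=aws-secret-key=<secret_access_key>
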

  4. Create your Pod Configuration file

    • Your Pod configuration file could look like this. Save your file in a known location.

     

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-s3fs
      labels:
        app: s3fs-nginx
    spec:
      containers:
      - name: nginxs3fs
        image: <reponame>/<imagename>:latest
        imagePullPolicy: Always
        # s3fs needs access to /dev/fuse, which requires a privileged container
        securityContext:
          privileged: true
        env:
        - name: S3_BUCKET
          value: ait-kubernetes-5
        - name: MNT_POINT
          value: /data
        - name: S3_ENDPOINT
          value: https://s3-api.us-geo.objectstorage.softlayer.net
        - name: AWS_KEY
          valueFrom:
            secretKeyRef:
              name: s3fs-secret
              key: aws-key
        - name: AWS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: s3fs-secret
              key: aws-secret-key
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        lifecycle:
          postStart:
            exec:
              command: ["/launch.sh"]
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse

     

     

    The env section passes the bucket name, mount point, COS endpoint, and credentials from the Secret into the container as environment variables for the launch.sh helper script. When the container starts, the postStart lifecycle hook executes launch.sh, which mounts the bucket with s3fs.
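
    Before creating the pod, it can be useful to confirm that the Secret actually contains what launch.sh expects. A couple of optional checks, assuming the Secret name s3fs-secret from step 3:

    # confirm the Secret exists and decode one of the keys to spot-check its value
    kubectl get secret s3fs-secret -o yaml
    kubectl get secret s3fs-secret -o jsonpath='{.data.aws-key}' | base64 --decode
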

  5. Create your Pods on IBM Cloud

    Execute the command to create the pod on the Kubernetes cluster on IBM Cloud.

    kubectl create -f <pod-configuration-file>.yaml
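
    You can watch the pod come up using the label from the manifest, and, if the mount does not appear, check the pod events and logs; a failed postStart hook (i.e. launch.sh) shows up in the events. A minimal sketch:

    # watch the pod reach Running (label taken from the manifest above)
    kubectl get pods -l app=s3fs-nginx -w

    # troubleshooting: postStart/launch.sh failures appear in the events section
    kubectl describe pod nginx-s3fs
    kubectl logs nginx-s3fs
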

  6. Test your integration with IBM COS

    • Log in to your pod

    kubectl exec -it <podname> -- /bin/bash

     

    • Navigate to /data and execute:

    touch tttt5

     

    • You can verify that the object was created on COS by logging in to the IBM COS console, or from the command line as sketched below.
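
    If you prefer the command line, the same AWS CLI check from step 1 should now show the new object. A sketch, assuming the HMAC keys are still exported in your shell:

    # the object created from inside the pod (tttt5) should appear in the listing
    aws --endpoint-url https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://<bucket-name>/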
