Overview

Skill Level: Intermediate


This recipe illustrates the Dockerfile configurations needed for containers to operate within an IBM Cloud Kubernetes Service POD.
Other topics by the author in the "Automation Series" can be found at: https://developer.ibm.com/recipes/author/patemic/

Ingredients

Linux

Docker

Kubernetes

IBM Cloud

Step-by-step

  1. Application Architecture

    An automation portal can be decomposed into components: Model (data handling), View (HTML/CSS) and Controller (AngularJS). A portal app architecture based on the Model/View/Controller framework is illustrated in the recipe: https://developer.ibm.com/recipes/tutorials/the-model-view-controller-mvc-as-applied-to-automation/
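
    As a purely illustrative sketch (the pod name portal-pod and the image tags model-view:1.0, controller:1.0 and nosql:1.0 are placeholders, not artifacts of this recipe), the three components could be declared as containers of a single POD roughly as follows:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: portal-pod
      labels:
        app: portal
    spec:
      containers:
      - name: model-view      # Python/Flask behind Apache (step 4)
        image: model-view:1.0
      - name: controller      # AngularJS front end (step 6)
        image: controller:1.0
      - name: nosql           # MongoDB back end (step 8)
        image: nosql:1.0
    EOF

    Because the containers of a POD share one network namespace, the two web-serving containers must listen on different ports.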

     

  2. Application Access and Availability

    The application should be made publicly accessible. This can be accomplished via Dockerfile and kubectl configurations within an IBM Kubernetes POD, as described in the recipe:

    https://developer.ibm.com/recipes/tutorials/custom-app-as-multiple-containers-in-k8s-pod/
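
    One possible way (among several) to provide that public access is a NodePort Service created with kubectl; the pod and Service names below are placeholders rather than names defined in this recipe:

    # Expose port 80 of the POD through a NodePort Service
    kubectl expose pod portal-pod --type=NodePort --port=80 --name=portal-svc

    # Confirm the Service and note the node port Kubernetes assigned
    kubectl get svc portal-svc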

     

  3. Kubernetes and Dockerfiles

    Dockerfile accommodations supporting Kubernetes Services are organized into the following subsections:

    1. Combined “Model” and “View” Dockerfile and execution commands “wrapper”
    2. Controller Dockerfile and execution commands “wrapper”
    3. NoSQL Dockerfile and execution “wrapper”

     

    Much of the task of enabling Docker containers to run in a Kubernetes POD is described in the link in step 2 above. The remaining compatibility steps center on associating the Kubernetes Service host name and identity with your web app and keeping the containers from prematurely closing once your app is launched in a POD.
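
    As a hedged illustration of where those Service values can come from, the worker node's public address and the node port assigned to the Service can be read with kubectl (the Service name portal-svc is a placeholder):

    # Public IP of the worker node(s) hosting the POD (EXTERNAL-IP column)
    kubectl get nodes -o wide

    # Node port assigned to the Service exposing the POD
    kubectl get svc portal-svc -o jsonpath='{.spec.ports[0].nodePort}'

    Values of this kind are what the wrapper.sh scripts below export as Svc_Host and Svc_IP.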

  4. Model and View: Dockerfile

    Common “Model” Dockerfile configuration commands follow:

    ############################################################
    # Dockerfile to build python/Flask container
    # Based on Ubuntu
    ############################################################

    # Set the base image to Ubuntu
    FROM ubuntu:latest

    # File Author / Maintainer
    MAINTAINER Excellent IBM Programmer

    # Update the repository sources list
    RUN apt-get update

    ################## BEGIN INSTALLATION ######################
    #
    #install curl for debug purposes
    RUN apt-get install curl -y
    ################## Python ####################
    #install python 2.7
    RUN apt-get install python -y
    RUN apt-get update -y
    ################## apache server ##############
    #install apache server: yes, the Python app becomes a "native" web server app thanks to wsgi (see below)
    #apache also serves as the reverse proxy engine for our web app
    RUN apt-get install apache2 -y
    RUN apt-get update -y
    #install wsgi: web app capabilities for Python come from mod_wsgi
    RUN apt-get install libapache2-mod-wsgi python-dev -y
    RUN apt-get update -y
    #enable wsgi
    RUN a2enmod wsgi
    #
    #########create Flask directory structure########
    #
    RUN mkdir /var/www/YourWebAppBase
    RUN mkdir /var/www/YourWebAppBase/static
    RUN mkdir /var/www/YourWebAppBase/templates
    #
    ##############copy Flask App to Container ###########
    COPY MyPython.py /var/www/YourWebAppBase/MyPython.py
    COPY HTML.html /var/www/YourWebAppBase/templates/HTML.html
    RUN chmod +x /var/www/YourWebAppBase/MyPython.py
    ############ Virtual env #####################
    #
    RUN apt-get install python-pip -y
    RUN apt-get update -y
    ############# Flask ##########################
    RUN pip install Flask
    #
    #install additional Flask, Bower, Mongo components:  is MongoDB your backend??
    RUN pip install flask-bower
    RUN pip install pymongo    
    # Update the repository sources list once more
    RUN apt-get update -y
    #############enable virtual host #################
    # RUN a2ensite in wrapper.sh, not here
    # copy the wsgi file into the container ; enable python app as web app
    COPY FlaskApp.wsgi /var/www/YourWebAppBase/FlaskApp.wsgi
    # expose port 80, since port 80 (or your spec'd port) is the server port identified in the Apache virtual host conf
    EXPOSE 80

    #the VH sh file will create an Apache virtual host based on the Service hostname used by the Python app

    COPY apache-VH-conf.sh apache-VH-conf.sh

    # copy cmd wrapper script including sh and executables necessary for Python/Flask to run as a web app in Kubernetes
    COPY wrapper.sh wrapper.sh
    RUN chmod +x apache-VH-conf.sh
    RUN chmod +x wrapper.sh
    RUN apt-get upgrade -y
    # Set default container command
    CMD ./wrapper.sh
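
    An assumed build-and-smoke-test sequence for this image before it is referenced from a POD definition (the tag model-view:1.0 and host port 8080 are arbitrary choices):

    # Build the Model/View image from the directory containing the Dockerfile
    docker build -t model-view:1.0 .

    # Optional local check: run the container, then confirm Apache answers on port 80
    docker run -d --name model-view-test -p 8080:80 model-view:1.0
    curl -I http://localhost:8080/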

     

  5. Model and View: Supporting and Command Files

    Necessary files to support the Apache VH, Flask, and app execution include:

    1) apache VH file: copied into the container first, then executed by wrapper.sh *after* the Service Hostname has been created and loaded into the environment

    apache-VH-conf.sh

    #!/bin/bash
    # Emit an Apache virtual host for the Flask app; $Svc_Host is exported by wrapper.sh.
    # ${APACHE_LOG_DIR} is escaped so that Apache, not this script, expands it at run time.
    echo "<VirtualHost *:80>"
    echo " ServerName $Svc_Host"
    echo " ServerAdmin root@$Svc_Host"
    echo " WSGIScriptAlias / /var/www/YourWebAppBase/FlaskApp.wsgi"
    echo " <Directory /var/www/YourWebAppBase/>"
    echo "   Order allow,deny"
    echo "   Allow from all"
    echo " </Directory>"
    echo " Alias /static /var/www/YourWebAppBase/static"
    echo " <Directory /var/www/YourWebAppBase/static/>"
    echo "   Order allow,deny"
    echo "   Allow from all"
    echo " </Directory>"
    echo " ErrorLog \${APACHE_LOG_DIR}/error.log"
    echo " LogLevel warn"
    echo " CustomLog \${APACHE_LOG_DIR}/access.log combined"
    echo "</VirtualHost>"

     

    2) FlaskApp.wsgi: establishes the Python app as a WSGI web app

    #!/usr/bin/python
    import sys
    import logging
    logging.basicConfig(stream=sys.stderr)
    sys.path.append('/var/www/YourWebAppBase/')
    from MyPython import app as application

     

    3) wrapper.sh includes: exporting the IBM Service Hostname and IP into the environment, replacing HOST-REF with the actual Service Hostname in the HTML file, creating a VH file with the actual Service Hostname, creating an Apache site for the VH, executing the Python web app, and keeping the container open. There are other methods for referencing the Service hostname in the HTML and preventing the container from prematurely closing. Think about what works for your web app.

    #!/bin/bash
    echo "export Svc_Host=IBMHostName.com" >> ~/.bash_login
    echo "export Svc_IP=<IBM-Host-IP>" >> ~/.bash_login
    source ~/.bash_login
    sed -i "s/HOST-REF/$Svc_Host/g" /var/www/YourWebAppBase/templates/HTML.html
    ./apache-VH-conf.sh > /etc/apache2/sites-available/MyContainerCl.conf
    a2ensite MyContainerCl
    service apache2 restart
    python /var/www/YourWebAppBase/MyPython.py &
    exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
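
    Once the POD is running, a quick check that the container stayed open and that Apache is answering (the pod and container names are illustrative):

    # The container should remain Running thanks to the trap/sleep line above
    kubectl get pods

    # curl was installed in the Dockerfile for exactly this kind of debugging
    kubectl exec portal-pod -c model-view -- curl -s -I http://localhost/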

     

  6. Controller: Dockerfile

    Most of the advisable Dockerfile commands for the controller are covered in the recipe: https://developer.ibm.com/recipes/tutorials/bower-package-management-in-container-front-end-orchestration/

    The execution of a wrapper.sh is also necessary to run the container properly within a Kubernetes POD.

  7. Controller: Command File

    The Controller Dockerfile's default command runs a wrapper.sh file.

    wrapper.sh: the actual IBM Service Hostname and IP address are exported into the environment. The Service Hostname is added to the hosts file. The actual hostname replaces HOST-REF in URL references in the AngularJS code. The container is prevented from prematurely closing. There are other methods for referencing the Service hostname in the AngularJS code and preventing the container from prematurely closing. Think about what works for your web app.

    #!/bin/bash
    echo "export Svc_Host=e.2a.3da9.ip4.static.sl-reverse.com" >> ~/.bash_login
    echo "export Svc_IP=169.61.42.14" >> ~/.bash_login
    source ~/.bash_login
    # /etc/hosts expects the "IP hostname" ordering
    echo "$Svc_IP $Svc_Host" >> /etc/hosts
    sed -i "s/HOST-REF/$Svc_Host/g" /var/www/html/YourAngular.js
    service apache2 restart
    exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
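
    A quick way to confirm this wrapper ran as intended inside the POD (the pod and container names are illustrative):

    # The Service IP/hostname pair should now appear in /etc/hosts
    kubectl exec portal-pod -c controller -- cat /etc/hosts

    # HOST-REF should no longer appear in the AngularJS source
    kubectl exec portal-pod -c controller -- grep HOST-REF /var/www/html/YourAngular.js

    An empty grep result (exit code 1) means the substitution succeeded.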

     

  8. NoSQL Dockerfile

    ############################################################
    # Dockerfile to build Mongodb container images
    # Based on Ubuntu
    ############################################################

    # Set the base image to Ubuntu
    FROM ubuntu:latest

    # File Author / Maintainer
    MAINTAINER Excellent IBM Programmer

    # Update the repository sources list
    RUN apt-get update

    ################## BEGIN INSTALLATION ######################
    #
    #install curl for debug purposes
    RUN apt-get install curl -y
    #########create directory structure########
    #
    RUN mkdir /MyAppBase
    #
    ############ install MongoDB prerequisites #########################
    # import the key for the official MongoDB repository
    RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
    # add the MongoDB repository details
    RUN echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
    RUN apt-get update -y
    ########### Install Mongo db #####################
    RUN apt-get install -y mongodb-org
    # Update the repository sources list once more
    RUN apt-get update -y
    #
    # expose mongodb default port
    ##############
    EXPOSE 27017
    #############
    # copy the mongod config file and the cmd wrapper script into the container
    COPY mongod.conf mongod.conf
    COPY wrapper.sh wrapper.sh
    RUN chmod +x wrapper.sh
    RUN apt-get upgrade -y
    CMD ./wrapper.sh
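
    An assumed build-and-smoke-test sequence for the MongoDB image (the tag nosql:1.0 and container name nosql-test are arbitrary):

    # Build the MongoDB image
    docker build -t nosql:1.0 .

    # Run it locally and ping mongod through the mongo shell installed by mongodb-org
    docker run -d --name nosql-test -p 27017:27017 nosql:1.0
    docker exec nosql-test mongo --eval 'db.runCommand({ ping: 1 })'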

     

  9. NoSQL: Support and Command Files

    mongod.conf: the essential DB path and port designations are included here. More security should be added in a real production environment.

    # for documentation of all options, see:
    # http://docs.mongodb.org/manual/reference/configuration-options/

    # Where and how to store data.
    storage:
      dbPath: /var/lib/mongodb
      journal:
        enabled: true
    #  engine:
    #  mmapv1:
    #  wiredTiger:

    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /var/log/mongodb/mongod.log

    # network interfaces
    net:
      port: 27017
    #  bindIp: 127.0.0.1

    #processManagement:

    #security:
    #  authorization: 'enabled'

    #operationProfiling:

    #replication:

    #sharding:

    ## Enterprise-Only Options:

    #auditLog:

    #snmp:

     

    wrapper.sh: this container will remain open as Mongo waits for queries

    #!/bin/bash
    cp mongod.conf /etc/mongod.conf
    useradd -ms /bin/bash dbuser
    echo 'dbuser:mypass' | chpasswd
    # create the data and log directories referenced in mongod.conf
    mkdir -p /var/lib/mongodb /var/log/mongodb
    # run mongod in the foreground with the copied config; this keeps the container open
    mongod -f /etc/mongod.conf
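
    Once deployed, the same ping can be issued inside the POD. Because the containers of a POD share one network namespace, the Flask container reaches MongoDB at localhost:27017, e.g. via pymongo.MongoClient('localhost', 27017). The pod and container names below are illustrative:

    # Verify mongod is up inside the POD
    kubectl exec portal-pod -c nosql -- mongo --eval 'db.runCommand({ ping: 1 })'

    # The Flask container can reach the MongoDB port directly over localhost
    kubectl exec portal-pod -c model-view -- bash -c 'exec 3<>/dev/tcp/localhost/27017 && echo mongod reachable'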

  10. Conclusion

    As stated above, an operational Kubernetes Service involves the conceptual decomposition of a service idea into discrete components (containers), associating each container's Docker-related files with a Kubernetes Service identity, connecting those services as necessary via intra-POD networking, and making the app publicly available. In this fashion, a containerized app may benefit from the HA embedded in Kubernetes-deployed services.
