Overview

Skill Level: Intermediate


Portal apps can be implemented as separate Model/View, Controller and NoSQL backend containers communicating within a POD.
Other topics by the author in the "Automation Series" can be found at: https://developer.ibm.com/recipes/author/patemic/

Ingredients

Linux

IBM Cloud

Kubernetes

Kubectl skills

Docker

 

Step-by-step

  1. Docker Containers in K8s

    A portal app may be functionally decomposed into a data-handling module called the “Model”, the user’s portal experience, or “View”, and the linkage between the user’s portal activity and the Model, called the “Controller”. A backend database is also normally used. These modules may be grouped or segmented into discrete Docker containers that together operate as the user’s app. Kubernetes consumes Docker resources (images built from Dockerfiles) and provides scalability, so that Docker containers may be clustered to support high availability. The target model for deploying the application with the IBM Cloud Kubernetes Service/Deployment method follows. Within each POD, the modules are deployed as independent, unique containers: a combined Model (Python/Flask) and View container, a Controller (js) container, and a NoSQL (backend data) container.

    [Figure: target POD layout – Model/View, Controller and NoSQL containers deployed within a single POD]

  2. Intra-POD Networking Architecture Fundamentals

    Kubernetes networking operates in two modes: intra-POD and externally exposed. Intra-POD networking is necessary in instances where containers must communicate or exchange user data. The exchange of intra-POD data can be accomplished in multiple ways; the method described in this recipe step is entirely container-to-container, socket-to-socket networking. In this example, a POD’s containers are constructed as follows:

    • Model and View function: Flask, Python, Apache2 and the initial View HTML container
    • Controller function: AngularJS and Apache2 container
    • Backend NoSQL function: MongoDB container

    Intra-POD communication is accomplished via direct I/O between containers, each using the Node’s localhost address together with a unique, exposed container port.

     

    Intra-POD Socket Concept

    Intra-POD containers share the same IPv4 address: the localhost address of the compute Node that the POD resides on. Each POD container owns a unique port through which it can communicate with other intra-POD containers. Each container’s port should be exposed via the container’s Dockerfile and further associated with the container in the Kubectl YML configuration file.
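    As a quick way to validate the socket concept, the following minimal Python sketch could be run inside one of the POD’s containers to confirm that a peer container’s exposed port is reachable over the shared localhost address. The port value 27017 (Mongo’s native port, used later in this recipe) is only an example.

        # Minimal intra-POD connectivity check (illustrative sketch).
        # Run inside one container to confirm that a peer container's
        # exposed port is reachable over the shared localhost address.
        import socket

        PEER_PORT = 27017  # example: the MongoDB container's exposed port

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(3)
            result = s.connect_ex(("127.0.0.1", PEER_PORT))

        if result == 0:
            print("peer container reachable on localhost:%d" % PEER_PORT)
        else:
            print("no listener on localhost:%d (errno %d)" % (PEER_PORT, result))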

    Model to NoSQL Data Exchange

    The function of the Model component in the Model/View/Controller architecture is user I/O handling. In this capacity, I/O passed between the Model container (Python/Flask/Apache2) and the NoSQL (MongoDB) container travels via intra-POD networking.
    To facilitate intra-POD communication between the Model container and the NoSQL container, the Model container exposes port 80 and the NoSQL container exposes native Mongo port 27017. Both containers share the compute Node’s IPv4 address.

    Intra-POD networking between the Model and NoSQL containers is illustrated in the following graphic.

     

      [Figure: intra-POD networking between the Model and NoSQL containers]
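    As a concrete illustration of this exchange, the following is a minimal Python sketch of how the Model container might reach the MongoDB container over intra-POD networking. It assumes the pymongo driver is installed in the Model image; the database and collection names (portaldb, records) are placeholders, not part of the recipe.

        # Illustrative sketch: the Model container reaching the MongoDB
        # container via intra-POD networking (localhost, exposed port 27017).
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017/")
        db = client["portaldb"]        # placeholder database name
        records = db["records"]        # placeholder collection name

        # Example round trip: insert a document, then read it back.
        result = records.insert_one({"user": "demo", "status": "active"})
        print(records.find_one({"_id": result.inserted_id}))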

  3. Intra-POD Communication Configuration

    “Model” Dockerfile

    In the Flask/Python module, Flask listens on its native port 5000. However, Flask is not a production-grade web server, so a qualified reverse proxy (Apache2 in this example) is configured to serve REST queries to the Flask/Python app. In this example, Apache2 is configured to listen natively on port 80; therefore the Model Dockerfile is configured to expose port 80 as well.
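    For reference, a minimal sketch of the Flask entry point inside the Model container follows. The module and route names are placeholders; the point is that Flask listens on its native port 5000 while Apache2 (configured separately in the image) listens on the exposed port 80 and reverse-proxies REST queries to Flask.

        # Illustrative Model entry point (placeholder names).
        # Flask listens on its native port 5000; Apache2, listening on the
        # container's exposed port 80, reverse-proxies REST queries to it.
        from flask import Flask, jsonify

        app = Flask(__name__)

        @app.route("/api/health")
        def health():
            # Simple endpoint the reverse proxy can forward to.
            return jsonify(status="ok")

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)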

    NoSQL Dockerfile

    MongoDB is natively configured to listen on port 27017, and the Dockerfile is configured to expose port 27017.

     

    Kubectl Container Setup

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: tool-portal-pods            # my example
    spec:
      replicas: 1
      template:
        metadata:
          name: tool-portal-pods        # my example
          labels:
            app: tool-portal            # my example
        spec:
          containers:
          # the following container definitions appear here

    Kubectl Model Configuration

    The Model’s socket must be set up in the Kubectl configuration YML. The IPv4 address is (as for all intra-POD containers) the compute Node’s localhost address, and the Model’s port is set to the port exposed in its Dockerfile. From the Kubectl YML file:

          - name: model
            image: registry.ng.bluemix.net/<your registry name>/model:Model
            imagePullPolicy: Always
            ports:
            - containerPort: 80         # as exposed in the Dockerfile

     

    Kubectl NoSQL Configuration

     

     

          - name: mongodb
            image: registry.ng.bluemix.net/<your registry name here>/mongo:MongoDB
            imagePullPolicy: Always
            ports:
            - containerPort: 27017      # as exposed in the Dockerfile

     

    Kubectl Host Entry

     

          hostAliases:
          - ip: <specify the Node's IPv4 address>
            hostnames:
            - <specify the Node's registered hostname>

    The hostAliases entry sets up /etc/hosts file entries, referencing the compute Node, in all of the POD’s containers.

     

    With the preceding configuration steps, the “Model” and “MongoDB” containers are able to communicate directly within the POD by referencing localhost and the target container’s port.

     

     

  4. External-POD Networking Fundamentals

    A reverse proxy is necessary for the Kubernetes app to receive REST queries from the outside. Choices are available, including load-balancing and Service Kubernetes configurations; the choice between them will depend on how PODs are distributed (or not) between compute Nodes. For simplicity in this example, per the Kubectl configuration, a single POD is created on a single Node, and the reverse proxy function is accomplished by declaring a Kubernetes “Service” object in the Kubectl file. The Service object functions as follows:

    [Figure: external client REST queries reaching the POD through the Kubernetes Service object]

    Reference the above graphic. Initially, the client browser sends a REST query to retrieve the initial View from the Model (Flask/Python). The initial HTML that loads into the client browser includes js file references that must be loaded and executed on the client side. Included among these js files is the “Controller” js that is necessary for the View to respond interactively to the client’s View activity.

    So, the order of external REST queries to the Kubernetes POD follows:

    1. The client pulls the initial HTML file (View) via a RESTful query to the Flask/Python container.
    2. The initial View includes references to js files (including the Controller js) that must be pulled by the client. These js files reside on (and are pulled from) the Controller container. Note: these js files could just as well be stored in and pulled from a Git repository, but their presence within the POD simplifies Deployment upgrade/update rollouts. A sketch of this two-step flow appears below.
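    To make the two-step flow concrete, the following is a hedged Python/Flask sketch of a Model route that returns the initial View HTML. The script tag points the client browser at the Controller container through the Service nodePort (30100) configured in the next step; the hostname placeholder and the file name controller.js are assumptions for illustration only.

        # Illustrative sketch: the Model returns the initial View HTML.
        # The <script> tag tells the client browser where to pull the
        # Controller js from: the Controller container, reached through
        # the Service nodePort (30100) configured in the next step.
        # NODE_HOST and controller.js are placeholders.
        from flask import Flask

        app = Flask(__name__)
        NODE_HOST = "<your Node hostname or IP>"

        INITIAL_VIEW = """<!DOCTYPE html>
        <html>
          <head>
            <script src="http://{host}:30100/controller.js"></script>
          </head>
          <body ng-app="portalApp">
            <div ng-controller="PortalController"></div>
          </body>
        </html>"""

        @app.route("/")
        def initial_view():
            return INITIAL_VIEW.format(host=NODE_HOST)

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)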

     

     

     

  5. External-POD Networking Details

    Controller Dockerfile

    The Controller container Dockerfile exposes port 8081.

    Controller Kubectl Configuration

    The Controller Container port setup in the Kubectl file follows:

        spec:
          containers:
          - name: control
            image: registry.ng.bluemix.net/<your registry name here>/control:Control
            imagePullPolicy: Always
            ports:
            - containerPort: 8081

     

    Note: the Model’s Kubectl configuration has already been described above.

     

    Kubectl Service Object Configuration

    The Kubernetes Service object includes reverse proxy configurations to support the client browser RESTful query for the initial View and the client browser RESTful queries to pull all the necessary js files from the Controller container.

     

    The Kubectl Service object setup configuration follows:

    apiVersion: v1
    kind: Service
    metadata:
      name: portal-service
      labels:
        app: tool-portal
    spec:
      type: NodePort
      selector:
        app: tool-portal
      ports:
      # the Flask/Python and Controller port proxy configurations appear here

     

    In the following Service object proxy configlet, the Kubernetes Service accepts inbound portal requests on nodePort 30000 and proxies them to the Flask/Python module at port 80:

     

      - name: model
        protocol: TCP
        port: 80
        nodePort: 30000

     

    In the following Service object proxy configlet, the Kubernetes Service accepts inbound requests for the Controller container on nodePort 30100 and proxies the js file pulls to the Controller container at port 8081:

      - name: control
        protocol: TCP
        port: 8081
        nodePort: 30100
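    With both proxy rules in place, an external client can issue the two REST queries directly against the nodePorts. The following Python sketch is one way to verify this from outside the cluster; the hostname and the controller.js path are placeholders, and the requests library is assumed to be available on the client.

        # Illustrative external client check of the two Service proxy rules.
        # Replace NODE_HOST with the compute Node's public hostname or IP.
        import requests

        NODE_HOST = "<your Node hostname or IP>"

        # 1. Pull the initial View from the Model
        #    (nodePort 30000 -> container port 80).
        view = requests.get("http://%s:30000/" % NODE_HOST, timeout=5)
        print(view.status_code, len(view.text))

        # 2. Pull the Controller js from the Controller container
        #    (nodePort 30100 -> container port 8081; file name is a placeholder).
        ctrl = requests.get("http://%s:30100/controller.js" % NODE_HOST, timeout=5)
        print(ctrl.status_code, len(ctrl.text))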

  6. Next Stage: Source Code Considerations

    Essential Kubernetes and Docker configurations supporting intra-POD and external-POD networking have been described above. Making the application container source code compatible with the Kubectl configurations is the subject of the next recipe in the “Automation Series”.
