Overview

Skill Level: Intermediate

Requires essential knowledge of IBM Cloud and Kubernetes.

This recipe illustrates the use of the cluster node's kube-proxy.log, the Apache logs, and the NoSQL logs in POD containers to trace and verify inbound and local session activity in a multi-container POD.

Ingredients

IBM Cloud, Kubernetes, kubelet, iptables, and kube-proxy activities

Step-by-step

  1. Review Multi-Container POD Architecture

    This recipe describes the expected session activity associated with an operational multi-container POD consisting of a Python/Flask portal, an HTML view, an AngularJS controller, and a NoSQL backend. A separate recipe, Custom App as Multi-Container POD, illustrates the operation of inbound (client-to-portal) session traffic and local POD (Python portal-to-backend) session traffic. This recipe illustrates how to use node and container logs to validate operational sessions and help resolve application session issues.

  2. What's Going On Inside the POD: Inbound Client-to-Portal Session Readiness

    Some setup must occur before the application's service proxy can support inbound client-to-portal connections. How the service proxy setup is accomplished depends on whether iptables or IPVS is employed. IPVS offers more efficient and flexible service proxy connection scheduling and has been available since Kubernetes 1.8. However, the service proxy illustrated in this recipe uses the default iptables mode. Further, the Kubernetes Service type in this example is NodePort.

     

    The following illustration from the recipe Custom App as Multiple Containers depicts the client-to-portal and client CSS/JavaScript file retrievals:

    [Figure inter_POD_3: client-to-portal and CSS/JavaScript file retrieval paths]

     

    The cluster node's kube-proxy.log shows the node's recruitment of iptables to provide the proxy capability:

    From the Kubernetes node's /var/log/kube-proxy.log:

     kube-dal13-cr2f81880f89a446ee9cd40db885fe3469-w1 kube-proxy.service[7269]: I0503 11:31:54.839394    7269 server_others.go:138] Using iptables Proxier.

     

    The Kubernetes deployment requested a Service proxy, as shown below; iptables is responsible for providing the functional Service ports:

    apiVersion: v1
    kind: Service
    metadata:
      name: portal-service
      labels:
        app: tool-portal
    spec:
      type: NodePort
      selector:
        app: tool-portal
      ports:
        - name: model
          protocol: TCP
          port: 80
          nodePort: 30000
        - name: control
          protocol: TCP
          port: 8081
          nodePort: 30100

     

    And as seen in /var/log/kube-proxy.log:

    kube-dal13-cr2f81880f89a446ee9cd40db885fe3469-w1 kube-proxy.service[1321]: I0718 17:15:30.416638    1321 proxier.go:1754] Opened local port "nodePort for default/portal-service:control" (:30100/tcp)

     

    kube-dal13-cr2f81880f89a446ee9cd40db885fe3469-w1 kube-proxy.service[1321]: I0718 17:15:30.416760    1321 proxier.go:1754] Opened local port "nodePort for default/portal-service:model" (:30000/tcp)

     

    At this point, the deployment's Service is ready for inbound connections.

     

    If iptables mode is in use and the above kube-proxy log entries are not present, inbound portal session traffic will not succeed.
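
    As a quick programmatic check of this readiness, the kube-proxy log can be scanned for the "Opened local port" entries shown above. The following is a minimal sketch, assuming access to the worker node's /var/log/kube-proxy.log and the nodePort values from the portal-service spec; adjust both for your own cluster.

    # Minimal sketch: confirm kube-proxy has opened the expected NodePorts.
    # LOG_PATH and EXPECTED_PORTS are assumptions based on the examples above.
    import re

    LOG_PATH = "/var/log/kube-proxy.log"
    EXPECTED_PORTS = {30000, 30100}   # nodePorts from the portal-service spec

    opened = set()
    with open(LOG_PATH) as log:
        for line in log:
            # Matches entries such as:
            #   Opened local port "nodePort for default/portal-service:model" (:30000/tcp)
            match = re.search(r'Opened local port .*\(:(\d+)/tcp\)', line)
            if match:
                opened.add(int(match.group(1)))

    missing = EXPECTED_PORTS - opened
    if missing:
        print("NodePorts not yet opened by kube-proxy:", sorted(missing))
    else:
        print("All expected NodePorts opened:", sorted(EXPECTED_PORTS))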

  3. Client Front End HTTP Session Stages

    The previously mentioned recipe Custom App as Multiple Containers illustrated that the client connects first to the Python/Flask app and then to the CSS and JavaScript repositories via separate RESTful sessions.

    The client must conduct two distinct HTTP session stages:

    • Initial client-to-portal HTTP session
    • Client CSS and JavaScript file retrievals

    Log file verification of both stages is illustrated in the following steps.

     

  4. Front End Stages: Client-to-Portal Session

    The Kubernetes NodePort proxy Service's public address is the shim toward which the client's web browser must direct HTTP requests in order to access the app portal.

    The Kubernetes cluster node's private IP address (10.187.93.141) forwards HTTP requests to the portal app container, which successfully responds to the client as follows (from the portal app's Apache access.log):

    10.187.93.141 - - [01/Aug/2018:11:30:48 +0000] "GET / HTTP/1.1" 200 2273 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"

    The specifics of the portal app are described in the recipes Python/Flask portal app and Model/View/Controller as Portal App.

    At this stage, the Python/Flask portal app's response to the client includes a bare HTML template. The client must still pull the CSS and JavaScript files from the repositories referenced in the template, which is the next verifiable session stage.
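
    To confirm this stage from the client side, the portal request can be reproduced with a short script. The following is a minimal sketch using the Python requests library; NODE_ADDRESS is a placeholder for the node's public address, and port 30000 is the "model" nodePort from step 2.

    # Minimal sketch: reproduce the client-to-portal request from outside the cluster.
    import requests

    NODE_ADDRESS = "NODE_ADDRESS"   # placeholder: the cluster node's public IP or hostname

    response = requests.get("http://%s:30000/" % NODE_ADDRESS, timeout=10)
    print(response.status_code)            # expect 200, matching the access.log entry above
    print(len(response.text), "bytes")     # the bare HTML template returned by the Flask portal

    # A successful request here should produce a corresponding "GET / HTTP/1.1" 200
    # entry in the portal container's Apache access.log.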

  5. Front End Stages: Client Retrieval of the Portal App's CSS and JavaScript Files

    The portal app's HTML template retrieved by the client (described above) contains references to CSS and JavaScript files that the client must retrieve directly in order to assemble a functional portal view.

    The HTML template file contains file references such as the following (from the author's working portal app):

     

    <link href="http://HOST-REF:30100/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet">
    <link href="http://HOST-REF:30100/bower_components/bootstrap-css/css/bootstrap-theme.min.css" rel="stylesheet">

    ...

    <script type="text/javascript" src="http://HOST-REF:30100/bower_components/angular/angular.min.js"></script>
    <script type="text/javascript" src="http://HOST-REF:30100/bower_components/jquery/dist/jquery.min.js"></script>
    <script type="text/javascript" src="http://HOST-REF:30100/bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
    <script type="text/javascript" src="http://HOST-REF:30100/bower_components/ng-ip-address/ngIpAddress.min.js"></script>
    <script type="text/javascript" src="http://HOST-REF:30100/app-ip.js"></script>
    <script type="text/javascript" src="http://HOST-REF:30100/announce_prefix3.js"></script>

     

    Each of these files must be retrieved from a repository, which in this app instance is a container in the same Kubernetes POD called "control", referenced in step 2 above.

     

    Successful retrieval of these files can be verified via the Apache access.log file in the “control” container as below:

    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/ng-ip-address/ngIpAddress.min.js HTTP/1.1" 200 1152 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/jquery/dist/jquery.min.js HTTP/1.1" 200 30661 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/bootstrap/dist/js/bootstrap.min.js HTTP/1.1" 200 14401 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/bootstrap-css/css/bootstrap-theme.min.css HTTP/1.1" 200 3114 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/bootstrap/dist/css/bootstrap.min.css HTTP/1.1" 200 21426 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /bower_components/angular/angular.min.js HTTP/1.1" 200 59886 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /app-ip.js HTTP/1.1" 200 523 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
    10.187.93.141 - - [31/Jul/2018:12:56:52 +0000] "GET /announce_prefix3.js HTTP/1.1" 200 1178 "http://e.2a.3da9.ip4.static.sl-reverse.com:30000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"

     

    The above logs indicate that the referring page was in all cases the "model" container's HTML template, proxied on port 30000 by the Kubernetes Service, as described in step 2 above.

     

    At this stage, the portal app's initial page should be fully loaded on the client, with the necessary JavaScript files loaded as well.
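
    If any asset fails to load, the retrievals can also be checked in bulk. The following is a minimal sketch that assumes the "control" container's Apache access.log has been copied to a local file named access.log (for example with kubectl cp); it flags any CSS or JavaScript retrieval that did not return HTTP 200.

    # Minimal sketch: verify the CSS/JavaScript retrievals in the control container's access.log.
    import re

    # Pull the request path and status code out of each combined-log-format entry.
    ENTRY = re.compile(r'"GET (?P<path>\S+) HTTP/1\.1" (?P<status>\d{3}) ')

    with open("access.log") as log:        # copied out of the "control" container
        for line in log:
            match = ENTRY.search(line)
            if not match:
                continue
            path, status = match.group("path"), match.group("status")
            # Only the assets referenced by the portal's HTML template matter here.
            if "/bower_components/" in path or path.endswith((".js", ".css")):
                marker = "" if status == "200" else "   <-- retrieval failed"
                print(status, path + marker)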

  6. Backend Session Mechanism: Portal to NoSQL Queries

    The Python/Flask portal app must query, or put documents into, the backend NoSQL (MongoDB) container. The setup for this includes the following:

    From the Python (portal) app container:

    from pymongo import MongoClient, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    db = client.portalDB

    The MongoDB Dockerfile is set to expose port 27017 and Mongo listens natively on 27017. 

     

    Kubernetes cluster support for the portal-to-MongoDB sessions (or for any intra-POD sessions) is described in the illustration below, taken from the recipe Custom App as Multiple Containers.

    [Figure intra_pod_ntwking_2: intra-POD networking between container localhost ports]

     

    Intra-POD networking is based on connections from one POD container's localhost:port to another POD container's localhost:port.

    The Python app container issues NoSQL document insertion commands to the MongoDB container from a localhost high port to localhost:27017, as sketched below.
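
    The following is a minimal sketch of that insertion path, reusing the localhost:27017 connection shown earlier. The collection name (prefixinfo2) and the document fields are taken from the mongod.log entry further below; this is an illustration, not the portal app's exact code.

    # Minimal sketch: insert a prefix document over the intra-POD localhost connection.
    # Collection and field names mirror the mongod.log entry shown in this step.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client.portalDB

    document = {
        "nexthop": "10.10.10.1",
        "localpref": "700",
        "aspath": "1000 1000",
        "prefix": "10.10.10.10",
        "opnotes": "test",
        "neighbor": "10.10.10.1",
    }

    result = db.prefixinfo2.insert_one(document)
    print("inserted _id:", result.inserted_id)   # the insert is also recorded by mongod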

    The initial connection can be verified in the MongoDB container’s mongod.log:

    2018-08-02T11:09:58.271+0000 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:47582 #5 (5 connections now open)

     

    Subsequent document insertions originating from the portal app container are executed through the open connection and verifiable via the mongod.log:

    2018-08-02T11:09:58.277+0000 I COMMAND  [conn5] command portalDB.prefixinfo2 command: insert { insert: "prefixinfo2", ordered: true, documents: [ { nexthop: "10.10.10.1", localpref: "700", aspath: "1000 1000", prefix: "10.10.10.10", opnotes: "test", neighbor: "10.10.10.1", _id: ObjectId('5b62e686844b520023c9fb04') } ] } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_query 5ms
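
    Reading documents back over the same connection can be verified in a similar way. The following is a minimal sketch; sorting on _id with DESCENDING (imported by the portal app in the snippet above) is an assumption made here for illustration.

    # Minimal sketch: read back the most recently inserted prefix documents.
    from pymongo import MongoClient, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    db = client.portalDB

    # Newest documents first; field names follow the mongod.log entry above.
    for doc in db.prefixinfo2.find().sort("_id", DESCENDING).limit(5):
        print(doc["prefix"], doc["nexthop"], doc["opnotes"])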

     

     

     

  7. Summary

    The steps above describe how to verify iptables support for inbound sessions and how to subsequently verify successful inbound connections.

    Most setup problems can be resolved in either the Dockerfile's port exposure or the Kubernetes deployment file.

     

    In a similar fashion, the setup for intra-POD sessions has also been described.

    Portal-app-to-NoSQL document insertion issues, as well as other MongoDB session problems, can likewise be addressed by reviewing mongod.log entries.
