Following our blog about implementing a dedicated solution pattern, here we describe how to run container workloads securely in a Kubernetes cluster on IBM Cloud Kubernetes Service (IKS) without using a gateway router or firewall appliance for security. We call this the Containers Workload Pattern, shown in the following diagram, and describe the IKS features that enable you to run your workload securely according to your requirements:

  • Edge worker nodes
  • Secure non-edge worker nodes
  • Network policies to control traffic
  • VPN connectivity to your workload
  • Private Load Balancer and Ingress


The main points applicable for this pattern are:

  • You have only container workloads that you want to run in the cloud; that is, you are not using any VSIs (Virtual Server Instances) or bare-metal servers.
  • You need to securely access your container workloads from your enterprise network and/or the public internet.
  • You need secure connectivity between your container workloads running in multiple clusters.

Edge Worker Nodes

Edge worker nodes can improve the security of your Kubernetes cluster by allowing fewer worker nodes to be accessed externally and by isolating the networking workload in IBM Cloud Kubernetes Service. When these worker nodes are marked for networking only, other workloads cannot consume the CPU or memory of the worker node and interfere with networking.

In an IKS Kubernetes cluster, you can mark nodes as edge nodes by adding the dedicated=edge label to two or more worker nodes. This ensures that Ingress and load balancer pods, which carry the dedicated=edge toleration, are deployed only on those worker nodes.

$ kubectl label nodes <node1_name> <node2_name> dedicated=edge

Then you can use Kubernetes taints to prevent other workloads from running on the edge worker nodes.

$ kubectl taint node <node_name> dedicated=edge:NoSchedule dedicated=edge:NoExecute
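Pods that should keep running on the edge nodes need tolerations matching these taints. Here is a minimal sketch of what such a toleration looks like in a pod spec; IKS adds equivalent tolerations to its own load balancer and Ingress pods:

# Sketch of the tolerations that let a networking pod stay on tainted edge nodes
tolerations:
- key: dedicated
  value: edge
  effect: NoSchedule
- key: dedicated
  value: edge
  effect: NoExecute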

Non-Edge Worker Nodes

Whether you use edge nodes or not, you can secure the non-edge worker nodes by blocking inbound traffic to NodePorts on the worker nodes. Blocking NodePorts ensures that the edge worker nodes are the only worker nodes that handle incoming traffic.

Here is how to block inbound access to all public NodePorts.

  • For a cluster running Kubernetes version 1.10 or later, create a Calico v3 network policy file, say deny-kube-node-port-services.yaml. Note that you can block incoming traffic to your services based on traffic source or destination. See the documentation for full details.
  apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: deny-kube-node-port-services
  spec:
    applyOnForward: true
    ingress:
    - action: Deny
      destination:
        ports:
        - 30000:32767
      protocol: TCP
      source: {}
    - action: Deny
      destination:
        ports:
        - 30000:32767
      protocol: UDP
      source: {}
    preDNAT: true
    selector: ibm.role in { 'worker_public', 'master_public' }
    types:
    - Ingress
  • Apply the Calico preDNAT network policy.
$ calicoctl apply -f deny-kube-node-port-services.yaml

Note that you can also block inbound access to NodePorts on the edge nodes if you don’t intend to use external NodePorts at all, that is, if you expose services only through load balancers or Ingress.

Network Policies

Every Kubernetes cluster in IKS is set up with default Calico and Kubernetes policies that secure the public interface of the worker nodes. If you have unique security requirements, you can use Kubernetes and Calico to create your own network policies for the cluster.
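For example, a standard Kubernetes NetworkPolicy can restrict which pods may reach your application pods. A hypothetical sketch follows; the production namespace and the app: frontend label are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: production
spec:
  podSelector: {}          # apply to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only pods with this label may connect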

Configuring VPN

With VPN connectivity, you can securely connect apps in a Kubernetes cluster on IKS to an on-premises network. You can also connect apps that are external to your cluster to an app that is running inside your cluster. To connect your worker nodes and apps to an on-premises data center, you can configure the strongSwan IPsec VPN service. The strongSwan IPsec VPN service securely connects your Kubernetes cluster with either an on-premises network or another Kubernetes cluster in one of your other accounts. The strongSwan VPN service can also be used to provide access to non-Kubernetes resources, that is, resources that are not exposed directly on the cluster itself.
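The strongSwan service is installed into the cluster as a Helm chart. Here is a minimal sketch of a config.yaml for connecting to an on-premises gateway, assuming the chart’s documented configuration options; the angle-bracket values are placeholders:

ipsec:
  keyexchange: ikev2                    # key exchange protocol to negotiate with the peer
  auto: start                           # initiate the tunnel from the cluster side
local:
  subnet: 172.30.0.0/16,172.21.0.0/16   # cluster pod and service CIDRs to expose
remote:
  gateway: <on_prem_gateway_public_IP>  # public IP of the on-premises VPN endpoint
  subnet: <on_prem_subnet_CIDR>         # on-premises subnet reachable through the tunnel
preSharedKey: <shared_secret>

You would then deploy the chart with something like helm install -f config.yaml --name=vpn ibm/strongswan (Helm 2 syntax, matching the era of the bx cs commands later in this post).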

Some corporate customers might limit VPN connections to their corporate IPs only. Others might block all public inbound traffic entirely, establish the VPN outbound from the cluster to the corporate network, and reach their Kubernetes cluster from the corporate side through private load balancers and Ingress.
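As a hypothetical sketch of the first approach (the corporate CIDR 203.0.113.0/24 is a placeholder), a Calico preDNAT policy can admit IPsec traffic, IKE on UDP port 500 and NAT traversal on UDP port 4500, from corporate addresses only:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-vpn-to-corporate
spec:
  applyOnForward: true
  preDNAT: true
  order: 900                  # evaluate before the default allow policies
  selector: ibm.role == 'worker_public'
  types:
  - Ingress
  ingress:
  - action: Allow             # IKE and NAT-T from corporate addresses
    protocol: UDP
    destination:
      ports: [500, 4500]
    source:
      nets: ["203.0.113.0/24"]
  - action: Deny              # drop IPsec traffic from anywhere else
    protocol: UDP
    destination:
      ports: [500, 4500]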

Private Connectivity between Clusters in Same Account

Note that you don’t need a VPN to connect applications in different clusters in the same IBM Cloud account. Applications running in an IKS Kubernetes cluster can be exposed through Kubernetes LoadBalancer and Ingress services on IBM Cloud infrastructure private IP addresses, so that the application traffic does not travel over the public internet.

Private Load Balancer

You can expose your app deployed in an IKS cluster using a private IP address by creating a Kubernetes resource as follows:

apiVersion: v1
kind: Service
metadata:
  name: myloadbalancer
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
spec:
  type: LoadBalancer
  selector:
    <selector_key>: <selector_value>
  ports:
  - protocol: TCP
    port: 8080
  loadBalancerIP: <private_IP_address>

Note the annotation service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private and the loadBalancerIP field, which together specify that a private LoadBalancer service is created.
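Once the service is created, you can check that it was assigned a private address; the EXTERNAL-IP column should show an IP from your private subnet rather than a public one:

$ kubectl get service myloadbalancer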

See the IKS documentation on LoadBalancer services for full details.

Private Ingress

Ingress is a Kubernetes service that balances network traffic workloads in your cluster by forwarding public or private requests to your apps. You can use Ingress to expose multiple app services to a private network using a unique private route. Expose multiple apps in your Kubernetes cluster by creating Ingress resources that are managed by the IBM-provided application load balancer in IKS.

The application load balancer (ALB) is an external load balancer that listens for incoming HTTP, HTTPS, TCP, or UDP service requests and forwards requests to the appropriate app pod. When you create a standard cluster, IBM Cloud Kubernetes Service automatically creates a highly available ALB for your cluster and assigns a unique public route to it. The public route is linked to a portable public IP address that is provisioned into your IBM Cloud infrastructure (SoftLayer) account during cluster creation. A default private ALB is also automatically created, but is not automatically enabled. You can enable the private ALB for your cluster, and disable the public ALB if you wish.

First, list the ALBs.

$ bx cs albs --cluster <cluster_name>
OK
ALB ID                                            Enabled   Status     Type      ALB IP
private-cr8c75d6cffda848349ef7144043474547-alb1   false     disabled   private   -
public-cr8c75d6cffda848349ef7144043474547-alb1    true      enabled    public    169.61.32.238

Use the ALB IDs of your private and public ALBs in the following commands to enable the private ALB and disable the public ALB.

$ bx cs alb-configure --albID public-cr8c75d6cffda848349ef7144043474547-alb1 --disable
Configuring ALB...
OK

$ bx cs alb-configure --albID private-cr8c75d6cffda848349ef7144043474547-alb1 --enable
Configuring ALB...
OK

List the ALBs again to check that they are now configured as desired.

$ bx cs albs --cluster <cluster_name>
OK
ALB ID                                            Enabled   Status     Type      ALB IP
private-cr8c75d6cffda848349ef7144043474547-alb1   true      enabled    private   10.73.78.238
public-cr8c75d6cffda848349ef7144043474547-alb1    false     disabled   public    -

See the IKS documentation on enabling the default private ALB for full details.

Next, expose your apps by creating an Ingress resource, using a YAML file as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingressresource
  annotations:
    ingress.bluemix.net/ALB-ID: "<private_ALB_ID>"
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - path: /<app1_path>
        backend:
          serviceName: <app1_service>
          servicePort: 80
      - path: /<app2_path>
        backend:
          serviceName: <app2_service>
          servicePort: 80
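To verify the private route, you can call the private ALB from a host inside the private network. A sketch, with <private_ALB_IP> standing in for the private ALB IP listed by bx cs albs above:

$ curl -H "Host: <domain>" http://<private_ALB_IP>/<app1_path>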

See the IKS documentation on exposing apps to a private network for full details.

That was a quick summary of the key features of IBM Cloud Kubernetes Service that enable secure container workload patterns for our customers. For more details, look at the comprehensive description of IKS security. We recommend you start with the Get Started with Kubernetes and IKS page. Also, for an understanding of container network security concepts, see the recent article Network Security: A critical component for securing your containers.
