
IBM Developer Blog


Service mesh and K3s: Technology advances worth knowing if you work on cloud-native projects

Every year, the Cloud Native Computing Foundation (CNCF) and The Linux Foundation organize large-scale events that are planned, delivered, and treasured by open source contributors, companies, and users across the world. This year, another successful KubeCon and CloudNativeCon Europe was held from 17 – 20 August, this time in fully virtual mode, which made it even more special and collaborative. It drew an enthusiastic audience and openhearted contributions from individuals and companies around the world who are actively involved in the many open source projects hosted by the CNCF.

This event is a real festival for developers and technology enthusiasts: creators, contributors, and users. I attended this virtual summit and want to share two technology advances that you should know about if your projects involve cloud-native technologies.

Service mesh: No longer just a mesh

Besides Kubernetes, CNCF service mesh projects, such as Linkerd, are very popular. Service mesh is no longer just a buzzword or a fancy technology for managing a mesh of microservices. Many advanced use cases and developments are happening around service mesh: on one side, it is getting ready for complex multicluster and hybrid-cluster meshes, and on the other, horizontal use cases are emerging and taking shape.

This is a prominent area that caught my attention during this year’s KubeCon and CloudNativeCon event. One of the horizontal use cases that I learned about is network functions delivered through a service mesh. The project is named Network Service Mesh (NSM), and it is a CNCF sandbox project. It is of great interest for telco-grade network services and network management functions.

NSM extends the service mesh idea and is inspired by the Istio service mesh. NSM provides the “missing” Kubernetes networking capabilities between containers that run services, or between containers and external endpoints, through a simple set of APIs. It offers the ease and functionality of a service mesh, but for L3-level payloads and functionality, with workload-to-workload granularity and a loosely coupled, heterogeneous network configuration.

Diagram of Network Service Mesh hybrid and multicloud connectivity and security

Source: Network Service Mesh

For example, with NSM, requests for new network functionality can be routed between heterogeneous workloads, such as from virtual machine (VM) networking to pod networking, through a mesh of services that operate at the L2/L3 level. This makes network functionality, connectivity between heterogeneous workloads across inter-cluster environments, and their provisioning much easier than before. It can also be highly useful for advancing telecom and 5G use cases. Therefore, many telecom and networking companies, such as Ericsson, Cisco, and Juniper Networks, are investing heavily in deep use cases of NSM. If this sounds interesting and you want to learn more about NSM or other service mesh use cases, do not miss the replay of the Building the cloud-native telco with Network Service Mesh talk from KubeCon, and check out the Network Service Mesh project.
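To make the idea concrete, here is a minimal sketch of how a client workload might request a network service from NSM. In the sandbox-era NSM releases, a pod asked for a connection by carrying an annotation; the annotation key, the service name (vpn-gateway), and the value syntax below are illustrative assumptions that vary across NSM versions, so consult the project documentation for the exact API of your release.

```shell
# Hypothetical NSM client pod: the annotation asks NSM to wire this pod
# to a network service named "vpn-gateway" at the L3 level.
# Annotation key and value are illustrative; check the NSM docs for the
# form used by your release.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nsm-client
  annotations:
    ns.networkservicemesh.io: vpn-gateway
spec:
  containers:
    - name: workload
      image: alpine:3.12
      command: ["sleep", "infinity"]
EOF
```

The point of the design is that the workload declares *what* connectivity it needs, and NSM resolves *how* to provide it, regardless of whether the endpoint lives in another pod, a VM, or an external network.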

K3s: Small yet powerful baby of K8s

Edge use cases are the next big thing, and they are taking shape quickly. At this year’s event, I was curious about what the Kubernetes (K8s) and cloud-native communities are doing in that space. Interestingly, I found the K3s project talks. In simple terms, K3s is a lightweight Kubernetes distribution. It is designed to run Kubernetes on infrastructure with minimal resource availability or requirements. It is growing popular for two main design patterns, apart from many others that might evolve:

  1. For edge use cases, such as IoT devices, wind turbines, and set-top boxes.
  2. For having multiple small, self-sufficient clusters that can be used for CI/CD, development, and test setups, for instance.

K3s is evolving well for edge use cases, where you mostly have CPU and memory constraints, and there are real merits to using it. Installing K3s is super easy and prepares a single-node cluster for use with minimal effort. Helm chart support is built into K3s, which makes installing applications easier. After installation, you can add more worker nodes by running the K3s agent on other machines and joining them to the same cluster: you share a secret token generated on the control plane node with each worker node, and the node is ready for use. K3s also uses manifests and popular Kubernetes concepts, such as static pods, to bring up applications when the cluster itself starts, which is how embedded software typically loads and starts functioning without manual intervention.
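The install-and-join flow described above can be sketched with the K3s quick-start installer script. This is a hedged sketch, not a complete guide: the server hostname is a placeholder, and the token path reflects the default location in current K3s releases.

```shell
# Install K3s on the control plane node. The get.k3s.io script sets up
# k3s as a service and brings up a single-node cluster.
curl -sfL https://get.k3s.io | sh -

# Read the join token that the server generated (default path).
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional machine, run the installer in agent mode, pointing
# at the server's API endpoint. Replace the hostname and token with
# your own values.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://my-k3s-server:6443 \
  K3S_TOKEN=<token-from-server> \
  sh -

# Back on the server, verify that the new worker node has joined.
sudo k3s kubectl get nodes
```

Note that K3s bundles kubectl in the same binary, which is why `k3s kubectl` works immediately after installation without any separate tooling.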

Diagram of how the K3s Server and Agent work

Source: K3s

K3s is very lightweight: it ships as a single binary of roughly 50 MB with a memory footprint of around 300 MB. Though it is becoming popular for edge use cases, it also works well for small application clusters and even for CI/CD pipelines. There has been a lot of excitement in the open source community since K3s was accepted as a sandbox project by the CNCF in August. If you want to learn more, watch the Running K3s, lightweight Kubernetes, in production for the edge and beyond talk from KubeCon and read the K3s project documentation.

Next steps

The KubeCon and CloudNativeCon North America 2020 virtual event is scheduled for 17 – 20 November. I encourage you to register and attend.