
Secure a Hyperledger Fabric sample app with a custom CA and deploy it to a Kubernetes cluster

This tutorial shows you how to enable Transport Layer Security (TLS) for key communications between the nodes of a Hyperledger Fabric network that is deployed in a Kubernetes cluster on Red Hat Enterprise Linux (RHEL). We chose RHEL as a target distribution because it is one of the most common enterprise-class Linux distributions for running production systems in corporate environments, with commercial support services readily available.

By following these steps, you’ll be able to simulate a scaled-down version of a production network in your local development environment. To do this, it’s essential that you secure the communication within the cluster with certificates signed by a real public key infrastructure (PKI). TLS encrypts traffic in transit, protecting the data exchanged between the nodes. In addition to TLS configuration, this tutorial also shows how different digital certificates obtained from a PKI are primed into a Fabric blockchain network.

Learning objectives

  • Set up a single-node Kubernetes cluster on Red Hat Enterprise Linux, with the primary and secondary roles running on the same physical server.
  • Deploy a Hyperledger Fabric blockchain network.
  • Modify the default cluster configuration to integrate custom TLS for communication among the Fabric network components.
  • Deploy a simple application to test that the network and TLS are properly configured.
  • Replicate the configuration steps using digital certificates issued by a real-world public key infrastructure (PKI) for your Hyperledger Fabric applications running on Kubernetes.


Prerequisites

You’ll need several software components installed; the steps in this tutorial walk you through each installation.

Basic knowledge of the following will help you get the most out of this tutorial:

  • Linux administration, preferably of Red Hat Enterprise Linux
  • Public key cryptography
  • Hyperledger Fabric and related tooling such as cryptogen, configtxgen, and configtxlator

Don’t panic! You don’t need to be an expert on these topics. In this tutorial, we provide insights on aspects of these technologies where understanding is key to moving to the next step. If you’re ready, let’s buckle up and push ahead!

Estimated time

Completing this tutorial should take about 4 hours.

Getting started

Container orchestration technologies, such as Kubernetes, are essential to running modern microservices applications in production. These technologies do the heavy lifting when it comes to deploying, connecting, scaling, and making containerized applications available in a production network.

When you decompose applications into a set of distinct interoperating services, securing the communication among these services becomes as important as securing access from external clients or services. The introduction of certificates that are signed by well-known certificate authorities (CAs) is a very common approach to securing communications among parties, regardless of whether they are internal or external to the solution.

Proof-of-concept and pilot applications often start with self-run Fabric network deployments using sample configurations that are meant for a development setting, as opposed to utilizing a business-grade blockchain-as-a-service platform. As these solutions transition to production, operating a secure implementation of the Fabric network on a chosen container orchestration platform becomes a primary concern.

While there is extensive documentation available on the internet about Kubernetes, and even more documentation on Transport Layer Security, we found very little information on how to configure a Kubernetes cluster with custom TLS for Hyperledger Fabric blockchain applications. We recently faced this challenge while porting a vanilla Hyperledger deployment, designed for local development, to production-grade QoS settings. The lack of a step-by-step recipe for making this transition is what motivated us to write this tutorial and share what we learned during the process.

Tutorial material

In addition to these step-by-step instructions, we are also providing all of the materials you need to repeat the steps discussed in this tutorial at the following repository: https://github.com/hyp0th3rmi4/hlf-k8s-custom-crypto

The repository contains the Kubernetes manifests and the associated configuration files needed to set up a sample Fabric network (2 organizations, 2 CAs, and 2 peers per organization) with a clustered Kafka-based orderer and with transport layer security for communication among the nodes of the network.

For node identity and transport layer security, non-production Fabric networks generally utilize digital certificates that are issued by the organization CAs, which themselves use self-signed root certificates. These are commonly generated by the cryptogen tool supplied by Fabric. But in production scenarios, these certificates are either directly signed by or in the trust chain of known public PKI. This tutorial shows you how to integrate such certificates in place of those issued and signed by the organizational CAs of the Fabric network.

Sample application

In this tutorial, you’ll use the command-line end-to-end application (e2e_cli) as a test application for validating your network configuration. This tutorial demonstrates the challenges related to setting up Kubernetes and running a Fabric application in a setup that is similar to a production deployment. The sample application demonstrates all the basics of Fabric and acts as a useful test for your setup — but is not overly complex.

As we show you the setup for this application, you can use the same approach for your own applications.

“The Wizard of Oz” and Kubernetes to the rescue

“Toto, I have a feeling we’re not in Kansas anymore.”

This famous line from the movie “The Wizard of Oz” is often used as a metaphor for communicating the discomfort that many people feel when they are in an unusual place or condition. Very much like Dorothy in the movie, as you step from your development environment into a production setting, you may find yourself dealing with a completely different set of requirements in terms of scalability, availability, application monitoring, traceability, and security. In the era of microservices, this problem is exacerbated by the increased number of components that you need to coordinate, connect, and wire together. This is true for most applications and definitely relevant for applications that are built on top of Hyperledger, which already starts with a baseline of several containers that need to be deployed just so you have a toy network to play with.

The usual single-file setup deployments based on Docker Compose (such as those provided by Yeasy) are not sufficient anymore, so we need a more production-grade approach. This is where Kubernetes comes to the rescue.

Kubernetes is a platform for running containerized applications on clusters that can be built out of heterogeneous infrastructure. Out of the box, it provides support for rolling updates, scaling up and down services, controlling the routing of traffic to different versions of services, and highly available setups. Kubernetes allows you to have a fine degree of control over your deployment, and it does so through a collection of system components that complement and support the operations of the container engine (such as Docker, rkt, or others). This complexity enables you to have a lot of power and flexibility, but it can also be overwhelming if you don’t know the basic concepts. Figure 1 gives an overview of the architecture of a Kubernetes cluster. This tutorial briefly illustrates the key concepts that can help you understand the next steps.

Figure 1. Kubernetes architecture


Kubernetes follows a primary-secondary architecture: The primary node coordinates the activity of the cluster, manages its state, and schedules workloads across the secondary/worker nodes based on both user-defined requirements and the internal status of the cluster. The functions of the primary node are implemented through a collection of services that can be distributed onto different nodes. Like the primary node, each secondary/worker node is composed of a collection of services, which are shown in Figure 1.

In addition to these key architectural components, there are other abstractions that Kubernetes uses to orchestrate the execution of containers in the cluster. For this tutorial, we’re most interested in the following:

  • Pods: Pods are the minimal deployment units that Kubernetes uses, and they are designed for hosting tightly coupled containers (one or more). The main container represents an application service, and supporting containers are used to monitor its lifecycle and connectivity to the other services. A pod is meant to be a stateless deployment unit that can be terminated and rescheduled for execution on any node of the cluster.

  • Services: A service usually represents a meaningful entity for the deployed application, and it can span multiple pods, which all perform the same function. In the Kubernetes world, a service is the entry point to the corresponding capabilities deployed in the pods, and acts as a load balancer across all of the mapped pods.

  • Deployments: Deployments specify the mapping of Kubernetes services onto pods. While a service defines the mapping to node ports, the corresponding deployment contains information about the containers that are required by the service and the number of replicas to deploy.

  • Labels: Labels are a mechanism used by Kubernetes to append additional information to any of the entities used in the cluster. They are used for implementing cluster functions and made available to the end user to enrich the metadata associated with a service, a deployment, or any other entity. For example, labels are used to pair service definitions with the corresponding deployments.

  • Proxies: Proxies play a fundamental role in making the services deployed in a pod available to the other components of the architecture. The function of a proxy is implemented by the kube-proxy component illustrated in Figure 1, which has responsibility for subnetting at the host level and exposing services outside the node according to the manifest that defines the service.

  • Namespaces: Namespaces are a mechanism for creating multiple virtual clusters within the same physical cluster. This abstraction also provides the boundary in terms of scope and visibility of services; all services are deployed within a namespace and can reach all the other services in the same namespace. Kubernetes comes with three default namespaces: default, kube-system, and kube-public. The first is the default namespace for user workloads, the second is used to deploy system services, and the third is a special namespace for resources that need to be publicly visible and readable throughout the whole cluster, even by unauthenticated users.

  • Domain name service (DNS): In addition to the basic cluster-based name resolution capabilities, Kubernetes allows for the integration of an additional DNS resolution. This is essentially a service that is deployed onto the cluster as any other application-level service. The only difference is that it is deployed in the system namespace and has privileged access to some of the Kubernetes system components.

It is beyond the scope of this tutorial to describe every component of the architecture. For a more complete overview of Kubernetes, look at the Kubernetes documentation or tutorials such as the one hosted at Digital Ocean.
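To make the pairing between services, deployments, and labels concrete, here is a minimal, illustrative manifest pair. The names, image, label, and ports below are made up for this sketch (they are not taken from the tutorial’s repository); the point is only to show how the Service selector and the pod template label line up:

```yaml
# Illustrative sketch only: a Service finds the pods of a Deployment through
# the shared label "app: hello". Names, image, and ports are made up.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello          # matches the pod template label below
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello      # the label the Service selector matches
    spec:
      containers:
      - name: hello
        image: nginx
        ports:
        - containerPort: 8080
```

Traffic sent to the Service on port 80 is load-balanced across whichever pods currently carry the `app: hello` label, regardless of which node they run on.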

1. Get Kubernetes up and running on RHEL

The first thing to do is to prepare your environment to run Kubernetes-based applications. We chose Red Hat Enterprise Linux (RHEL) as a target distribution because it is one of the most common Linux distributions for running production systems within a corporate environment. Our goal is to provide a fully functional single-node Kubernetes cluster based on Red Hat Enterprise Linux.

Red Hat Enterprise Linux offers OpenShift as its default solution for container orchestration. OpenShift is based on Kubernetes and adds services on top of the core capabilities of the framework in terms of networking, multi-tenancy, image registry, logging, and monitoring. The platform is fully compatible with Kubernetes: Services and deployments defined for a Kubernetes cluster can be transparently deployed on OpenShift. OpenShift is available as an open source platform and as an enterprise offering with 24/7 support. Because OpenShift is the default offering, vanilla Kubernetes and Docker are not readily available in the RHEL distribution and need to be installed manually.

In order to have a fully functional working Kubernetes cluster, you need to install two components: Docker Enterprise Edition and Kubernetes.

a. Install Docker Enterprise Edition

  1. Remove any previous Docker versions on your system:

     sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-selinux \
                      docker-engine-selinux \
                      docker-engine
  2. Locate your Docker EE repository that will be used for the installation. This can be done by accessing your online profile of the Docker Store, where all of the trials and subscriptions available to you are listed:

    • Login to https://hub.docker.com/my-content.
    • Click Setup for Docker Enterprise Edition for Red Hat Enterprise Linux.
    • Copy the URL from Copy and paste this URL to download your edition.

       We will refer to this URL as <DOCKER_REPO_URL>. The following procedure configures your environment to download Docker EE from that repository:

       # store the Docker EE repository URL in an environment variable
       export DOCKER_URL="<DOCKER_REPO_URL>"
       # remove existing docker repositories
       sudo rm /etc/yum.repos.d/docker*.repo
       # store the url to the docker repository as a YUM variable
       sudo -E sh -c 'echo "$DOCKER_URL/rhel" > /etc/yum/vars/dockerurl'
       # store the OS version string as a YUM variable. We assume here that the RHEL
       # version is 7. You can also use a more specific version.
       sudo -E sh -c 'echo "7" > /etc/yum/vars/dockerosversion'
       # install the additional packages required by the devicemapper storage driver
       sudo yum install yum-utils \
                        device-mapper-persistent-data \
                        lvm2
       # enable the extras RHEL repository. This provides access to the
       # container-selinux package required by docker-ee
       sudo yum-config-manager --enable rhel-7-server-extras-rpms
       # add the Docker EE stable repository
       sudo -E yum-config-manager --add-repo "$DOCKER_URL/rhel/docker-ee.repo"
  3. Install Docker Enterprise Edition:

     sudo yum install docker-ee
     sudo systemctl start docker

    You may be prompted to accept the GPG key. Verify that the fingerprint matches the following value, and if it does then accept it. Your installation of Docker EE is now complete.

    Fingerprint: 77FE DA13 1A83 1D29 A418 D3E8 99E5 FF2E 7668 2BC9

  4. Test that your Docker installation is working. You can run the hello-world container for verification:

     sudo docker run hello-world

    If the installation of your Docker runtime is successful, you should see the following output:

     Pulling from library/hello-world
     9bb5a5d4561a: Pull complete
     Status: Downloaded newer image for hello-world:latest
     Hello from Docker!
     This message shows that your installation appears to be working correctly.
     To generate this message, Docker took the following steps:
      1. The Docker client contacted the Docker daemon.
      2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
      3. The Docker daemon created a new container from that image which runs the
         executable that produces the output you are currently reading.
      4. The Docker daemon streamed that output to the Docker client, which sent it
         to your terminal.
     To try something more ambitious, you can run an Ubuntu container with:
      $ docker run -it ubuntu bash
     Share images, automate workflows, and more with a free Docker ID:
      https://hub.docker.com/
     For more examples and ideas, visit:
      https://docs.docker.com/get-started/

More details on how to install Docker EE on Red Hat Enterprise Linux are covered in the Docker documentation.
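When yum asks you to accept the GPG key, it is worth scripting the comparison rather than eyeballing forty hex digits. The sketch below normalizes spacing and case before comparing; the "reported" string is a stand-in for the fingerprint yum prints during key import:

```shell
# Sketch: normalize and compare the printed GPG fingerprint against the
# expected Docker EE value. The "reported" string is a stand-in for yum output.
expected='77FE DA13 1A83 1D29 A418 D3E8 99E5 FF2E 7668 2BC9'
reported='77fe da13 1a83 1d29 a418 d3e8 99e5 ff2e 7668 2bc9'

# strip spaces and upper-case the hex digits so formatting differences vanish
normalize() { echo "$1" | tr -d ' ' | tr 'a-f' 'A-F'; }

if [ "$(normalize "$expected")" = "$(normalize "$reported")" ]; then
  echo "fingerprint OK"
else
  echo "fingerprint MISMATCH - do not accept the key"
fi
```

If the normalized values differ, decline the key import and re-check your repository URL.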

b. Install Kubernetes

Note: Kubernetes releases occur often, rendering the following instructions dated rather quickly. You should refer to the official Kubernetes installation documentation if you encounter any problems with these instructions.

  1. Install Kubernetes base packages:

    • kubelet: This is the essential Kubernetes service that performs the basic functions of node management, such as the creation of pods and container deployment.
    • kubeadm: This package is responsible for bootstrapping a single-node Kubernetes cluster that is compliant with the best practices established by the Kubernetes conformance tests.
    • kubectl: This package installs the command-line tool that is used to interact with the Kubernetes cluster.

      This script configures YUM with the Kubernetes repository and installs the required packages:

       # configure YUM to access the Kubernetes repository
       cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
       [kubernetes]
       name=Kubernetes
       baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
       enabled=1
       gpgcheck=1
       repo_gpgcheck=1
       gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
       EOF
       # disable SELinux; you need to do this in order to allow containers to access
       # the host file system, which is needed for instance by pod networks
       sudo setenforce 0
       # install the packages
       sudo yum install -y kubelet kubeadm kubectl
       # enable and start the kubelet service
       sudo systemctl enable kubelet
       sudo systemctl start kubelet
  2. Configure the cgroup driver of Docker. Since you are using Docker EE as a container runtime, you need to be sure that kubelet uses the same cgroup driver as Docker. You can verify that the driver is the same with the following commands:

     docker info | grep -i cgroup
     cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    If the cgroup driver is different, modify the configuration of the kubelet to match the cgroup driver used by Docker. The flag you need to change is --cgroup-driver. For example, to switch it from systemd to cgroupfs:

     sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    After the update, you will need to restart the kubelet:

     sudo systemctl daemon-reload
     sudo systemctl restart kubelet
  3. Bootstrap the cluster and configure it to run within a single node. This is where kubeadm comes into play. To bootstrap your cluster, simply issue the following command:

     sudo kubeadm init --pod-network-cidr=192.168.0.0/16

    The additional flag, --pod-network-cidr, is required by the network plugin that you are going to add later. The value of this flag depends upon the specific network plugin that you are going to install. In this case, you will be installing Calico CNI.

  4. Copy the Kubernetes configuration into your home directory. This step allows a non-root user (the current user) to deploy services onto the cluster.

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. Install a network add-on. Kubernetes supports multiple networking plugins. For this tutorial, you will be using the Calico CNI plugin as described in the Quickstart guide.

     # install the etcd service...
     kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/etcd.yaml
     # install the role-based access control (RBAC) roles...
     kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
      # install the Calico services and daemon set
      kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml

    If the installation is successful, when you type the following command —

     kubectl get pods --all-namespaces

    — you should see an output similar to the following:

     NAMESPACE    NAME                                READY  STATUS   RESTARTS  AGE
     kube-system  calico-etcd-x2482                   1/1    Running  0         2m
     kube-system  calico-kube-controllers-6f8d4-tgb   1/1    Running  0         2m
     kube-system  calico-node-24h85                   2/2    Running  0         2m
     kube-system  etcd-jbaker-virtualbox              1/1    Running  0         6m
     kube-system  kube-apiserver                      1/1    Running  0         6m
     kube-system  kube-controller-manager             1/1    Running  0         6m
     kube-system  kube-dns-545bc4bfd4-67qqp           3/3    Running  0         5m
     kube-system  kube-proxy-8fzp2                    1/1    Running  0         5m
     kube-system  kube-scheduler                      1/1    Running  0         5m
  6. Remove the restriction on the primary node. By default, Kubernetes does not allow you to schedule pods on the primary node. To run a single-node cluster, you need to remove this restriction. You can do this by untainting the primary node with the following command:

     kubectl taint nodes --all node-role.kubernetes.io/master-

    If the command executes successfully, you should see the following message:

     node "<your-hostname>" untainted

    You can confirm that your cluster now has a node by doing the following:

     kubectl get nodes

    This command should show one entry in the table that is returned.

  7. Test your Kubernetes installation:

     kubectl run my-nginx --image=nginx --replicas=2 --port=80

    This command creates a deployment for running the NginX web server on two pods and exposing the service on port 80. If the command executes successfully, you should be able to see one deployment and two pods running as shown here:

     $ kubectl get deployments
     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
     my-nginx   2         2         2            2           15s
     $ kubectl get pods
     NAME                       READY     STATUS        RESTARTS   AGE
     my-nginx-568fcc5c7-2p22n   1/1       Running       0          20s
     my-nginx-568fcc5c7-d6j6x   1/1       Running       0          20s

    You can remove the test deployment by issuing the following command:

     kubectl delete deployment my-nginx

For more details on setting up the Kubernetes cluster with kubeadm, check out the Kubernetes documentation.
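After untainting the primary node, you can script the check that the cluster reports exactly one Ready node. This is a sketch: the sample line below stands in for the output of `kubectl get nodes --no-headers` on a live cluster, so the hostname and version are illustrative only:

```shell
# Sketch: verify that the single-node cluster reports exactly one Ready node.
# The sample line stands in for real `kubectl get nodes --no-headers` output.
nodes_output='rhel-host   Ready   master   5m   v1.11.0'

# count the lines whose STATUS column (field 2) reads "Ready"
ready_count=$(echo "$nodes_output" | awk '$2 == "Ready" { c++ } END { print c + 0 }')
echo "ready nodes: $ready_count"
```

On a real cluster you would pipe `kubectl get nodes --no-headers` into the same awk filter instead of the sample variable.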

2. Install Hyperledger Fabric and configure it for Kubernetes

We chose version 1.1.0 of Hyperledger Fabric for this tutorial given its general availability at the time of this writing. However, the methods described to achieve the goals of this tutorial are equally applicable for future releases.

a. Download tutorial material and Hyperledger Fabric

Downloading the Git repository specified in the “Tutorial material” section above equips you with all the code objects you need to run a Fabric network. Run the following set of commands to clone the repository, establish a working directory and retrieve the Fabric 1.1.0 Docker images:

git clone https://github.com/hyp0th3rmi4/hlf-k8s-custom-crypto
mkdir -p /home/hlbcadmin/Downloads/mysolution/fabric-e2e-custom/
cp -r hlf-k8s-custom-crypto/* /home/hlbcadmin/Downloads/mysolution/fabric-e2e-custom/
cd /home/hlbcadmin/Downloads/mysolution/fabric-e2e-custom/
./download-dockerimages.sh -c x86_64-1.1.0 -f x86_64-1.1.0
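Once the script finishes, it is worth confirming that the 1.1.0 images are actually in the local registry before moving on. The sketch below checks for the version tag; the sample line (including the fake image ID) stands in for real `docker images` output:

```shell
# Sketch: check that Fabric 1.1.0 images are present locally. The sample line
# (with a made-up image ID) stands in for `docker images` output after the
# download script has run.
images='hyperledger/fabric-peer    x86_64-1.1.0    abc123    4 weeks ago    187MB'

if echo "$images" | grep -q 'x86_64-1.1.0'; then
  echo "fabric 1.1.0 images present"
fi
```

On the real machine, replace the sample variable with `docker images | grep hyperledger`.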

b. Create Hyperledger Fabric manifests for Kubernetes

Hyperledger Fabric comes with preconfigured Docker Compose files that can be used to bring up a blockchain network on a local machine running Docker. To run the same network on Kubernetes, you need to generate the corresponding Kubernetes manifests. We used the sample end-to-end command-line interface configuration that’s available with the Hyperledger Fabric distribution.

You can perform this operation by using the kompose command-line tool, which automatically applies the translation for you, or perform the process manually. For each of the services that are mentioned in the Docker Compose file, you need to create two artifacts: a service and a deployment. The service and the deployment are connected by a selector label. The listing below shows an example of the translation of the fabric-zookeeper0 component into the corresponding service and deployment manifests required by Kubernetes:

# fabric-zookeeper0-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert -f docker-compose-e2e.yaml
    kompose.version: 1.12.0 (0ab07be)
  creationTimestamp: null
  labels:
    io.kompose.service: fabric-zookeeper0
  name: fabric-zookeeper0
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  - name: "2888"
    port: 2888
    targetPort: 2888
  - name: "3888"
    port: 3888
    targetPort: 3888
  selector:
    io.kompose.service: fabric-zookeeper0
status:
  loadBalancer: {}

# fabric-zookeeper0-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert -f docker-compose-e2e.yaml
    kompose.version: 1.12.0 (0ab07be)
  creationTimestamp: null
  labels:
    io.kompose.service: fabric-zookeeper0
  name: fabric-zookeeper0
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: fabric-zookeeper0
    spec:
      containers:
      - env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: server.1=zookeeper0.kopernik.ibm.org:2888:3888 server.2=zookeeper1.kopernik.ibm.org:2888:3888 server.3=zookeeper2.kopernik.ibm.org:2888:3888
        image: hyperledger/fabric-zookeeper
        name: zookeeper0
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        resources: {}
      dnsConfig:
        options:
        - name: ndots

The Docker Compose file used for the translation is docker-compose-e2e.yaml, which is available for download as an attachment to this tutorial. The network that’s defined in the file consists of the following:

  • 2 organizations with 2 peers each
  • 2 MSPs, one per organization
  • 1 orderer
  • 1 persistence layer for the ledger based on CouchDB (4 servers, 1 per peer)
  • 1 Kafka cluster of 4 nodes
  • 1 ZooKeeper cluster of 3 nodes

The full conversion of the reference configuration generates the following artifacts, which you can find in the git repository:

  • fabric-zookeeper<N>-service.yaml and fabric-zookeeper<N>-deployment.yaml for the zookeeper cluster components
  • fabric-kafka<N>-service.yaml and fabric-kafka<N>-deployment.yaml for the Kafka cluster components
  • fabric-ca1.yaml and fabric-ca2.yaml for the MSPs (service and deployment)
  • fabric-orderer.yaml for the orderer (service and deployment)
  • fabric-peer<N>-org<M>.yaml for the peers (service and deployment)
  • fabric-couchdb.yaml for the peers ledger databases (services and deployments)
  • fabric-cli-job.yaml and fabric-cli-deployment.yaml, the fabric CLI

For convenience, Kubernetes allows you to either combine multiple manifests into a single file or keep them separate. In this case, you will group together the service manifest and the deployment manifest for the core components of the blockchain network (MSP, orderer, and peers) while the corresponding manifests for the ZooKeeper and Kafka components are kept separate. Finally, you will also combine all of the manifests of the persistence layer (CouchDb servers).
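Grouping manifests this way amounts to concatenating the YAML documents with the `---` separator. A minimal sketch (the file names are illustrative stand-ins for the repository’s manifests, and only the first two lines of each manifest are shown):

```shell
# Sketch: combine a service manifest and its deployment manifest into one file
# using the YAML document separator (---), as done for the core components.
tmp=$(mktemp -d)
printf 'apiVersion: v1\nkind: Service\n' > "$tmp/zk0-service.yaml"
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > "$tmp/zk0-deployment.yaml"

# concatenate the two documents, separated by the YAML document marker
{ cat "$tmp/zk0-service.yaml"; echo '---'; cat "$tmp/zk0-deployment.yaml"; } > "$tmp/zk0.yaml"

separators=$(grep -c '^---$' "$tmp/zk0.yaml")
echo "documents separated by $separators marker(s)"
```

`kubectl apply -f` treats each `---`-separated document in the combined file as an independent resource, so a single file can create both the service and the deployment.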

c. Connect Docker and Kubernetes networks

You generated the Kubernetes manifests from setups that are meant to work natively with Docker. As a result, they rely on Docker networking to resolve hosts. When you run such setups in Kubernetes, the same containers end up on a different network, so the natural hostname resolution does not work out of the box. In particular, this is a problem for the chaincode container that is spun up by the peer: It is still created through the Docker engine (and not Kubernetes), while all the other containers in the network are deployed through Kubernetes.

In the Fabric architecture, a peer runs a smart contract (chaincode) in an isolated container environment, and a peer can achieve this by directly deploying the container through the Docker daemon’s Unix socket interface. This chaincode container needs to have a way to contact the peer so that it can be managed by it, but a DNS query to the Docker network for the peer that is running on Kubernetes cannot be resolved. You need to configure the chaincode container with the DNS server IP address of the Kubernetes cluster so that it can call home. For every peer in the network, you need to specify the environment variable CORE_VM_DOCKER_HOSTCONFIG_DNS, which is used to inject the IP address of the DNS server into the chaincode container during start-up. The code snippet below shows the configuration for one of the peers in the network (reference: fabric-peer0-org1.yaml).

      - env:
        - name: CORE_VM_DOCKER_HOSTCONFIG_DNS
          value: "" # this must be replaced with the actual IP of the
                    # Kubernetes DNS service

The code below shows how to extract the IP address of the Kubernetes DNS service:

$ kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   18d

The above approach is a valuable way of bridging traffic between the Docker container virtual overlay network and the Kubernetes network, so that name resolution between the respective network components succeeds in a discoverable and robust manner. The peer and chaincode components remain isolated and yet are tethered to each other through network connectivity that is independent of their container orchestration method and runtime environments. Given the nature of the application communication protocol between peer and chaincode, their lifecycles are tightly aligned: For example, when the server-side component is stopped, the client-side component connected to it is shut down properly without being orphaned or left hanging. This kind of network bridging also reduces the vulnerability of the Docker-in-Docker approach to launching a chaincode container from a peer.
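Extracting the ClusterIP and wiring it into the peer manifest can be scripted. This is a sketch: the sample line stands in for live `kubectl get svc` output, and the address 10.96.0.10 (the usual kubeadm default for kube-dns) is illustrative, not guaranteed for your cluster:

```shell
# Sketch: pull the ClusterIP (third column) out of the kube-dns service line,
# to be used as the value of CORE_VM_DOCKER_HOSTCONFIG_DNS. The sample line
# stands in for `kubectl get svc kube-dns -n kube-system --no-headers` output.
kube_dns_line='kube-dns   ClusterIP   10.96.0.10   <none>   53/UDP,53/TCP   18d'

dns_ip=$(echo "$kube_dns_line" | awk '{ print $3 }')
echo "CORE_VM_DOCKER_HOSTCONFIG_DNS=$dns_ip"
```

On a live cluster, substitute the real command for the sample variable and paste the resulting address into each peer's manifest before deploying.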

3. Enable custom Transport Layer Security

This section focuses on enabling Transport Layer Security for communications among the different components of the Fabric network. As previously mentioned, we wrote this tutorial because we needed to replicate a scaled-down version of a production setting in a local development environment. To that end, securing the communication within the cluster with a real PKI is an essential step.

a. Hyperledger Fabric TLS basic setup

Hyperledger Fabric has the built-in capability to enable TLS as part of the services that make up the fabric: orderers, peers, and certificate authorities (CAs). By default, these components retrieve their TLS settings from the configuration files that are built inside the corresponding images. You can find the default configuration settings for peer and orderer in the sample configuration folder of the Hyperledger Fabric repository:

  • For the peer nodes (hyperledger/fabric-peer), the default configuration settings are located in the core.yaml file.
  • For the orderer nodes (hyperledger/fabric-orderer), the default configuration settings are located in the orderer.yaml file.

These settings can be overridden by the environment variables that are passed to the containers during start-up, which map into the configuration settings that you want to change. For instance, for the peer container you can define the following:

  • CORE_PEER_TLS_ENABLED (=true|false)
  • CORE_PEER_TLS_CERT_FILE: path to the certificate file to use for the peer
  • CORE_PEER_TLS_KEY_FILE: path to the key file associated with the certificate
  • CORE_PEER_TLS_ROOTCERT_FILE: path to the root CA certificate file
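For instance, a minimal fragment of a peer Deployment manifest that applies these overrides might look like the following sketch; the container name, image tag, and certificate paths are illustrative assumptions, not values taken from the sample repository:

```yaml
# Illustrative fragment of a peer Deployment spec; names and paths are assumptions.
containers:
  - name: fabric-peer0-org1
    image: hyperledger/fabric-peer:1.4
    env:
      - name: CORE_PEER_TLS_ENABLED
        value: "true"
      - name: CORE_PEER_TLS_CERT_FILE
        value: /etc/hyperledger/crypto/peer/tls/server.crt
      - name: CORE_PEER_TLS_KEY_FILE
        value: /etc/hyperledger/crypto/peer/tls/server.key
      - name: CORE_PEER_TLS_ROOTCERT_FILE
        value: /etc/hyperledger/crypto/peer/tls/ca.crt
```

The same pattern applies to the orderer and CA containers with their respective environment variables.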

Similarly, for the orderer you can control the TLS setting by defining the following environment variables:

  • ORDERER_GENERAL_TLS_CERTIFICATE: path to the certificate file to use for the orderer
  • ORDERER_GENERAL_TLS_PRIVATEKEY: path to the key file associated with the certificate
  • ORDERER_GENERAL_TLS_ROOTCAS: path(s) to the root CA certificate file(s)

For the certificate authorities, rather than having a static file that stores the default settings, the configuration is generated dynamically when the CA container is started. You can modify these settings with the following environment variables:

  • FABRIC_CA_TLS_ENABLED (=true|false)
  • FABRIC_CA_TLS_CERTFILE: path to the certificate file to use for the CA
  • FABRIC_CA_TLS_KEYFILE: path to the key file associated with the certificate

By using these settings and properly mapping the container names to the fully qualified domain names used in the corresponding certificates, you can fully configure the Fabric components with TLS. You can find the configured network files for using transport layer security in the working directory fabric-e2e-custom.

b. Procure PKI certificates

Cryptogen is the default utility shipped with Hyperledger Fabric to create all the required certificates for the Fabric network. These certificates are self-signed, and you want to replace them with custom certificates that are signed by a real PKI. In this tutorial, you will use certificates provisioned through the IBM internal certificate authority, but the steps are essentially the same for certificates obtained from a commercial PKI such as Verisign or Symantec.

Before you start using custom TLS, you need to know how many certificates are required and where to put them. This essentially comes down to understanding how the cryptogen tool works and matching its behaviour with the provisioning of the corresponding real PKI certificates (if you choose to keep the custom-provisioned crypto material layout consistent with the cryptogen-generated one). The tool is responsible for creating the crypto material for all the identities in the network, and it is driven by the crypto-config.yaml file, which determines how many certificates and associated private keys are required. You can find the crypto-config.yaml for the sample application in the root of the fabric-e2e-custom working directory:

OrdererOrgs:
  - Name: Orderer
    Domain: kopernik.ibm.org
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    Specs:
      - Hostname: orderer

PeerOrgs:
  - Name: Org1
    Domain: org1.kopernik.ibm.org
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org2
    Domain: org2.kopernik.ibm.org
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    Template:
      Count: 2
    Users:
      Count: 1

The configuration you will use is a slight alteration of the default setup for Hyperledger Fabric: In line with its default behaviour, you create two organizations and one separate organization for the orderer nodes. Each of these organizations has a separate certificate authority (CA) that manages the certificates that represent user and system identities in that organization. You only change the names of the organizations to keep them in line with the certificates that you will be provisioning.

The crypto-config.yaml file also specifies how many peer nodes to create in each organization. This is controlled by the Template.Count parameter, which defines the number of peers to create in the organization. Without any further specification of a hostname template, the name is derived using the fabric-peer<N>-<org> pattern, where N ranges from 0 to Template.Count-1. Similar to the Template section, the Users section controls the number of user identities created for each organization. This setup creates the following identities:

  • orderer node
  • CA and administrator for the orderer organization
  • 2 peer nodes, 1 CA, 1 administrator, and 1 user for each peer organization
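With the configuration in place, the default self-signed material is typically generated with a single command, run from the directory containing crypto-config.yaml. This is a sketch, not part of the sample scripts, and assumes the cryptogen binary from the Fabric release is on your PATH:

```shell
# Generates the crypto-config directory tree described below
cryptogen generate --config=./crypto-config.yaml --output=crypto-config
```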

Besides the identities that are used at the organization level, Hyperledger Fabric also requires identities in the form of x509 certificates for the management of network connections to implement TLS.

The set of cryptographic identities described above is part of the Membership Service Provider (MSP) configuration. The MSP is the Fabric component responsible for defining the identities of participating network members and for validating and authenticating them through signature generation and verification. This information is conveniently arranged in the folder structure shown in Figure 2.

Figure 2. Basic MSP folder structure


The folder structure also highlights the existence of intermediate certificates for both the organization-level identities and the network-level ones, which sign the certificates that are used rather than relying directly on the root certificate.

Figure 3. MSP folder structure with signing material


This information is used by the different components of a Hyperledger Fabric network and does not contain any sensitive information (that is, no private keys). For those entities that need to sign or endorse transactions, two additional folders are generated: keystore and signcerts. These contain the private key and the associated signing certificate, respectively. These folders are significant for the nodes (orderer or peer) and the users. Figure 3 shows the updated folder structure for such entities.
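As a sketch of what that layout looks like on disk, the following commands recreate the skeleton of a signing entity's local MSP; the /tmp/msp-demo path is purely illustrative:

```shell
# Recreate the MSP folder skeleton that cryptogen produces for a signing
# entity (peer, orderer, or user); in a real tree these folders hold PEM files.
MSP=/tmp/msp-demo
mkdir -p "$MSP"/cacerts "$MSP"/intermediatecerts "$MSP"/admincerts \
         "$MSP"/tlscacerts "$MSP"/signcerts "$MSP"/keystore
ls "$MSP"
```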

Figure 4. cryptogen-generated folder structure


This is all the information that cryptogen generates when it automatically sets up the certificates for a Hyperledger Fabric network. With reference to the crypto-config.yaml file previously shown, cryptogen generates the folder structure shown in Figure 4. The msp folders reflect the structure described above. Those shown in bold also contain the keystore and signcerts folders. The information stored in those folders is used by the nodes to endorse transactions, and by the users to sign submitted transactions.

Figure 4 also shows the presence of a tls folder both for each of the nodes and each of the users. These folders contain the crypto material that’s used by the different entities to establish a TLS connection.

In summary, for the configuration shown in crypto-config.yaml, cryptogen creates the certificates shown in Table 1.

Table 1. Crypto material generated through cryptogen


The table shows the set of certificates that are required by any Hyperledger Fabric network to operate (ref: Blockchain rows) and those that are specifically needed to secure the communications with TLS (ref: TLS rows).

If you decide to utilize a real PKI to secure the network communications, you can use the same PKI to provision the certificates for the organizations, their components, and users. The very same certificates used to secure network communications can be used at the application layer, thus reducing the total number of certificates to be provisioned. Table 2 describes which certificates to provision and how to map them to the corresponding certificates in the folder structure.

Table 2. Certificate and key mapping into the folder structure generated by cryptogen

alt alt

This process can be automated and the script stored in the scripts/script.sh folder automatically copies all the certificates stored in the ibm-files source directory into the appropriate path under the crypto-config directory tree.
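Before (or in addition to) running that script, it is worth sanity-checking that each provisioned certificate actually matches its private key. The following sketch generates a throwaway pair so that it is self-contained; with real PKI files you would point the two openssl commands at the provisioned .crt and .key files (the /tmp paths and subject name are illustrative):

```shell
# Throwaway key pair standing in for a PKI-provisioned certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/tls.key -out /tmp/tls.crt \
    -subj "/CN=peer0.org1.kopernik.ibm.org"
# A certificate matches a key when both yield the same public key
cert_pub=$(openssl x509 -in /tmp/tls.crt -noout -pubkey)
key_pub=$(openssl pkey -in /tmp/tls.key -pubout)
[ "$cert_pub" = "$key_pub" ] && echo "certificate and key match"
```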

c. Get TLS to work on Kubernetes

The configuration settings discussed above are not sufficient to make your Kubernetes setup work, because there are fundamental differences in how service resolution is implemented in a Docker Compose network and in a Kubernetes cluster.

By default, Kubernetes defers service discovery capabilities to add-ons. As a result, the base installation of Kubernetes does not provide capabilities for services to look each other up and interact. This is not a big problem because there are several add-ons that can be used to provide this capability, one of the most popular being the DNS add-on KubeDNS. You can deploy add-ons in your cluster by using the add-on manager, or by manually deploying the service as follows:

sudo kubectl create -f <pointer-to-kube-dns.yaml>

KubeDNS is a service that lives in the kube-system namespace and is responsible for cluster-wide service resolution as well as the resolution of external DNS names via upstream name servers. When a service manifest is deployed, Kubernetes creates a corresponding DNS entry for the service in the cluster, whose fully qualified name is <service-name>.<namespace>.svc.cluster.local. This allows you to reference the service as <service-name> from any service deployed in the same namespace, and by its fully qualified name from services deployed in other namespaces of the same cluster. Because the fully qualified name of the service is automatically derived, Kubernetes does not allow dots in the service name.

Installing KubeDNS gives you a mechanism for looking up services, but you don't have complete freedom in assigning "dotted names" to the services so that they match the fully qualified names in the certificates you may want to use. This causes even the simplest Fabric network to fail out of the box: the different services have fully qualified names under the domain example.org, so when the components try to establish connections to each other, the connection fails because of the name mismatch between the service name and the corresponding certificate.

To address this problem, you can try a couple of different approaches:

  • Use certificates whose names match the service names
  • Use the Subject Alternative Name (SAN) field of the certificate to store the name of the service

Neither approach is particularly attractive. The first prevents you from actually using a real PKI with a proper domain name, while the second embeds deployment-specific details of the cluster into the certificate.
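For completeness, the second option can be illustrated with openssl. This sketch creates a throwaway self-signed certificate that carries both the PKI name and the Kubernetes service name as Subject Alternative Names; all names and paths are illustrative, and the -addext and -ext options require OpenSSL 1.1.1 or later:

```shell
# Throwaway certificate carrying the cluster service name as a SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/san.key -out /tmp/san.crt \
    -subj "/CN=peer0.org1.kopernik.ibm.org" \
    -addext "subjectAltName=DNS:peer0.org1.kopernik.ibm.org,DNS:fabric-peer0-org1.default.svc.cluster.local"
# Inspect the SAN extension
openssl x509 -in /tmp/san.crt -noout -ext subjectAltName
```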

d. CoreDNS to the rescue

A more powerful approach to service name resolution is the add-on that integrates CoreDNS. CoreDNS is a flexible DNS server implementation that can be extended with a number of plugins, giving the DNS administrator fine-grained control over how domain names are resolved and managed.

CoreDNS can be configured using a Corefile, which allows you to define multiple DNS servers, and activate and configure plugins for each of them. Here is an example of the configuration file:

.:53 {
    log
    errors
    health
    rewrite {
        …
    }
    kubernetes … {
        …
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
The listing above shows a DNS server that listens on port 53 and is authoritative for the name resolution of any domain. The server configuration block activates the log, errors, health, prometheus, proxy, cache, rewrite, and kubernetes plugins. The log, errors, and health plugins have obvious functions, but the others deserve a bit more attention. The prometheus plugin exposes the metrics of CoreDNS, and of all the plugins that support this protocol, via the /metrics endpoint on the same node on port 9153. The proxy plugin relies on the local file /etc/resolv.conf to resolve DNS names not handled elsewhere. The cache plugin retains a DNS entry for 30 seconds.

The rewrite plugin provides the CoreDNS capability that is most useful here: It enables you to define domain-name translation maps that resolve the name-mismatch problems experienced with KubeDNS. By rewriting domain names, you can reference a service by the fully qualified name that matches its certificate and translate it into the fully qualified name of the service used in the cluster. For instance, you could rewrite the expected names of the services as follows:

rewrite name peer0.org1.kopernik.ibm.org fabric-peer0-org1.default.svc.cluster.local

This configuration enables the translation of the query, but the response is still based on the original name of the service, which again results in a naming conflict with the certificate associated with the service. To achieve complete transparency, you also need to translate the returned DNS name back, so that the answer is mapped to the expected name. You can do this by adding an answer directive and grouping the two rules together:

rewrite {
  name peer0.org1.kopernik.ibm.org fabric-peer0-org1.default.svc.cluster.local
  answer fabric-peer0-org1.default.svc.cluster.local peer0.org1.kopernik.ibm.org
}

In order for CoreDNS to interact with Kubernetes, you also need to configure the Kubernetes plugin. This plugin tells CoreDNS how to process the DNS resolution request within the cluster. The setup that is needed for this use case is shown here:

kubernetes kopernik.ibm.org in-addr.arpa ip6.arpa {
   pods insecure
   fallthrough in-addr.arpa ip6.arpa
}

You can find the add-on for CoreDNS in the collection of add-ons in the Kubernetes distribution under the "dns" category. To deploy CoreDNS, you can use the following commands:

# only needed if kube-dns was previously installed
sudo kubectl delete -f <pointer-to-kube-dns.yaml>

sudo kubectl create -f <pointer-to-coredns.yaml>

Note: Since Kubernetes release 1.11, clusters created with kubeadm by default use CoreDNS for service discovery so they do not need the above configuration.

To configure CoreDNS, you need to pass the Corefile to the service you just deployed. You can do this by defining a ConfigMap for the service and deploying it to the cluster. A ConfigMap is essentially a key-value store that can be associated with any abstraction that Kubernetes uses.
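As a rough sketch (the ConfigMap name, namespace, and the kubernetes plugin zones shown here are assumptions based on common CoreDNS deployments, not the contents of the sample file), such a ConfigMap embeds the Corefile under a single data key:

```yaml
# Hypothetical sketch of a CoreDNS ConfigMap; the actual coredns-config.yaml
# in fabric-e2e-custom carries the full translation table for the network.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite {
            name peer0.org1.kopernik.ibm.org fabric-peer0-org1.default.svc.cluster.local
            answer fabric-peer0-org1.default.svc.cluster.local peer0.org1.kopernik.ibm.org
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
```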

The ConfigMap containing the Corefile configuration that provides the name translation table required to enable the connectivity for the deployed Fabric network can be found in the file coredns-config.yaml in the fabric-e2e-custom folder. You can deploy the configuration for CoreDNS as follows:

sudo kubectl create -f coredns-config.yaml

This command replaces the configuration map that was previously deployed with the service and enables CoreDNS to transparently translate domain names.

4. Launch the network and the sample application

Now execute the following script located in the working folder to deploy the network and run a set of tests to verify that the network is functional:


This script creates:

  • services and deployments for the Kafka cluster
  • services and deployments for the ZooKeeper cluster
  • services and deployments for the CouchDB databases of the peers
  • 1 service and 1 deployment for the orderer service
  • services and deployments for all the peers
  • 1 job and 1 deployment for the CLI container

The CLI container performs a set of tests to verify that your Hyperledger Fabric network is working correctly by attempting to:

  • contact the orderer service
  • create a channel named mychannel
  • let all the peers join the channel
  • install and instantiate a test chaincode
  • execute a query and invoke transactions
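Under the hood, steps like channel creation correspond to peer CLI invocations in which the TLS flags must reference the orderer's CA certificate. The following is an illustrative sketch only; the artifact paths and the $ORDERER_CA variable are assumptions, and the commands require the peer binary and a running network:

```shell
# Create the channel against the TLS-enabled orderer, then join it
peer channel create -o orderer.kopernik.ibm.org:7050 -c mychannel \
    -f ./channel-artifacts/channel.tx --tls --cafile "$ORDERER_CA"
peer channel join -b mychannel.block
```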

If the test is executed successfully, the output of the script should contain the following message at the end of its execution:

================== All GOOD, End-2-End execution completed ===================

You can find more details about the tests in the script.sh bash file located in the fabric-e2e-custom/scripts folder. This is the main entry point of the CLI container.

You have now successfully deployed a Hyperledger Fabric network on the Kubernetes cluster and verified that it is properly configured.


In this tutorial, you learned how to set up a single-node Kubernetes cluster on Red Hat Enterprise Linux and how to deploy Hyperledger Fabric on it. We also showed you how to modify the default cluster configuration, specifically the networking layer and DNS configuration, to integrate custom TLS. This is an essential requirement for enabling your cluster to use real-world PKI. As a demonstration of a fully functional setup, you deployed the command-line end-to-end application that's available with Hyperledger Fabric. Although it's quite simple, this application exercises all the basic functionalities that test your TLS configuration.

You should now be able to replicate the configuration steps required to support real-world PKI for your Hyperledger Fabric applications running on Kubernetes.