Deploy ONAP Beijing on a private cloud

As telecom companies explore ways to meet the insatiable demand for bandwidth, implementing network function virtualization (NFV) holds great promise to lower costs and, at the same time, to speed up the delivery of new services. However, making NFV a reality requires that all industry players work together to find common software-based solutions.

Open Network Automation Platform (ONAP) is an open-source project that seeks to unify not just the carrier community, but also vendors and integrators, around an automation platform for NFV. ONAP also provides the ONAP Operations Manager (OOM) to facilitate deployment of ONAP onto Kubernetes-based clouds. IBM Cloud Private is based on Kubernetes and is designed to be deployed in an enterprise data center, so running ONAP on IBM Cloud Private is a good match for a portable and scalable on-premises solution.

ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. Developers and providers of software, network, IT, and cloud use ONAP to rapidly automate new services and support complete lifecycle management.

OOM is responsible for the lifecycle management of the ONAP platform itself. It uses the open-source Kubernetes container management system as a means to manage the Docker containers that compose ONAP. The containers are hosted either directly on bare-metal servers or on virtual machines (VMs) hosted by a third-party management system.

IBM Cloud Private is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image registry, a management console, and monitoring frameworks. It offers a community edition, IBM Cloud Private-CE, a limited offering that is available at no charge and is ideal for test environments.

Learning objectives

This tutorial shows you the steps to deploy ONAP (Beijing release) on IBM Cloud Private Community Edition (with Kubernetes 1.10) using OOM. It describes the customization steps that are required to get all ONAP components up and running on IBM Cloud Private Community Edition.


Prerequisites

  • Three Ubuntu 16.04.3 servers (one master node and two worker nodes) running on bare metal or VMs.

Estimated time

Total time: approximately 2.5 hours

  • Set up IBM Cloud Private: approximately 1 hour

  • Prepare the environment for the ONAP installation: approximately 30 minutes

  • Deploy ONAP on IBM Cloud Private: approximately 1 hour

  • Apply workarounds to fix failing ONAP components: approximately 10 minutes

Set up IBM Cloud Private – Community Edition

To run ONAP on IBM Cloud Private, you need a multi-node IBM Cloud Private environment with Docker using the AUFS storage driver. This section shows you how to set up IBM Cloud Private from scratch.

  1. Prepare the environment for IBM Cloud Private

    ONAP consists of more than 150 pods, and IBM Cloud Private itself runs about 50 pods. Given the maximum number of pods allowed per node (110 by default in Kubernetes), you need a multi-node IBM Cloud Private cluster to host all the ONAP and IBM Cloud Private pods. At minimum (and for demonstration purposes), I recommend a three-node IBM Cloud Private environment running Ubuntu 16.04.3: one master and two workers.

    Follow the steps in Configuring your cluster to prepare the nodes for setting up IBM Cloud Private. At the last step (Provide Docker in your cluster), choose the option to install Docker manually, as described in the next section.
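To see why a single node is not enough, a back-of-the-envelope calculation helps. This sketch assumes the default kubelet limit of 110 pods per node and uses the approximate pod counts mentioned above:

```shell
# Rough capacity check: how many nodes' worth of pod capacity do ~200 pods need,
# assuming the default kubelet limit of 110 pods per node?
ONAP_PODS=150
ICP_PODS=50
PODS_PER_NODE=110
# Ceiling division: (total + capacity - 1) / capacity
NODES_NEEDED=$(( (ONAP_PODS + ICP_PODS + PODS_PER_NODE - 1) / PODS_PER_NODE ))
echo "$NODES_NEEDED"   # at least 2 nodes of pod capacity, hence a multi-node cluster
```

On a live cluster, you can compare this against the real per-node capacity reported by `kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods`.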

  2. Install Docker with AUFS

    IBM Cloud Private requires Docker. Although the IBM Cloud Private installer can install Docker automatically, it configures Docker with the default overlay2 storage driver. Due to limitations in OverlayFS compatibility, some ONAP components that use MySQL do not work with overlay2; they require the AUFS storage driver. Therefore, before you install IBM Cloud Private, install and configure Docker manually to use the AUFS storage driver.

    To install Docker with the AUFS storage driver, run the following commands on all the nodes (both master and workers):

     # Install linux-image-extra package for the aufs storage driver
     sudo apt-get update && sudo apt-get -y upgrade
     sudo apt-get install -y linux-image-extra-`uname -r`
     # Add repo from Docker official repository
     sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
     echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
     # Install docker
     sudo apt-get update
     sudo apt-get install -y docker-engine
     # Verify docker is up and running
     /etc/init.d/docker status
     # If docker is not up, restart it
     /etc/init.d/docker restart
     # Verify docker is using aufs storage driver
     sudo docker info
     # Verify docker is installed correctly by running the hello world image
     sudo docker run hello-world
     # Repeat all the steps above for each node in your cluster
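The `sudo docker info` step is where you confirm the driver took effect. This is a minimal sketch of what to look for, run here against a captured sample of the output rather than a live Docker daemon:

```shell
# Parse the "Storage Driver" line from `docker info`-style output.
# On a real node, replace the sample with the output of: sudo docker info
docker_info_sample='Containers: 0
Storage Driver: aufs
Logging Driver: json-file'
driver=$(printf '%s\n' "$docker_info_sample" | awk -F': ' '/^Storage Driver/ {print $2}')
echo "$driver"   # should print: aufs
```

If the driver is anything other than aufs, revisit the installation steps before proceeding to the IBM Cloud Private installation.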
  3. Install IBM Cloud Private

    Now that Docker is ready on all nodes, follow the procedures in Installing IBM Cloud Private-Community Edition to set up IBM Cloud Private with a multiple worker nodes configuration.

Prepare for ONAP installation

You can run the ONAP installation from any machine that can access the IBM Cloud Private environment. For this tutorial, you start the ONAP installation from the IBM Cloud Private master node. Before you run the ONAP installation, prepare the environment as described in the following steps.

  1. Install and upgrade the Helm client.

    The ONAP Operations Manager (OOM) uses Helm to deploy ONAP components on Kubernetes. To install and set up the Helm client on the master node, follow the procedures in Setting up the Helm CLI. This procedure installs Helm 2.7.3, the same version that is included in IBM Cloud Private.

    OOM requires Helm v2.8.2 or later. To upgrade the Helm client on the master node and the Helm server (Tiller) in IBM Cloud Private, first download Helm v2.8.2 from the Helm GitHub releases page.

     # After downloading helm v2.8.2 from the link above, unpack it
     sudo tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
     # Upgrade the Helm server (Tiller) to v2.8.2
     cd linux-amd64
     sudo mv helm /usr/local/bin/helm
     helm init --upgrade
     # Check helm version
     helm version --tls
  2. Work around the Helm --tls requirement.

    For security reasons, IBM Cloud Private requires all Helm commands to use the --tls flag. However, the helm commands in the OOM deployment scripts do not include the --tls flag. Instead of modifying the OOM scripts, wrap the helm executable by using the following steps.

     # Append helm version to the executable filename
     sudo mv /usr/local/bin/helm /usr/local/bin/helm-v282
     # Create a new helm script to call this helm-v282 executable
     sudo vi /usr/local/bin/helm
     # Add the following lines into the new /usr/local/bin/helm script
     #!/bin/bash
     if [ "$1" = "delete" ] || [ "$1" = "del" ] ||
        [ "$1" = "history" ] || [ "$1" = "hist" ] ||
        [ "$1" = "install" ] ||
        [ "$1" = "list" ] || [ "$1" = "ls" ] ||
        [ "$1" = "status" ] ||
        [ "$1" = "upgrade" ] ||
        [ "$1" = "version" ]; then
       /usr/local/bin/helm-v282 "$@" --tls
     else
       /usr/local/bin/helm-v282 "$@"
     fi
     # Add execute permission for all users who can access this file
     sudo chmod +x /usr/local/bin/helm
     # Verify --tls is appended to every helm command call
     helm version
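The same decision logic can also be expressed with a `case` statement, which some readers may find easier to audit. This equivalent sketch uses a stub function in place of the real helm-v282 binary so you can see which subcommands get --tls appended:

```shell
# Stub standing in for /usr/local/bin/helm-v282 (just echoes its arguments)
helm_v282_stub() { echo "helm-v282 $*"; }

# Same decision logic as the wrapper script, written as a case statement
helm_wrapper() {
  case "$1" in
    delete|del|history|hist|install|list|ls|status|upgrade|version)
      helm_v282_stub "$@" --tls ;;
    *)
      helm_v282_stub "$@" ;;
  esac
}

helm_wrapper version   # -> helm-v282 version --tls
helm_wrapper init      # -> helm-v282 init (no --tls appended)
```

Subcommands outside the list (such as init) pass through unchanged, which is exactly what the wrapper script above does by falling through to the plain invocation.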
  3. Install the Kubernetes CLI client.

    The ONAP Operations Manager (OOM) uses kubectl to connect to the Kubernetes cluster (in this case, IBM Cloud Private).

    To install kubectl on the master node, run the following commands:

     sudo curl -L https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
     sudo chmod +x /usr/local/bin/kubectl
  4. Configure kubectl to use service account token as access credentials.

    Get the existing secret name of the service account:

     $ kubectl get secret
     NAME                  TYPE                                  DATA      AGE
     calico-etcd-secrets   Opaque                                3         19h
     default-token-b9pfk   kubernetes.io/service-account-token   3         19h

    Write down the secret name of your service-account-token (for example, default-token-b9pfk in the previous example) and run the following command with it:

     $ kubectl config set-credentials mycluster-user --token=$(kubectl get secret <your-token-secret-name> -o jsonpath='{.data.token}' | base64 -d)
     User "mycluster-user" set.
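The jsonpath expression extracts the token field, which Kubernetes stores base64-encoded inside the secret; the `base64 -d` step decodes it into the bearer token that kubectl presents. A small sketch with a hypothetical encoded value:

```shell
# Hypothetical .data.token value as stored in a secret (base64-encoded)
secret_data="bXktc2VydmljZS1hY2NvdW50LXRva2Vu"
# Decoding yields the plain bearer token string that kubectl will send
token=$(echo "$secret_data" | base64 -d)
echo "$token"   # prints: my-service-account-token
```

A real service-account token is a much longer JWT string, but the decode step is identical.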
  5. Increase the virtual memory allocation on all worker nodes.

    Some ONAP components require a significant amount of virtual memory. You need to increase the virtual memory allocation on all IBM Cloud Private worker nodes.

    Log in to each worker node, and run the following commands:

     sudo sysctl vm.max_map_count
     # If the vm.max_map_count value is not at least 262144,
     # run the following commands
     echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
     sudo sysctl -w vm.max_map_count=262144
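The check-and-fix above can be wrapped in a small guard function. This sketch runs it against sample values instead of a live node; on a real worker, `sysctl -n vm.max_map_count` supplies the current value:

```shell
# Guard: does a given vm.max_map_count value need to be raised?
check_map_count() {
  required=262144
  if [ "$1" -lt "$required" ]; then
    echo "needs increase"
  else
    echo "ok"
  fi
}

check_map_count 65530    # typical Ubuntu default -> needs increase
check_map_count 262144   # already sufficient     -> ok
```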
  6. Set up an NFS shared directory on each worker node.

    ONAP components use a common directory, /dockerdata-nfs, for storage and data sharing. In your multi-node IBM Cloud Private environment, you must set up an NFS server to export the shared directory and mount it on all worker nodes so that the ONAP components running on different nodes can all access it.

    To create the shared directory and set up the NFS server, run the following commands on the master node:

     # Create the shared directory
     sudo mkdir -p /dockerdata-nfs
     # Install NFS kernel server
     sudo apt update
     sudo apt install -y nfs-kernel-server
     # Update /etc/exports
     echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
     # Restart NFS kernel server
     sudo service nfs-kernel-server restart
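The exports entry is compact, so it is worth unpacking: `*` allows any client to mount the share, `rw` grants read-write access, `no_root_squash` lets root inside the containers remain root on the share, and `no_subtree_check` disables subtree checking. A small sketch that splits the entry into its path and options:

```shell
# The exports entry added above
exports_line="/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)"
# The exported path is everything before the first space
path=${exports_line%% *}
# The options are the comma-separated list inside the parentheses
opts=$(printf '%s\n' "$exports_line" | sed -n 's/.*(\(.*\)).*/\1/p')
echo "$path"   # /dockerdata-nfs
echo "$opts"   # rw,no_root_squash,no_subtree_check
```

In a production environment you would normally restrict the `*` to the worker nodes' subnet rather than exporting to every client.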

    To mount the shared directory on the worker nodes, run the following commands on all worker nodes:

     # Install NFS client
     sudo apt update
     sudo apt install -y nfs-common
     # Create the directory for the mount point
     sudo mkdir /dockerdata-nfs
     sudo chmod 777 /dockerdata-nfs
     # Mount the shared directory
     sudo mount -t nfs -o proto=tcp,port=2049 <hostname-or-IP-address-of-master-node>:/dockerdata-nfs /dockerdata-nfs
     # Update /etc/fstab
     echo "<hostname-or-IP-address-of-master-node>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0" | sudo tee -a /etc/fstab

Deploy ONAP using OOM

To deploy ONAP on IBM Cloud Private, you use the continuous deployment script.

  1. Get the ONAP deployment script.

    To download the script and the OOM master release, run the following commands on the master node:

     mkdir ~/onap
     cd ~/onap
     # Get the script
     chmod +x
     # Set up the local/onap helm charts used for the ONAP master branch
     git clone https://gerrit.onap.org/r/oom
     cd oom/kubernetes
     helm serve &
     make all
     # Get ONAP parameters values file
     cd ~/onap

    The script pre-pulls all the Docker images. Because you are using a multi-node environment and running the script on the master node, pre-pulling all the images onto the master node provides little benefit. So, optionally, you can comment out the image pre-pulling code in the script.

  2. Deploy ONAP on IBM Cloud Private.

    Now you are ready to deploy the master release of ONAP by using the continuous deployment script:

     ./ -b master

    It takes about 30 minutes for the script to complete.

    However, it takes more time for all the ONAP components to come up and run. Wait another 30 minutes to an hour for the deployment to stabilize.

    To check the status of the ONAP pods on IBM Cloud Private, run the following command:

     # Check the status of all the ONAP pods after the script completes
     kubectl get pods -n onap
     # List any pods that are not in the Running state
     kubectl get pods -n onap | grep -v Running
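The `grep -v Running` filter is the quickest way to spot stragglers. Here is the same filtering applied to a captured sample listing (a live cluster would pipe real `kubectl get pods` output instead):

```shell
# Sample pod listing standing in for `kubectl get pods -n onap` output
pods='onap-aai-0                    1/1   Running            0    1h
onap-clamp-7d69d4cdd7-g4f26   1/2   CrashLoopBackOff   45   3h
onap-portal-db-config-4mcxr   0/2   Init:Error         0    3h'
# Show only the pods that are not in the Running state
printf '%s\n' "$pods" | grep -v Running
```

Any lines that survive the filter are the pods to investigate in the next section.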

Fix the failing ONAP components

At the end of the installation, if you find pods in the “Init:Error” state, check whether a pod for the same job is in the “Completed” state. If so, you can ignore those Init:Error pods.

$ kubectl get pod -n onap | grep Init:Error
onap-portal-db-config-4mcxr                      0/2       Init:Error         0          3h
onap-sdc-be-config-backend-c5ws6                 0/1       Init:Error         0          2h
onap-sdc-be-config-backend-ffvrx                 0/1       Init:Error         0          3h
onap-sdc-be-config-backend-r7shq                 0/1       Init:Error         0          3h
onap-sdc-onboarding-be-cassandra-init-fqw87      0/1       Init:Error         0          3h
vid-config-galera-dq47x                          0/1       Init:Error         0          3h

# Repeat the checking for all pods in Init:Error state
$ kubectl get pod -n onap | grep Completed | grep onap-portal-db-config
onap-portal-db-config-4j4hw                      0/2       Completed    0          3h

In most of our tests, every pod that was in the Init:Error state had a corresponding pod in the Completed state.

Next, check whether any other pods failed to run:

$ kubectl get pod -n onap | grep CrashLoopBackOff
onap-clamp-7d69d4cdd7-g4f26                      1/2       CrashLoopBackOff   45         3h

Complete the following steps to fix the ONAP CLAMP component.

The onap-clamp pod fails because it does not have enough CPU resources to execute its job. To fix it, increase its CPU request and limit, and then redeploy the component:

vi ~/onap/oom/kubernetes/clamp/values.yaml
# Change the resources cpu limits from 1 to 2 and
# the cpu requests from 10m to 1
resources:
  limits:
    cpu: 2
    memory: 1.2Gi
  requests:
    cpu: 1
    memory: 800Mi

# Update the helm chart in the local repository
cd ~/onap/oom/kubernetes
make all

# Deploy the changes
helm upgrade onap local/onap

At this point, you should have ONAP running on your IBM Cloud Private cluster.

Summary

This tutorial described the steps to deploy ONAP Beijing on an IBM Cloud Private cluster. Now you can work through the same steps in your own environment to deploy an on-premises ONAP solution and realize the cost and speed benefits of NFV.