Open Network Automation Platform (ONAP) is an open source project that provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. It enables software, network, IT, and cloud providers and developers to rapidly automate new services and support complete lifecycle management.

The ONAP Operations Manager (OOM) is responsible for life-cycle management of the ONAP platform itself. It uses the open-source Kubernetes container management system as a means to manage the Docker containers that compose ONAP, where the containers are hosted either directly on bare-metal servers or on virtual machines (VMs) hosted by a third party management system.

IBM Cloud Private is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image registry, a management console, and monitoring frameworks. The community edition, IBM Cloud Private-CE, provides a limited offering at no charge, ideal for test environments.

Learning objectives

In this tutorial, you learn the steps to deploy ONAP (the Amsterdam release) on IBM Cloud Private-CE using OOM. This tutorial also describes customizations and specific steps that are required to bring all ONAP components up and running on IBM Cloud Private-CE.

Prerequisites

You need two Ubuntu 16.04.3 servers running on bare metal or VMs.

Estimated time

Total time: approximately 3.0 hours

  • Set up IBM Cloud Private-CE: approximately 1 hour

  • Prepare the environment for ONAP installation: approximately 30 minutes

  • Deploy ONAP on IBM Cloud Private-CE: approximately 1 hour

  • Apply workarounds to fix failing ONAP components: approximately 30 minutes

Steps

1. Set up IBM Cloud Private-CE

To run ONAP on IBM Cloud Private, you need a multi-node IBM Cloud Private environment with Docker configured to use the AUFS storage driver. This section shows you how to set up IBM Cloud Private from scratch.

1.1. Prepare the environment for IBM Cloud Private

ONAP consists of more than 80 pods, and IBM Cloud Private itself runs about 50 pods. Because Kubernetes limits the number of pods that can run on a single node (110 by default), you need a multi-node IBM Cloud Private cluster to host all of the ONAP and IBM Cloud Private pods. At a minimum, and for demonstration purposes, use a 2-node IBM Cloud Private environment running Ubuntu 16.04.3: one master and one worker.

Follow the steps in Configuring your cluster to prepare the nodes for setting up IBM Cloud Private. At the last step – Provide Docker in your cluster, choose the option to install Docker manually. (See the next section.)

1.2. Install Docker with AUFS

IBM Cloud Private requires Docker. Although the IBM Cloud Private installation can install Docker automatically, it configures Docker with the default Overlay2 storage driver. However, due to the Limitations on OverlayFS compatibility, some ONAP components that use MySQL don't work with Overlay2; they require the AUFS storage driver. Therefore, install and configure Docker manually to use the AUFS storage driver before you install IBM Cloud Private.

To install Docker with the AUFS storage driver, run the following commands on all the nodes (both master and workers):

# Install linux-image-extra package for the aufs storage driver
apt-get update && apt-get -y upgrade
apt-get install linux-image-extra-`uname -r`

# Add repo from Docker official repository
sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

# Install docker
apt-get update
apt-get install docker-engine

# Verify docker is up and running
/etc/init.d/docker status

# If docker is not up, restart it
/etc/init.d/docker restart

# Verify docker is using aufs storage driver
docker info

# Verify docker is installed correctly by running the hello world image
sudo docker run hello-world

# Repeat all the steps above for each node in your cluster
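
To confirm that Docker is using the AUFS storage driver without scanning the full docker info output, you can filter for the storage driver line; this is a minimal check, assuming the installation above succeeded:

# The output should be "Storage Driver: aufs"
docker info | grep "Storage Driver"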

1.3. Install IBM Cloud Private-CE

Now that Docker is ready on all nodes, follow the procedures in Installing IBM Cloud Private-CE to set up IBM Cloud Private with a multiple worker node configuration.

2. Prepare for ONAP installation

The ONAP installation can be run from anywhere that can access the IBM Cloud Private environment. For this tutorial, launch the ONAP installation from the IBM Cloud Private master node. Before running the ONAP installation, complete the following steps to prepare the environment.

2.1. Install and configure the Kubernetes CLI client

The ONAP Operations Manager (OOM) uses kubectl to connect to the Kubernetes cluster, which in this case is IBM Cloud Private.

To install kubectl on the master node, run the following commands:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl
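
To verify that the kubectl binary installed correctly, you can check the client version; the server version is not reported until kubectl is configured against the cluster in the next steps:

# Verify the kubectl client is installed (prints the client version only)
kubectl version --client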

Configure kubectl on the master node to access your IBM Cloud Private cluster:

  1. Log in to your IBM Cloud Private cluster using the management console.

  2. Click the user icon in the upper right corner of the management console, and select Configure client.

  3. Copy the configuration commands that are displayed.

  4. Open a terminal window to the master node. Paste and then run the configuration commands that you copied in the previous step.

    kubectl on the master node is now set up to access your IBM Cloud Private cluster, but this configuration expires in 12 hours. To make the configuration last longer, configure kubectl to use a service account token, as described in the following steps.

  5. Get the existing service account secret name, with the following command:

     $ kubectl get secret
     NAME                  TYPE                                  DATA      AGE
     calico-etcd-secrets   Opaque                                3         19h
     default-token-b9pfk   kubernetes.io/service-account-token   3         19h
    

    Write down the secret name of your service-account-token (for example, default-token-b9pfk in the output above) and use it in the next step.

  6. Use the service account token as your access credentials.

    Run the following command with the service account token secret name you got from the previous step:

     $ kubectl config set-credentials admin --token=$(kubectl get secret <your-token-secret-name> -o jsonpath={.data.token} | base64 -d)
     User "admin" set.
    

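To confirm that kubectl can reach the cluster with the new credentials, you can list the cluster nodes; in the 2-node environment described in the prerequisites, you should see one master and one worker (a minimal sanity check):

# Both nodes should show a STATUS of Ready
kubectl get nodes
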
2.2. Install Helm

OOM uses Helm to deploy ONAP on Kubernetes.

To install Helm on the master node, run the following commands:

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.6.0-linux-amd64.tar.gz

tar -zxvf helm-v2.6.0-linux-amd64.tar.gz

sudo mv linux-amd64/helm /usr/local/bin/helm

helm version
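
Note that Tiller, the Helm server-side component, is not yet running on the cluster at this point, so helm version reports an error for the server side. To check only the client, you can run the following:

# Check the Helm client version only
helm version --client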

2.3. Increase the virtual memory allocation on all worker nodes

Some ONAP components require a significant amount of virtual memory, so you must increase the virtual memory allocation on all IBM Cloud Private worker nodes.

Log in to each worker node, and run the following commands:

sudo echo "vm.max_map_count=262144" >> /etc/sysctl.conf

sudo sysctl -w vm.max_map_count=262144
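
To confirm that the new setting is active on a worker node, you can query it back; the expected output is shown in the comment:

# Expected output: vm.max_map_count = 262144
sysctl vm.max_map_count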

2.4. Set up an NFS shared directory on each worker node

ONAP components use a common directory called /dockerdata-nfs as storage and to share data. In your multi-node IBM Cloud Private environment, you need to set up an NFS server to export the shared directory and mount it on all worker nodes so that the ONAP components running on different nodes can all access it.

To create the shared directory and set up the NFS server, run the following commands on the master node:

# Create the shared directory
sudo mkdir -p /dockerdata-nfs

# Install NFS kernel server
sudo apt update
sudo apt install nfs-kernel-server

# Update /etc/exports
sudo echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" >> /etc/exports

# Restart NFS kernel server
sudo service nfs-kernel-server restart
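
To confirm that the directory is exported before you move on to the worker nodes, you can list the active exports on the master (exportfs is installed with nfs-kernel-server):

# The output should include the /dockerdata-nfs export
sudo exportfs -v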

To mount the shared directory on the worker nodes, run the following commands on all worker nodes:

# Install NFS client
sudo apt update
sudo apt install nfs-common -y

# Create the directory for the mount point
sudo mkdir /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs

# Mount the shared directory
sudo mount -t nfs -o proto=tcp,port=2049 <hostname-or-IP-address-of-master-node>:/dockerdata-nfs /dockerdata-nfs

# Update /etc/fstab
sudo echo "<hostname-or-IP-address-of-master-node>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0" >> /etc/fstab

3. Deploy ONAP using OOM

To deploy ONAP on IBM Cloud Private, you use the continuous deployment cd.sh script.

3.1. Get the ONAP deployment script

To download the cd.sh script and the OOM Amsterdam release, run the following commands on the master node:

mkdir onap
cd onap
curl https://raw.githubusercontent.com/obrienlabs/onap-root/master/cd.sh > cd.sh
chmod +x ./cd.sh
git clone -b amsterdam http://gerrit.onap.org/r/oom
cp oom/kubernetes/config/onap-parameters-sample.yaml onap-parameters.yaml

Currently, the DCAE component of ONAP runs only as OpenStack VMs; it has not yet been ported to containers. For this tutorial, you disable DCAE in the installation by setting DEPLOY_DCAE to false in onap-parameters.yaml:

sed -i 's/DEPLOY_DCAE: \"true\"/DEPLOY_DCAE: \"false\"/' onap-parameters.yaml
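
To confirm the change before you start the deployment, you can check the parameter in onap-parameters.yaml; the expected output is shown in the comment:

# Expected output: DEPLOY_DCAE: "false"
grep DEPLOY_DCAE onap-parameters.yaml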

The cd.sh script pre-pulls all the Docker images. Because you are using a multi-node environment and running cd.sh on the master node, pre-pulling all the images onto the master node provides little benefit. Optionally, you can comment out the image pre-pulling code in cd.sh.

3.2. Deploy ONAP on IBM Cloud Private

Now, you are ready to deploy the Amsterdam release of ONAP using cd.sh:

./cd.sh -b amsterdam

The script takes about 30 minutes to complete. However, it takes additional time for all the ONAP components to come up; allow another 30 minutes to an hour for the deployment to stabilize. To check the status of the ONAP pods on IBM Cloud Private, run the following commands:

# Check the status of the config pod while the script is running
kubectl describe pod config -n onap

# Check the status of all the ONAP pods after the script completes
kubectl get pods --all-namespaces | grep onap
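
If you want a rough progress indicator while you wait, you can count the ONAP pods that are not yet in the Running state (a simple heuristic; the count should approach zero as the deployment stabilizes):

# Count ONAP pods that are not yet Running
kubectl get pods --all-namespaces | grep onap | grep -v Running | wc -l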

4. Fix the failing ONAP components

At the end of the installation, the pods for a few ONAP components are not in the Running state:

$ kubectl get pod --all-namespaces | grep 0/
onap-aaf              aaf-3751521006-dgfbp                                      0/1       Running            0          16h
onap-kube2msb         kube2msb-registrator-609107926-rdpm9                      0/1       CrashLoopBackOff   166        16h
onap-portal           vnc-portal-215252621-tdp8h                                0/1       CrashLoopBackOff   167        16h

Follow the steps below to fix them.

4.1. Fix the ONAP kube2msb component

The ONAP kube2msb component needs a token to connect to Kubernetes to check on other ONAP components. A sample token is hardcoded in the OOM kube2msb-registrator deployment code. You need to replace it with a valid token in your cluster.

On the master node where you run cd.sh, run the following commands to update the kubeMasterAuthToken in the kube2msb-registrator deployment with the same service account token that you used in step 2.1, and then redeploy the kube2msb component.

# Copy the token of the admin user from kube config
vi ~/.kube/config

# Replace the value of kubeMasterAuthToken with the token you copied from the ~/.kube/config file in the previous step
vi oom/kubernetes/kube2msb/values.yaml

# Delete the failing kube2msb-registrator deployment
oom/kubernetes/oneclick/deleteAll.bash -n onap -a kube2msb

# Wait until the kube2msb-registrator deployment is deleted
kubectl get pod -n onap-kube2msb

# Launch kube2msb-registrator deployment with the correct kubeMasterAuthToken
oom/kubernetes/oneclick/createAll.bash -n onap -a kube2msb -l oom/kubernetes

# Verify the kube2msb-registrator deployment is running
kubectl get pod -n onap-kube2msb
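
If you prefer not to edit values.yaml by hand, the token update can also be scripted. The following sketch assumes the token is stored in values.yaml as a kubeMasterAuthToken: <value> entry and reuses the service account token secret name from step 2.1:

# Extract the service account token (replace <your-token-secret-name> with the name from step 2.1)
TOKEN=$(kubectl get secret <your-token-secret-name> -o jsonpath={.data.token} | base64 -d)

# Update the kubeMasterAuthToken entry in place (assumes a "kubeMasterAuthToken: <value>" line)
sed -i "s|kubeMasterAuthToken:.*|kubeMasterAuthToken: $TOKEN|" oom/kubernetes/kube2msb/values.yaml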

4.2. Fix the ONAP portal component

The Amsterdam version of the OOM portal deployment code has problems that have been fixed in the master branch. To fix the problem, replace the OOM portal deployment code with the code from the master branch, and then redeploy the portal component:

# Delete all the deployments in the portal component
oom/kubernetes/oneclick/deleteAll.bash -n onap -a portal

# Wait until all portal deployments are deleted
kubectl get pod -n onap-portal

# Get the OOM Portal code from the master branch
git clone http://gerrit.onap.org/r/oom /tmp/oom
mv oom/kubernetes/portal oom/kubernetes/portal-Amsterdam
cp -rf /tmp/oom/kubernetes/portal oom/kubernetes
rm -rf /tmp/oom

# Re-create all the deployments in the portal component
oom/kubernetes/oneclick/createAll.bash -n onap -a portal -l oom/kubernetes

# Verify all deployments in the portal component are running
kubectl get pod -n onap-portal

4.3. Fix the ONAP aai component

OOM has a hardcoded cluster IP address for the aai service, which might not be valid in your IBM Cloud Private cluster. If you hit this issue, the onap-aai pods don't exist on the system at all. To verify, run the following command:

kubectl get pod -n onap-aai

If there is no onap-aai pod, update the hardcoded aaiServiceClusterIp for the aai service with a valid IP address in your cluster, and redeploy the onap-aai and onap-policy components:

# Delete all the deployments in the aai and policy components
oom/kubernetes/oneclick/deleteAll.bash -n onap -a aai
oom/kubernetes/oneclick/deleteAll.bash -n onap -a policy

# Wait until all aai and policy deployments are deleted
kubectl get pod -n onap-aai
kubectl get pod -n onap-policy

# Find the service cluster IP range
grep service_cluster_ip_range <your-ICP-installation-directory>/cluster/config.yaml

# List the existing services on the cluster
# and choose an IP address that is not used by any existing service for the aai service
kubectl get svc --all-namespaces

# Replace the value of aaiServiceClusterIp with your choice of IP address for aai
vi oom/kubernetes/aai/values.yaml

# Replace the value of aaiServiceClusterIp with the same IP address you set above for policy
vi oom/kubernetes/policy/values.yaml

# Create all the deployments in the aai and policy components
oom/kubernetes/oneclick/createAll.bash -n onap -a aai -l oom/kubernetes
oom/kubernetes/oneclick/createAll.bash -n onap -a policy -l oom/kubernetes

# Verify all deployments in the aai and policy components are running
kubectl get pod -n onap-aai
kubectl get pod -n onap-policy
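
If you prefer not to edit the two values.yaml files by hand, the IP update can also be scripted. The following sketch assumes each file contains an aaiServiceClusterIp: <value> entry; replace the placeholder with the unused cluster IP address you chose above:

# Set the cluster IP you chose for the aai service (placeholder shown; use your own value)
AAI_IP=<your-chosen-cluster-IP>

# Update the aaiServiceClusterIp entry in both files
sed -i "s|aaiServiceClusterIp:.*|aaiServiceClusterIp: $AAI_IP|" oom/kubernetes/aai/values.yaml
sed -i "s|aaiServiceClusterIp:.*|aaiServiceClusterIp: $AAI_IP|" oom/kubernetes/policy/values.yaml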

4.4. Understand the issue with the ONAP aaf component

There is a known issue where the ONAP aaf component container cannot start. The problem is caused by a missing Java class file in the container image.
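
If you want to confirm that your aaf pod is hitting this issue, you can inspect the failing container's log to see why it does not become ready; the pod name (aaf-3751521006-dgfbp in the example output at the start of this section) will differ in your cluster:

# Find the aaf pod name, then check its log for the startup failure
kubectl get pod -n onap-aaf
kubectl logs <your-aaf-pod-name> -n onap-aaf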

Monitor the reported issue for a future solution.

Summary

After completing the steps in this tutorial, you should have ONAP (the Amsterdam release), except for the DCAE and AAF components, running on your IBM Cloud Private cluster.