
Building and deploying a network service in a 5G network for your edge applications to run on

In this edge computing series, we’ve explored edge computing architectures and use cases to help enterprises understand how they might benefit from the emerging technologies of edge computing and 5G. Because 5G is core to the businesses of connectivity, telecommunications companies are investing heavily in edge computing as a key pillar for their overall 5G rollout.

In Part 1, we showed how edge computing is relevant to the challenges faced by many industries, but especially the telecommunications industry. We also discussed the three key layers of an edge computing architecture: the device edge, local edge (which includes the application layer and the network layer), and cloud edge.

In Part 2, we explored the application layer and device layer in greater detail and discussed the tools needed to implement those two layers. Containerized applications are deployed and managed on the application layer. However, the underlying network layer needs to be available to run these applications at the edge.

This third article will cover the network layer. It is essential that you consider the network layer when you create an edge solution. You especially need to understand how the network layer integrates with the application layer.

In this final part in our series, we will discuss the underlying components of the network layer and how you can orchestrate, manage, and monitor the network components. Finally, we will show you how all the different edge computing layers come together to provide an integrated edge solution.

Understanding the network layer of an edge computing architecture

The network layer includes the network components, such as routers and switches, that are needed to run the local edge and manage the network at the edge. These components are mostly virtualized or containerized.

The network layer is virtualized because managing physical network devices at the edge is a very complex task. Some of the key components of the network layer include:

  • Network function components (xNFs), which can be virtualized (vNFs) or containerized (cNFs)
  • Virtualized infrastructure manager (VIM), which manages the infrastructure on which the xNFs run
  • Management and orchestration (MANO) and monitoring components, which are used to manage and monitor the xNFs deployed on the network
  • Continuous integration and continuous deployment (CI/CD) pipeline, which manages xNFs on the VIM using the MANO components

Creating a 5G Network Slice

We will describe how we built and deployed a network service running on the network layer. We will provide a high-level overview of how a network slice can be created on one of the xNFs deployed in a 5G network. Please note that the full creation of a slice is quite complex; we show only a small portion of the process to illustrate a service that can run at the edge.

A key feature of 5G technology is the ability to create network slices that run multiple logical networks as virtually independent operations over shared physical infrastructure. Network slices offer operators the flexibility to allocate speed, capacity, and coverage in logical slices according to the demands of each use case by balancing the disparate requirements such as availability/reliability, bandwidth, connectivity, cost, elasticity, and latency.

In our use case, a 5G network slice is provisioned to provide the low-latency, high-availability network service at the edge that the worker safety applications need to perform well.

The following figure shows how network slices are dedicated to different kinds of edge applications.

how network slices are dedicated to different kinds of edge applications

To implement a network slice, at a high level, you need to do these things:

  1. Identify the network function components (xNFs) that will form the building blocks of your 5G network
  2. Set up the network function virtualization infrastructure (NFVi) with virtual infrastructure managers (VIMs).
  3. Create a CI/CD pipeline with tools like Jenkins and Gogs to manage onboarding and testing of xNFs and network services.
  4. Onboard xNFs on an orchestrator platform like IBM Agile Lifecycle Manager, and create network service designs needed for the 5G network slice.
  5. Test the deployment of your network slice service using your orchestrator (IBM Agile Lifecycle Manager) to target the VIM environment.
  6. Configure your operations platform using tools like IBM Netcool Operations Insights and IBM Netcool Agile Service Manager by creating observer jobs to monitor network events and the network topology.

Step 1: Identify your network function components (xNFs)

The key components of the network layer are xNFs. xNFs are either vNFs (virtual network functions) or cNFs (containerized network functions). Most current xNFs are virtualized, but future xNFs will primarily be containerized. It is therefore essential that the edge network layer supports both until there are sufficient cNFs to run the network. Examples of xNFs include firewalls, routers and gateways. The xNFs form the building blocks of your 5G network.

The key xNFs that are used in a 5G network include:

  • Network Core: Evolved Packet Core and 5G Core. Metaswitch Fusion Core is a cloud-native function that provides complete 5G Core and Service Based Architecture functions with 4G-5G interworking. Metaswitch Fusion Core comprises four key 5G technical areas: The user plane, control plane, service-based architecture, and management.
  • Radio Access Network. Altiostar virtual RAN follows the Open RAN architecture. Radio network management is split into a Central Unit (CU) and a Distributed Unit (DU), which are deployed as vNFs on OpenStack.
  • IP Multimedia Subsystem (IMS) Core. Metaswitch Clearwater Core is deployed as a virtualized network function (vNF) for deployments using OpenStack. Clearwater Core comprises the key IMS elements necessary to make a call:

    • Interrogating Call Session Control Function (I-CSCF)
    • Serving Call Session Control Function (S-CSCF)
    • Breakout Gateway Control Function (BGCF), along with an offline charging trigger function (CTF) and a Home Subscriber System (HSS) Interface.
  • Transport layer. Juniper’s virtual SRX (vSRX) is deployed as a virtual machine at the edge. It provides scalable switching, routing, and firewall security across private, public, and hybrid clouds. This creates the transport network that connects the 5G network components in our deployment.

All the above xNFs are needed to create a network slice. The high-level flow through the cellular network is:

  1. The data comes through the vRAN.
  2. The data is sent through the transport layer to the 5G core.
  3. The data is then sent to the IMS for transmission to the end point.

We will use this flow to illustrate how a network service can be created in the network layer and how the services will be chained. However, before we do that, we need to determine the VIM on which these xNFs should run.

Step 2: Set up the Network Function Virtualization Infrastructure (NFVi) and Virtual Infrastructure Manager (VIM)

Virtual infrastructure managers (VIMs) like VMware vCenter and OpenStack enable users to deploy virtual machines (VMs), size them, put them in certain network topologies, and more. In VMware vCenter, the hosts where the VMs are running have an operating system called ESXi. This kind of operating system that runs on a bare-metal host is called a type 1 hypervisor. OpenStack manages hosts that use the KVM hypervisor, which is not an operating system in itself but an additional capability of Linux.

While we won’t delve too deeply into the intricacies of containers and their various engines (Docker, cri-o, rkt, and so on), it is worth mentioning that containers generally run as single components that can be put together to form bigger applications. The process of making containers interact with each other in a favorable manner can be viewed as container orchestration, which is where Kubernetes comes in. Kubernetes allows for easy management of containers across clusters which can span multiple physical or virtual machines. To learn more about containers and building containerized applications, see the getting started guide on IBM Developer.

OpenShift takes base Kubernetes and extends it to the enterprise level by adding security and DevOps capabilities. Users are given restricted access to certain namespaces or projects based on cluster roles and other controls. Jenkins can be used to deploy applications directly from GitHub to a local OpenShift cluster in just a few clicks. These services can be added on base Kubernetes, but it is helpful that OpenShift provides them and more by default.
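A cNF is deployed on Kubernetes or OpenShift like any other containerized workload, so a minimal deployment manifest gives a feel for what this kind of VIM manages. The following sketch is purely illustrative; the image name and labels are placeholders, not an actual cNF package:

```yaml
# Illustrative Kubernetes Deployment for a containerized network function (cNF);
# the image reference is a placeholder, not a real cNF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-cnf
spec:
  replicas: 2                      # run two instances for availability
  selector:
    matchLabels:
      app: example-cnf
  template:
    metadata:
      labels:
        app: example-cnf
    spec:
      containers:
      - name: example-cnf
        image: registry.example.com/example-cnf:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

In practice, the orchestration and MANO tooling described later generates or applies manifests like this on the target VIM rather than a person doing it by hand.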

In our use case, we are using OpenShift and OpenStack as the VIM to manage the network layer and deploy components of our 5G network slice on the 5G core xNF.

Step 3: Create a CI/CD pipeline

In this step, you need to create a CI/CD pipeline with tools like Jenkins and Gogs to manage the onboarding and testing of xNFs and network services.

Let’s dig into how we can use the CI/CD hub to process and automate the DevOps tasks for our 5G Core network component. CI/CD is a practice used to push development updates quickly and safely. With the CI/CD hub, we can create pipelines in the Jenkins component so that every time an update is pushed, it triggers a set of tasks that help ensure our changes are correctly packaged and deployed. This can be anything from how to package an xNF to how it should be tested, using behavior tests, before it is deployed. The CI/CD pipeline can also monitor and change xNFs, as well as report and resolve issues that are discovered.

These tools are in the CI/CD Hub:

  • Gogs: Lightweight self-hosted Git service
  • Nexus: Artifact repository manager
  • Jenkins: Automation server that enables the CI/CD process
  • Openldap: Open-source implementation of the Lightweight Directory Access Protocol
  • Docker Registry: Registry for hosting Docker images
  • Nginx Ingress: Ingress controller to support accessing some services with Ingress
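To get a feel for what the hub contains, some of its core services could be stood up locally with a Compose file like the following sketch. This is illustrative only; the actual CI/CD Hub is installed differently, and the ports shown are assumptions:

```yaml
# Illustrative sketch of a few CI/CD Hub services; not the actual hub installer
version: "3"
services:
  gogs:                        # lightweight self-hosted Git service
    image: gogs/gogs
    ports:
      - "3000:3000"
  jenkins:                     # automation server that drives the pipelines
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
  registry:                    # Docker image registry
    image: registry:2
    ports:
      - "5000:5000"
```

The real hub also includes Nexus, OpenLDAP, and an NGINX ingress controller, which are omitted here for brevity.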

After building the CI/CD hub, we use it to onboard each of the xNFs in our network layer. In this article, we’ll step through how to do this for the 5G Core xNF.

We start with a network engineer who loads the software. With the CI/CD DevOps tooling available in the CI/CD Hub, the engineer can package, test, and finally publish the xNF so that it is available in an xNF catalog.

From here, a service designer can take over. The service designer can use the xNF catalog to create a service by using the CI/CD hub. Again, the designer will use these tools to package, test, and finally publish the new service onto the service catalog.

Now, let’s look at the process on a more technical level. It is Day 0 in the development process, and the developers are still working on packaging the xNF component. The developers are continually updating the package so that it can be deployed and made available in the catalog, where the designer can use it to create a service.

Developers push the updates to the 5G Core package to the Gogs repo.

Screen capture of Gogs repository for 5G Core assembly package

When the developers push updates to the 5G Core package to the Gogs repo, a webhook triggers the Jenkins pipeline that was created.

Screen capture of Jenkins pipeline view for 5G Core

From here, Jenkins performs the testing and all the tasks that are defined in our pipeline. For example, a key task loads the assembly into IBM Agile Lifecycle Manager and runs a behavior test to ensure that it still works and can be used within our catalog. The assembly is then available so that the designer can package it as a service.

Step 4: Onboarding and managing the xNF components that are needed for the network slice

To onboard and manage the xNF components, we use the following MANO and operations products:

  • IBM Agile Lifecycle Manager enables automated operations by managing the end-to-end lifecycle of virtual network services, from release management of third-party xNF software packages through to the continuous orchestration of running xNF and service instances. IBM Agile Lifecycle Manager is used to manage network orchestration across data centers.

  • IBM Netcool Operations Insights provides a consolidated view of events across local, cloud, and hybrid environments and delivers actionable insight into the performance of services and their associated dynamic network and IT infrastructures.

  • IBM Netcool Agile Service Manager provides visualization of complex network topologies in real-time, updated dynamically or on-demand, allowing further investigation of events, incidents and performance.

We have identified the xNFs needed to build the 5G network slice service, and we have a CI/CD pipeline in place. Now we need to complete the following two steps before the network slice service can be finalized and added to a service catalog:

  • Onboard all xNFs onto our MANO platform and design the network services using those xNFs.

  • Deploy those services onto OpenStack and OpenShift.

Example of onboarding and deploying an xNF using IBM Agile Lifecycle Manager

To onboard the xNF components, we wrap the xNF software components and push them to IBM Agile Lifecycle Manager’s resource repository. This step wraps third-party xNF software into agile service building blocks that can be tested individually for performance and to reduce errors that need manual intervention in production. We use the Ansible resource manager for automation, so we create Ansible playbooks to build our xNF packages.

To create the Ansible playbook, we start by creating a descriptor file (see part 1 in the following figure). Descriptor files define the input properties of an xNF service and the list of lifecycle events. Then, we create scripts for lifecycle events such as Install, Start, Stop, Configure, and Integrity check. These scripts are placed inside the lifecycle folder. Roles are used as building blocks for these lifecycle events. The example below (part 2 in the following figure) shows the Install lifecycle script of 5G Core, which uses the createinstance role. The scripts for roles go in the roles folder and can have multiple tasks defined in them. The example of the createinstance role below (part 3 in the following figure) contains two tasks defined to log in to Red Hat OpenShift and install the Helm release of 5G Core.

Key scripts inside an Ansible playbook
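As a rough sketch of part 3 in the figure above, the createinstance role’s tasks file might look like the following. This is illustrative only; the variable names, paths, and command arguments are assumptions, not the actual package contents:

```yaml
# roles/createinstance/tasks/main.yml -- illustrative sketch only;
# variable names and arguments are assumptions
- name: Log in to Red Hat OpenShift
  command: "oc login {{ openshift_api_url }} --token={{ openshift_token }}"

- name: Install the Helm release of 5G Core
  command: "helm install {{ release_name }} {{ chart_path }} --namespace {{ target_namespace }}"
```

The Install lifecycle script (part 2) would then simply include this role, and the descriptor file (part 1) would declare properties such as `openshift_api_url` as service inputs.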

After the xNF package is complete, it can be onboarded to IBM Agile Lifecycle Manager using its command line tool lmctl by specifying the target resource manager instance. For example: lmctl project push <target_alm> --armname <resource_mngr>.

Now, the xNF should be onboarded and available as a resource on IBM Agile Lifecycle Manager. The following figure shows a logical view of the properties governed by the assembly descriptor once onboarded.

Logical view of properties

After xNFs software packages are tested and onboarded as available packages, the next step is to create service designs using one or more xNFs. These xNFs can be taken from multiple resource managers, chained together by defining interdependent properties, and configured to be deployed across multiple data centers. Service design also includes definitions of relationships between xNFs and their intent-based lifecycle events.

The IBM Agile Lifecycle Manager service designer can be used to chain multiple xNFs to create a service. To create a network slice service, you chain the 5G core, IMS, and Juniper xNF assemblies built on the xNFs identified in Step 1.

Network slice service with chained 5G core, IMS, and xNF assemblies
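Conceptually, the chained service design shown above can itself be expressed as an assembly descriptor that composes the individual xNF assemblies. The following sketch is an illustration only; the assembly names, versions, and property schema are assumptions, not the actual IBM Agile Lifecycle Manager descriptors from our deployment:

```yaml
# Illustrative network slice service design; names and schema are assumptions
name: assembly::network-slice::1.0
description: Network slice service chaining 5G core, IMS, and transport xNFs
properties:
  deployment_location:
    type: string
composition:
  5g-core:
    type: assembly::5g-core::1.0
    properties:
      deployment_location:
        value: ${deployment_location}
  ims:
    type: assembly::clearwater-ims::1.0
    properties:
      deployment_location:
        value: ${deployment_location}
  transport:
    type: assembly::vsrx::1.0
    properties:
      deployment_location:
        value: ${deployment_location}
```

The key idea is that the service-level descriptor passes shared properties, such as the deployment location, down to each chained xNF assembly.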

Step 5: Test the deployment of xNF components to OpenStack and OpenShift to provision the network slice

Now, to test the deployment of the network slice, we will use IBM Agile Lifecycle Manager as the MANO engine. We will configure OpenStack and Red Hat OpenShift as data center locations where our xNFs can be deployed. Both OpenStack and OpenShift provide VIM capabilities to manage the virtual infrastructure where vNFs (to OpenStack) and cNFs (to OpenShift) can be deployed.

A deployment location is defined and configured using Ansible resource manager’s APIs. An example of location properties for an OpenStack tenant:

location properties
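The location properties shown above largely correspond to standard OpenStack authentication settings for the tenant. The following sketch shows the kind of values involved; the exact property names used by the Ansible resource manager are assumptions here:

```yaml
# Illustrative OpenStack deployment location properties; property names are assumptions
os_auth_url: https://openstack.example.com:5000/v3   # Keystone identity endpoint
os_project_name: edge-5g-slice                       # OpenStack tenant/project
os_user_domain_name: Default
os_username: alm-deployer
os_password: "<secret>"                              # supplied securely, never in plain text
```

A similar location definition, pointing at the cluster API endpoint and credentials, is configured for the Red Hat OpenShift target.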

These deployment locations can be used as a parameter in a service design to define where particular xNFs will be deployed. The following screenshot highlights the deployment location input configuration in the service design for each xNF component of our 5G Slice service: 5G Core, vRAN, IMS, and Juniper xNF.

Network slice service with chained 5G core, IMS, and deployment locations highlighted

The following screenshot shows an example of deployment location input while provisioning a new 5G network slice service.

Deployment location for 5g network slice service

Step 6: Configure your operations platform

The last step in creating a 5G network slice is to create observers, use the event viewer to monitor and manage events generated by 5G core, and finally look at the topology view of the 5G Slice using IBM Netcool Operations Insights and IBM Netcool Agile Service Manager.

An observer is a service that extracts resource information and inserts it into the IBM Agile Service Manager database. Before observers can load data, you must first define and then run observer jobs. In our use case, we are using observers to monitor events from IBM Agile Lifecycle Manager, Red Hat OpenShift, and OpenStack. For the purpose of this article, the steps below show how to configure observer jobs using the Observer Configuration UI for IBM Agile Lifecycle Manager that we are using to onboard and provision the 5G Core xNF.

  1. Log on to the DASH web application of IBM Netcool Agile Service Manager using your user credentials.
  2. Click the Administration drop-down menu.
  3. Under the Agile Service Management section, click Observer Jobs. This will display the Observer Configuration UI which displays the existing observer jobs.
  4. Click Add a new job to display all jobs that can be configured. A number of options are available including IBM Agile Lifecycle Manager, OpenStack, and Kubernetes instances as shown below.

Defining observer jobs in IBM Netcool Agile Service Manager

Depending on the type, each observer job requires slightly different parameters. In our example, to configure an observer job, you need to provide a unique ID for the job, the IBM Agile Lifecycle Manager instance name, the Topic (Kafka topic), the Group ID (Kafka group ID), and connection details such as the Kafka host and port to be used.

Defining observer jobs in IBM Netcool Agile Service Manager
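Collected together, the observer job parameters described above amount to a small set of values like the following sketch. The field names and values are illustrative assumptions, not the exact form fields in the Observer Configuration UI:

```yaml
# Illustrative IBM Agile Lifecycle Manager observer job parameters;
# field names and values are assumptions
unique_id: alm-observer-5gcore      # unique ID for this observer job
alm_instance: alm-production        # IBM Agile Lifecycle Manager instance name
topic: alm.notifications            # Kafka topic to consume from
group_id: asm-observer-group        # Kafka consumer group ID
kafka_host: kafka.example.com       # Kafka connection details
kafka_port: 9092
```

Once the job runs, the observer consumes these events and inserts the resource information into the IBM Agile Service Manager database.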

Monitoring events in the Event Viewer

After the observer jobs have been configured and run, events will be populated in the Event Viewer. The following figure shows the events for the 5G Core xNF component. You can use the event viewer to monitor and manage events through an interactive interface. Information about alerts is displayed in the event list according to filters and views.

The summary toolbar contains color-coded severity indicator icons, one for each defined severity level. Next to each icon is a number that indicates the number of events with that severity. The events area contains a table of events and their characteristics. Each row contains the characteristics of a single event.

Event viewer

Viewing the 5G network slice topology in the Topology Viewer

IBM Netcool Agile Service Manager is used to visualize topology data. To open the topology viewer, under the Agile Service Management subheading, from the Incident drop-down menu, choose Topology Viewer.

To view a topology, you need to define a seed resource on which to base your view. Then, choose the levels of networked resources around the seed that you wish to display, and click the Render button to render the view. You can then further expand or analyze the displayed topology in real time or compare it to previous versions within a historical time window. The following figure shows the topology view of our 5G Slice Network service.

Topology view of the 5G Slice Network service

Integrating all the layers of our edge computing architecture

We have discussed three edge layers in this series. The device layer has devices that can run small programs and transmit the required data to the application layer. The application layer has greater compute resources and is therefore able to perform further analysis and computation on the data provided. In some cases, the application layer may need to interact with systems in the cloud or data center. All these layers communicate through the network layer.

In our worker safety use case, the models used to recognize an object were deployed on the device layer. When an object is detected, the video stream is sent to the application layer for further analysis, and the stream is transmitted through the network layer. Network traffic can vary, as multiple video streams flow through the network and devices keep getting added and removed. Some of the video streams are not as critical, since they only show how the assembly parts are moving across the conveyor belt, but anything related to worker safety is critical, and alerts need to be raised immediately. All of this is managed by the network layer.

The worker safety applications deployed to the edge devices use the network layer available at the edge. Now let’s assume network performance deteriorates as the video workload in the factory grows due to an increase in the components being manufactured. In addition, new devices are added to the network to handle these workloads. This negatively impacts the video analytics for the worker safety application.

The solution for this is to create a network slice for the worker safety application. You create a network slice, and all devices that are needed to monitor safety are dedicated to this slice. This means that these devices get much higher throughput, ensuring that safety is maintained in the factory. The other systems will see a deterioration in performance, but they are not critical, so there will be no or minimal impact.
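The requirements driving such a slice could be captured as a simple intent along the following lines. This is an illustrative sketch only; the field names, values, and device identifiers are assumptions, not an actual slice template from our deployment:

```yaml
# Illustrative slice intent for the worker safety application; names and values assumed
slice_name: worker-safety
requirements:
  latency_ms_max: 10         # low latency for real-time video analytics
  availability: "99.999%"    # high availability for safety-critical alerts
  bandwidth_mbps_min: 100    # sustained capacity for multiple video streams
dedicated_devices:
  - safety-camera-01
  - safety-camera-02
```

An intent like this maps directly onto the slice characteristics discussed earlier: latency, availability, and bandwidth allocated per use case over shared physical infrastructure.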

To summarize, the full end-to-end implementation of an edge use case will involve the following:

  • Deployment and management of the application layer, as described in Part 2 of this series. The application can be deployed to the application layer (such as servers) or to the device layer (such as cameras).

  • Deployment and management of the network layer, as described in this article.

  • Running of the application layer, which uses the network layer for network functions. In our example, the models running on the camera detect an object of interest, and the video is then streamed over the network layer to the application layer for further analysis. If performance deteriorates, a slice can be created for specific devices on the network using the tooling we discussed, so that they get the required network bandwidth and performance improves.


In this edge computing series, we provided a high-level overview of edge computing. Edge computing will be a critical part of any enterprise in the future. Enterprises will have a continuing need for high-speed computing as business needs dictate the use of edge devices that must respond rapidly to changes in the environment, data, and business processes.

The advent of 5G and the ability to run containerized applications at various edge nodes make edge computing a reality. To ensure the success of an edge solution, it is critical that the business case is clearly understood, including the benefits and ROI of implementing the use cases. The technical aspects should also be considered carefully. The architecture must account for the different edge nodes, the network layer, the application layer, and the cloud/data center. For each layer, consider the form factor of the different nodes, the type of applications that should be deployed, and how these layers are orchestrated, monitored, and managed.

Implementing edge computing clearly involves much more than what is provided in the articles in this series, but we hope the articles help you get started or progress in rolling out edge computing solutions in your enterprise.