Edge computing architecture: Building out the edge in the application layer and device layer – IBM Developer

Building out the edge in the application layer and device layer

The first article in this edge computing series described a high-level edge computing architecture that identified the key layers of the edge including the device layer, application layer, network layer, and the cloud edge layer. In this article, we dive deeper into the application and device layers, and describe the tools you need to implement these layers. (The third article in this series will cover the network layer.)

As mentioned in the first article, the cloud edge is the source of workloads for the different edge layers, provides the management layer across those layers, and hosts the applications that handle processing that is simply not possible at the other edge nodes due to their limitations.

The device layer consists of small devices running on the edge. The application layer runs on the local edge and has greater compute power than the device layer. Let’s dive into the details of each of these two layers and the respective components in the layers.

Edge computing use case: Workplace safety on a factory floor

In this article, we describe how we implemented a workplace safety use case involving the application and device layers of the edge computing architecture.

In a particular factory, when employees enter a designated area, they must be wearing proper personal protective equipment (PPE), such as a hard hat. A solution is needed to monitor the designated area and issue an alert only when an employee is detected entering the area without wearing a hard hat. Otherwise, no alert is issued. To reduce the load on the network, video streaming starts only when a person is detected.
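The decision flow above can be sketched in a few lines of Python. This is only an illustration of the logic, not IBM's implementation; the detection labels and the 0.8 confidence threshold are assumptions for the example.

```python
# Simplified sketch of the workplace-safety decision flow.
# The labels ("person", "hardhat") and the 0.8 threshold are illustrative.

def should_stream(detections, threshold=0.8):
    """Start streaming to the MEC server only when a person is detected."""
    return any(d["label"] == "person" and d["confidence"] >= threshold
               for d in detections)

def should_alert(detections, threshold=0.8):
    """Alert only when a person is detected without a hardhat."""
    person = should_stream(detections, threshold)
    hardhat = any(d["label"] == "hardhat" and d["confidence"] >= threshold
                  for d in detections)
    return person and not hardhat

# Example frame: a person without a hardhat triggers both streaming and an alert.
frame = [{"label": "person", "confidence": 0.93}]
print(should_stream(frame), should_alert(frame))  # True True
```

A frame containing both a person and a hardhat would start the stream but raise no alert, matching the requirement above.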

To implement the architecture, the following needs to happen:

  1. Deep learning models need to be trained to identify a person wearing a hard hat. This is accomplished using IBM’s Maximo Visual Inspection.

  2. The models need to be containerized and deployed to the edge cluster. This is accomplished using IBM Edge Application Manager.

  3. A model needs to be deployed to the smart camera to identify a person, which triggers the video stream. This deployment of the model to the smart camera is also done using IBM Edge Application Manager.

Here’s an architecture diagram showing these components:

Architecture diagram of edge components for workplace safety use case

Implementing the application layer

The application layer enables you to run applications on the edge. The complexity of the applications that can be run depends on the footprint of the edge server. The edge server, also known as a Multi-Access Edge Compute (MEC) server, can be an x86 server or an IBM Power Systems server that often runs on premises in an environment such as a retail store, a cellular tower, or another location outside of the core network or data center of the enterprise. The sizing of the servers depends on the workload that will be run.

Information from the device layer is sent to the application layer for further processing. Some of this information can then be sent to the cloud or other location.

The application layer is likely built on a containers-based infrastructure where common software services and middleware can run. For example, the application layer could be built on Red Hat OpenShift and have one or more IBM Cloud Paks installed on it where deployed containers run.

We will now look at how products such as IBM Maximo Visual Inspection and IBM Edge Application Manager can be used to create a full end-to-end solution.

Creating a model using IBM Maximo Visual Inspection

IBM Maximo Visual Inspection is a video and image analysis platform that makes it easy for subject matter experts to train and deploy image classification and object detection models. We will see how to build a hardhat detection model using Maximo Visual Inspection.

  1. Create a set of videos with individuals wearing hardhats. Make sure to include varied scenarios with different lighting conditions.
  2. Log on to Maximo Visual Inspection, and click on Data Sets in the top left corner to create a dataset.
  3. Click on Create a new data set and provide a name like hardhat dataset for the data set.
  4. Import the images or videos that you created in step 1.
  5. To create an object detection model, click Objects in the menu on the left, and click Add Objects to create objects. Create an object called hardhat. If you have different colored hats that you want to recognize, you can create an object for each like Yellow Hardhat and Blue Hardhat.
  6. Click the Label Objects button. For videos in your data set, you can use the Auto Capture button to capture frames at desired time intervals. For each frame, click Box, and choose the hardhat object that you just created, and draw a box around the hardhat. Repeat this step for all frames.

    Screen capture of Maximo Visual Inspection image labelling

  7. In general, the larger the data set, the better the accuracy of the model will be. If you do not have a lot of data, you can use the Augment Data button to create additional images using filters such as flip, blur, rotate, and so on.

  8. Once you are done labeling the images, click Train Model, and select the type of training as Object detection. You can choose from a number of options to optimize your model training and click the Train button. The training time depends on the size of data, type of model, and additional options selected.
  9. Once the model is trained, click the Deploy button. You can name your deployed model and choose to export it. Then, download the exported model as a zip file.
  10. The deployed hardhat model now appears in the Deployed Models tab where you can test the model either using the API endpoint displayed or by clicking the Open button and uploading a video to test if the hardhats are being detected.
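As a rough sketch of consuming the deployed model, a client could post a frame to the API endpoint and then filter the returned detections by confidence. The response shape below is an assumption for illustration only; check the API tab of your deployed model for the actual contract.

```python
# Hypothetical response shape from an MVI object-detection endpoint;
# consult your deployment's API documentation for the real schema.
sample_response = {
    "classified": [
        {"label": "hardhat", "confidence": 0.91,
         "xmin": 10, "ymin": 20, "xmax": 110, "ymax": 140},
        {"label": "hardhat", "confidence": 0.42,
         "xmin": 300, "ymin": 50, "xmax": 380, "ymax": 150},
    ]
}

def confident_detections(response, min_confidence=0.8):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in response.get("classified", [])
            if d["confidence"] >= min_confidence]

hits = confident_detections(sample_response)
print(len(hits))  # 1
```

Filtering on the client side like this lets you tune the alert sensitivity without retraining or redeploying the model.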

Containerizing the model using the Maximo Visual Inspection Edge server

The IBM Maximo Visual Inspection edge server lets you quickly and easily deploy multiple trained models. We will use the edge server to create a Docker image of the hardhat model. This allows you to make the hardhat model available to others, such as customers or collaborators, and also gives them the ability to run the model on other systems.

  1. To install the inference server on a machine, download the latest Maximo Visual Inspection Inference software, and navigate to the IVI inference folder to install the inference server. See the installation documentation for detailed instructions.

     cd visual-insight-infer-x86-1.2.0.0-ppa/
    
     sudo yum install ./visual-insights-inference-1.2.0.0-455.5998b55.x86_64.rpm
    
     /opt/ibm/vision-inference/bin/load_images.sh -f visual-insights-inference-x86_64-containers-1.2.0.0.tar
    
  2. Use the deploy_zip_model.sh script to deploy a model exported from Maximo Visual Inspection on this system.

     /opt/ibm/vision-inference/bin/deploy_zip_model.sh --model model_name --port port_number --gpu GPU_number location_of_exported_IVI_model
    

    For example:

     /opt/ibm/vision-inference/bin/deploy_zip_model.sh --model Hardhatmodel --port 6002 --gpu 0 /root/Hardhatmodel.zip
    

    This command creates a Docker container.

  3. Using the Docker container, create a Docker image. To do so, first obtain the container’s ID and then commit the Docker image:

     docker ps | grep model_name
    

    Copy the container ID from the output, and specify it on this command:

     docker commit <container-id> docker_image_name:tag
    

    For example, for our hardhatmodel, the Docker commit command might look like this:

     docker commit <container-id> hardhatmodel:v1
    
  4. Save the Docker image that you created in the above step and compress it with gzip to create a .tgz file using the following command:

     docker save docker_image_name:tag | gzip > file_name.tgz
    

    For example:

     docker save hardhatmodel:v1 | gzip > Hardhatmodel.tgz
    
  5. You can now move this .tgz file to the MEC cluster and run a docker load command to load the Docker image onto that edge cluster.

     docker load < Hardhatmodel.tgz
    

Implementing the edge clusters

The edge cluster capability of IBM Edge Application Manager (IEAM) helps you manage and deploy workloads from a management hub cluster to remote instances of Red Hat OpenShift Container Platform or other Kubernetes-based clusters. Edge clusters are IEAM edge nodes that are Kubernetes clusters. IEAM deploys edge services to an edge cluster, via a Kubernetes operator, enabling the same autonomous deployment mechanisms used with edge devices.

The hardhat model that you created in the previous section will be deployed to the edge cluster using IEAM. This model requires a GPU for optimal performance. The instructions below provide the steps to install a GPU operator, which enables us to use a GPU on an edge cluster.

Set up a GPU operator

To set up GPU support within a Red Hat OpenShift cluster, it is best to install and use the NVIDIA GPU Operator. Note that the MVI models run only on the following GPUs: Tesla T4, Tesla P100, and Tesla V100. You need to complete the following steps before following the steps in the NVIDIA documentation.

  1. Install Red Hat OpenShift. Supported OpenShift versions for the GPU operator are 4.4.29+, 4.5, and 4.6.

  2. Create a Red Hat VM outside the OpenShift cluster, and then entitle it and associate it with a pool. This should generate entitlement files in /etc/pki/entitlement/ folder.

  3. Copy /etc/rhsm/rhsm.conf, /etc/pki/entitlement/entitlement.pem, and /etc/pki/entitlement/entitlement-key.pem to your OpenShift cluster. File names will differ depending on the entitlement.

  4. Refer to “4.3 Installing GPU Operator via Helm” in the NVIDIA documentation. Follow steps 1 and 2 to create the machineconfig from the files that you copied in the previous step, and verify that the cluster-wide entitlement succeeds by using the test pod that queries a Red Hat subscription repo for the kernel-devel package.

  5. After the cluster-wide entitlement is successful, you can install the GPU operator by following either workflow: the Helm method or the OpenShift OperatorHub method.

Register the edge node

Once the GPU operator is installed, the next step is to install and register this edge cluster and an edge device (Mac/TX2) to the IEAM hub. Registering the edge nodes enables us to deploy workloads, like the hardhat model above, as edge services on the edge cluster. Use the IEAM documentation to install the edge agent on the edge nodes and register them with the IEAM hub.

Use a helm chart and helm operator service to deploy a service on the edge cluster

Once you register the OpenShift cluster, you can create services to deploy on the edge cluster. In this scenario, we will use a helm chart and a helm operator service to deploy our hardhat MVI model on the OpenShift cluster. Using the Docker image that we created earlier for the MVI model, we will create a helm chart that can be used to create a helm operator service.

Create a helm chart

  1. Verify that helm is installed:

    helm version

    For example: `Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}`

  2. Create the helm chart (the chart name must be lowercase):

    helm create mvi-edge-model

  3. Update Chart.yaml if needed:

    vi mvi-edge-model/Chart.yaml

  4. Update the values.yaml to include your deployment image, tag, and ports for your service.

    vi mvi-edge-model/values.yaml

    Ensure that the repository, tag, and service ports match your application's Docker image properties.

    For example:

     mvi-edge-model:
         enabled: yes
         service:
             type: ClusterIP
             inferencePort: 5001
     image:
         registry: <OCP-private-registry>
         repository: default/hardhatx86aug9
         tag: v1
         pullPolicy: IfNotPresent
    
  5. Modify the deployment.yaml and check the service.yaml files in the templates folder according to your application as needed. Ensure that the port number in deployment.yaml matches the port number in values.yaml above.

  6. Test whether the helm chart is valid:

    helm template mvi-edge-model

    If the chart is valid, this command shows the rendered deployment and service yaml files. It returns errors if there is an error in the helm chart.

Create and publish a helm operator service

Use the following steps to create a helm operator service on top of the helm chart you just created. These instructions assume that operator-sdk is installed on a Mac or a device. You can download and install version 0.17.0 of the operator-sdk from the Operator Framework GitHub repository.

  1. Create a helm operator with the operator-sdk using the helm chart created above:

    operator-sdk new mvi-edge-operator --type=helm --api-version=mvi-edge.com/v1 --kind=Service --helm-chart=~/mvi-edge-model

    You should see similar results to:

     INFO[0000] Creating new Helm operator 'mvi-edge-operator'.
     INFO[0000] Created helm-charts/mvi-edge-model
     INFO[0000] Generating RBAC rules
     WARN[0000] Using default RBAC rules: failed to generate RBAC rules: failed to get server resources: Get https://kubernetes.docker.internal:6443/api?timeout=32s: EOF
     INFO[0000] Created build/Dockerfile
     INFO[0000] Created watches.yaml
     INFO[0000] Created deploy/service_account.yaml
     INFO[0000] Created deploy/role.yaml
     INFO[0000] Created deploy/role_binding.yaml
     INFO[0000] Created deploy/operator.yaml
     INFO[0000] Created deploy/crds/edge-detector.com_v1_service_cr.yaml
     INFO[0000] Generated CustomResourceDefinition manifests.
     INFO[0000] Project creation complete.
    
  2. Next, build the operator image:

     cd mvi-edge-operator
     operator-sdk build docker.io/ibmgsc/mvi-edge-operator_amd64:1.0.0
    

    Your $DOCKER_IMAGE_BASE (which is ibmgsc in this case) will be different.

    You should see results similar to these results:

    INFO[0000] Building OCI image docker.io/ibmgsc/mvi-edge-operator_amd64:1.0.0
     Sending build context to Docker daemon  31.23kB
     Step 1/3 : FROM quay.io/operator-framework/helm-operator:v0.17.0
     ---> ce3d68592219
     Step 2/3 : COPY watches.yaml ${HOME}/watches.yaml
     ---> 6de4f6f579f4
     Step 3/3 : COPY helm-charts/ ${HOME}/helm-charts/
     ---> 2cbad6fbf986
     Successfully built 2cbad6fbf986
     Successfully tagged ibmgsc/mvi-edge-operator_amd64:1.0.0
     INFO[0002] Operator build complete.
    
  3. Publish the image to the repository with:

    docker push docker.io/ibmgsc/mvi-edge-operator_amd64:1.0.0

    Again, your $DOCKER_IMAGE_BASE (which is ibmgsc in this case) will be different.

  4. Update your image name in operator.yaml with the above Docker image. Edit the REPLACE_IMAGE_NAME with docker.io/ibmgsc/mvi-edge-operator_amd64:1.0.0 from above.

    vi deploy/operator.yaml

    # Replace this with the built image name
    image: docker.io/ibmgsc/mvi-edge-operator_amd64:1.0.0

(Optional) To create and test whether the files work on the cluster where you want to deploy this model:

```
kubectl create -f deploy/service_account.yaml -n openhorizon-agent
kubectl create -f deploy/role.yaml -n openhorizon-agent
kubectl create -f deploy/role_binding.yaml -n openhorizon-agent
kubectl create -f deploy/operator.yaml -n openhorizon-agent
kubectl create -f deploy/crds/mvi-edge.com_services_crd.yaml -n openhorizon-agent
kubectl create -f deploy/crds/mvi-edge.com_v1_service_cr.yaml -n openhorizon-agent
```

When you are done testing, clean up with the corresponding kubectl delete commands:

```
kubectl delete -f deploy/crds/mvi-edge.com_v1_service_cr.yaml -n openhorizon-agent
kubectl delete -f deploy/crds/mvi-edge.com_services_crd.yaml -n openhorizon-agent
kubectl delete -f deploy/operator.yaml -n openhorizon-agent
kubectl delete -f deploy/role_binding.yaml -n openhorizon-agent
kubectl delete -f deploy/role.yaml -n openhorizon-agent
kubectl delete -f deploy/service_account.yaml -n openhorizon-agent
```

If the `kubectl delete -f deploy/crds/mvi-edge.com_services_crd.yaml` command hangs, press Ctrl+C (or Command+C on a Mac) and run the following command to patch and delete the respective CRD:

`oc patch crd/services.mvi-edge.com -p '{"metadata":{"finalizers":[]}}' --type=merge`
  5. Create a tar file for the operator files:

     cd deploy/
     tar -zvcf mvi-edge-operator.tar.gz .
    

    Make sure that you complete the steps to register your Mac as an edge device to the IEAM hub before proceeding to publishing the edge cluster service. Refer to the IEAM documentation for more details on installing and registering the horizon edge agent on a Mac (edge device).

  6. Create a horizon service:

    hzn dev service new -V 1.0.0 -s mvi-edge -c cluster

  7. Update the service definition file: edit operatorYamlArchive and add the path for the mvi-edge-operator.tar.gz file from the above step.

     vi horizon/service.definition.json
     "clusterDeployment": {
             "operatorYamlArchive": "~/edge-cluster-example/mvi-edge-operator/deploy/mvi-edge-operator.tar.gz"
         }
    
  8. Set environment variables. These commands set up the environment variables that are necessary to publish this service to IEAM.

     eval $(hzn util configconv -f horizon/hzn.json)
     export ARCH=$(hzn architecture)
     echo $ARCH
    
  9. Publish the service to IEAM. This command pushes the cluster-compatible MVI edge service to IEAM, from which patterns or policies can be built to deploy it to multiple clusters linked to this specific IEAM hub.

    hzn exchange service publish -f horizon/service.definition.json

  10. Get the deployment policy files to deploy your new service to your edge node. Change the properties and service values accordingly in the deployment policy JSON.

    wget https://raw.githubusercontent.com/open-horizon/examples/master/edge/services/helloworld/policy/deployment.policy.json

  11. Publish and view your deployment policy in the Horizon Exchange.

    hzn exchange deployment addpolicy -f deployment.policy.json ${HZN_ORG_ID}/policy-${SERVICE_NAME}_${SERVICE_VERSION}
    hzn exchange deployment listpolicy ${HZN_ORG_ID}/policy-${SERVICE_NAME}_${SERVICE_VERSION}
    

Create a node policy on the edge cluster

After you publish the edge service and policy to the IEAM hub, you can deploy this policy to the edge cluster by creating a node policy that matches the node properties that you defined in the above deployment policy. You need to have already completed the steps to install the GPU operator and register your edge cluster to IEAM hub before completing these steps.

  1. Get the node policy json file and update according to your deployment policy:

    wget https://raw.githubusercontent.com/open-horizon/examples/master/edge/services/helloworld/policy/node.policy.json

  2. Edit the above node.policy.json file according to your use case.

  3. Register your edge node and apply the node policy:

     hzn register -u $HZN_EXCHANGE_USER_AUTH
     cat node.policy.json | hzn policy update -f-
    
  4. After the registration is complete, run `hzn policy list` to see the node policy properties created on the device.

  5. Once the node policy on the cluster matches the constraints of the deployment policy, the edge cluster will make an agreement with one of the Horizon agreement bots (this typically takes about 15 seconds). Repeatedly query the agreements of this device until the `agreement_finalized_time` and `agreement_execution_start_time` fields are filled in.

    hzn agreement list

  6. Check that the containers are in the running state by issuing the `oc get pods` command after `agreement_execution_start_time` is filled in.
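Conceptually, deployment happens when a deployment policy's constraints are satisfied by the node's properties. The sketch below is a deliberately simplified, hypothetical illustration of that matching; the real Horizon policy language supports much richer constraint expressions than the equality-only form handled here.

```python
# Simplified sketch of IEAM policy matching: a deployment policy's
# constraints must all be satisfied by the node's properties.
# Only "name == value" constraints are handled in this illustration.

def node_matches(node_properties, constraints):
    props = {p["name"]: p["value"] for p in node_properties}
    for c in constraints:
        name, _, value = c.partition(" == ")
        if str(props.get(name)) != value:
            return False
    return True

node = {"properties": [{"name": "gpu", "value": "true"},
                       {"name": "openshift", "value": "true"}]}
deployment = {"constraints": ["gpu == true"]}

print(node_matches(node["properties"], deployment["constraints"]))  # True
```

A node whose properties fail any constraint simply never forms an agreement, which is why step 5 above can require polling until the policies line up.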

Implementing the device layer

The edge device layer will contain devices that have compute and storage power and can run containers. These devices can run relatively simple applications to gather information, run analytics, apply AI rules, and even store some data locally to support operations at the edge. The devices could handle analysis and real-time inferencing without involvement of the edge server or the enterprise region.

Devices can be small. Examples include smart thermostats, smart doorbells, home cameras or cameras on automobiles, and augmented reality or virtual reality glasses. Devices can also be large, such as industrial robots, automobiles, smart buildings, and oil platforms. Edge computing analyzes the data at the device source.

The primary product for the device layer is IBM Edge Application Manager. IBM Edge Application Manager provides a new architecture for edge node management. With IBM Edge Application Manager, you can quickly, autonomously, and securely deploy and manage enterprise application workloads at the edge and at massive scale.

On the device layer, any tools or components must be able to manage workloads placed across clusters and the device edge. While many edge devices are capable of running sophisticated workloads such as machine learning, video analytics, and IoT services, if a workload is too large for the device layer, it should be placed at the application layer. The use of open source components is essential at the device layer because our edge solution must be portable across private, public, and edge clouds.

In our use case, we are using an NVIDIA Jetson TX2 as the smart camera. To implement the use case, this edge device needs to be registered to IBM Edge Application Manager.

In this section, we will go through the steps involved in installing the Open Horizon agent on our device and registering the device to the IBM Edge Application Manager Exchange so that we can deploy models on the device. Once our TX2 device is registered to IBM Edge Application Manager, the object detection model can be deployed, which can then identify people in the danger zone and start the stream to the MEC server.

Configure and register your edge device to the IEAM hub using the IEAM documentation.

Register patterns and deploy models to your edge device

Now that the edge device is registered to IBM Edge Application Manager, we can register edge patterns from the exchange server. An edge pattern is a descriptor file that describes which Docker images should be downloaded and how they should be run on the device. Registering patterns on the device downloads the associated services and Docker images that are required to run the corresponding models on the edge device. These patterns and services are architecture specific.

  1. Get a list of all the edge patterns on the exchange using the following command:

     hzn exchange pattern list
    
  2. Register a pattern from the above list of patterns that are available on IEAM:

     hzn register -p pattern-SERVICE_NAME-$(hzn architecture)
    

    For example:

     hzn register -p IBM/pattern-ibm-objectdetection
    
  3. List the agreements to see the status of the registered services. The agreement status shows the hand-off between the device and the exchange server. Agreements are normally received and accepted in less than a minute. When an agreement is accepted, the corresponding containers can begin running. The Horizon agent must first complete a docker pull operation on each Docker container image and verify its cryptographic signature with the Horizon exchange. After the container images for the agreement are downloaded and verified, an appropriate Docker network is created for the images, and then the containers can run. When the containers are running, you can view the container image status by running the docker ps command.

     hzn agreement list
    
  4. Optionally, you can unregister the currently running pattern so that you can deploy a different pattern. Unregistering a pattern stops the running containers on the edge device and restarts the horizon service to make the device available to accept new patterns. To unregister a pattern:

     hzn unregister -f
    

We have now deployed the object detection model on the device, and the device is ready to accept further models. With the object detection model deployed on the TX2, whenever the camera detects a person, we can start video streaming to the MEC server.
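The device-side behavior can be sketched as a small control loop: start streaming when a person is detected and stop after a few consecutive empty frames. This is an illustrative Python stand-in for the TX2 pipeline, not the deployed service; the detector, frame source, and stop_after value are assumptions.

```python
# Illustrative device-side loop: start streaming to the MEC server when a
# person is detected, and stop after N consecutive empty frames so the
# stream is not toggled off by a single missed detection.

def stream_controller(frames, detect, stop_after=3):
    """Yield (frame, streaming) pairs; streaming toggles based on detections."""
    streaming, empty = False, 0
    for frame in frames:
        if detect(frame):
            streaming, empty = True, 0
        elif streaming:
            empty += 1
            if empty >= stop_after:
                streaming = False
        yield frame, streaming

# Fake detector: a frame is just a bool saying whether a person is present.
frames = [False, True, False, False, False, False]
states = [s for _, s in stream_controller(frames, detect=bool)]
print(states)  # [False, True, True, True, False, False]
```

The hysteresis (stop_after) is a common design choice in event-driven streaming so the network is not hammered by rapid start/stop cycles.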

Summary and next steps

We covered two key components of the edge: the application layer and the device layer. Connectivity to the edge is a key component required to successfully implement the edge. In many cases, the edge will be implemented where connectivity is not available or is not sufficient to meet the low latency requirements for the edge nodes. In such cases, the key network components have to be deployed on the edge.

To implement this edge computing architecture, these are the steps that you followed:

  1. Create MVI model and containerize it.
  2. Install GPU operator on the edge cluster.
  3. Register edge device and edge cluster as edge nodes to IEAM Hub.
  4. Create helm chart and helm operator service on edge device for the MVI model.
  5. Publish the helm operator service to IEAM Hub.
  6. Create node policy and deploy the edge operator service to the cluster.
  7. Deploy the object detection pattern to the edge device.

Our next article in this edge computing series dives deeper into the network edge and the tooling that is needed to implement it. This article discusses how the different layers come together using a use case that requires all three layers: application, device, and network.