Building out the edge in the application layer and device layer

The first article in this edge computing series described a high-level edge computing architecture that identified the key layers of the edge including the device layer, application layer, network layer, and the cloud edge layer. In this article, we dive deeper into the application and device layers, and describe the tools you need to implement these layers. (The third article in this series will cover the network layer.)

As mentioned in the first article, the cloud edge is the source of workloads for the other edge layers, provides the management layer across them, and hosts the applications whose processing is simply not possible at the other edge nodes because of the resource limitations at those nodes.

The device layer consists of small devices running on the edge. The application layer runs on the local edge and has greater compute power than the device layer. Let’s dive into the details of each of these two layers and the respective components in the layers.

Edge computing use case: Workplace safety on a factory floor

In this article, we will describe how we implemented a workplace safety use case involving the application and device layers of the edge computing architecture.

In a particular factory, employees who enter a designated area must wear proper personal protective equipment (PPE), such as a hard hat. A solution is needed to monitor the designated area and issue an alert only when an employee is detected entering the area without a hard hat; otherwise, no alert is issued. To reduce the load on the network, video streaming starts only when a person is detected.

To implement the architecture, the following needs to happen:

  1. Models need to be trained to identify a person wearing a hard hat. This is accomplished using IBM Maximo Visual Inspector.

  2. The models need to be containerized and deployed to the edge. This is accomplished using IBM Cloud Pak for Multicloud Management.

  3. The models need to be integrated with a video analytics system. The video analytics system needs to be able to manage the video stream, determine if the individual is in a danger zone, then call the hard hat model to determine if the individual is wearing a hard hat, and fire an alert accordingly. This is accomplished using IBM Video Analytics.

  4. Models need to be deployed to the camera to identify a human, which triggers the camera to start streaming. This is done using IBM Edge Application Manager.

Here’s an architecture diagram showing these four components:

Architecture diagram of edge components for workplace safety use case

Implementing the application layer

The application layer enables you to run applications on the edge. The complexity of the applications that can be run depends on the footprint of the edge server. The edge server can be an x86 server or an IBM Power Systems server, and it often runs on premises in an environment such as a retail store, a cellular tower, or another location outside of the enterprise's core network or data center. The sizing of the servers depends on the workload that will be run.

Information from the device layer is sent to the application layer for further processing. Some of this information can then be sent to the cloud or other location.

The application layer is typically built on a container-based infrastructure where common software services and middleware can run. For example, the application layer could be built on Red Hat OpenShift with one or more IBM Cloud Paks installed on it, where the deployed containers run.

We will now look at how products such as Maximo Visual Inspector, IBM Cloud Pak for Multicloud Management, IBM Video Analytics, and IBM Edge Application Manager can be used to create a full end-to-end solution.

Creating a model using Maximo Visual Inspector

Maximo Visual Inspector is a video and image analysis platform that makes it easy for subject matter experts to train and deploy image classification and object detection models. We will see how to build a hardhat detection model using Maximo Visual Inspector.

  1. Create a set of videos with individuals wearing hard hats. Make sure to include varied scenarios with different lighting conditions.
  2. Log on to Maximo Visual Inspector, and click on Data Sets in the top left corner to create a dataset.
  3. Click on Create a new data set and provide a name like hardhat dataset for the data set.
  4. Import the images or videos that you created in step 1.
  5. To create an object detection model, click Objects in the menu on the left, and click Add Objects to create objects. Create an object called hardhat. If you have different colored hats that you want to recognize, you can create an object for each like Yellow Hardhat and Blue Hardhat.
  6. Click the Label Objects button. For videos in your data set, you can use the Auto Capture button to capture frames at desired time intervals. For each frame, click Box, and choose the hardhat object that you just created, and draw a box around the hardhat. Repeat this step for all frames.

    Screen capture of Maximo Visual Inspector image labelling

  7. In general, the larger the data set, the better the accuracy of the model will be. If you do not have a lot of data, you can use the Augment Data button to create additional images using filters such as flip, blur, rotate, and so on.

  8. Once you are done labeling the images, click Train Model, and select the type of training as Object detection. You can choose from a number of options to optimize your model training and click the Train button. The training time depends on the size of data, type of model, and additional options selected.
  9. Once the model is trained, click the Deploy button. You can name your deployed model and choose to export it. Then, download the exported model as a zip file.
  10. The deployed hardhat model now appears in the Deployed Models tab where you can test the model either using the API endpoint displayed or by clicking the Open button and uploading a video to test if the hardhats are being detected.
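
If you prefer to test from the command line, you can post an image to the deployed model with curl. The following is only a sketch: copy the exact API endpoint URL from the Deployed Models page, because the URL and the form field name shown here are placeholders:

    # Placeholder values: copy the real API endpoint from the Deployed Models tab,
    # and check the product's API documentation for the expected form field name.
    # -k skips certificate verification, which may be needed for self-signed certificates.
    curl -k -F "files=@test-frame.jpg" <deployed-model-API-endpoint>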

Containerizing the model using the Maximo Visual Inspector Inference server

Maximo Visual Inspector Inference server lets you quickly and easily deploy multiple trained models. We will use the inference server to create a Docker image of the hardhat model. This allows you to make the hardhat model available to others, such as customers or collaborators, and to run the model on other systems.

  1. To install the inference server on a machine, download the latest Maximo Visual Inspector Inference software, and navigate to the extracted inference folder to install the inference server. See the installation documentation for detailed instructions.

     cd visual-insight-infer-x86-1.2.0.0-ppa/
    
     sudo yum install ./visual-insights-inference-1.2.0.0-455.5998b55.x86_64.rpm
    
     /opt/ibm/vision-inference/bin/load_images.sh -f visual-insights-inference-x86_64-containers-1.2.0.0.tar
    
  2. Use the deploy_zip_model.sh script to deploy a model exported from Maximo Visual Inspector on this system.

     /opt/ibm/vision-inference/bin/deploy_zip_model.sh --model model_name --port port_number --gpu GPU_number location_of_exported_IVI_model
    

    For example:

     /opt/ibm/vision-inference/bin/deploy_zip_model.sh --model Hardhatmodel --port 6002 --gpu 0 /root/Hardhatmodel.zip
    

    This command creates a Docker container.

  3. Using the Docker container, create a Docker image. To do so, first obtain the container’s ID and then commit the Docker image:

     docker ps | grep model_name
    

    Copy the container ID from the output, and specify it on this command:

     docker commit <container-id> docker_image_name:tag
    

    For example, for our hardhatmodel, the Docker commit command might look like this:

     docker commit <container-id> hardhatmodel:v1
    
  4. Save the Docker image that you created in the previous step and compress it with gzip to create a .tgz file using the following command:

     docker save docker_image_name:tag | gzip > file_name.tgz
    

    For example:

     docker save hardhatmodel:v1 | gzip > Hardhatmodel.tgz
    
  5. You can now move this .tgz file to any other system and run a docker load command to load the Docker image onto that system.

     docker load < Hardhatmodel.tgz
    
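
To confirm that the image loaded successfully on the target system, list the local images; the repository name matches the tag that you used in the docker commit step:

    docker images | grep hardhatmodel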

Deploying our model to the edge servers using IBM Cloud Pak for Multicloud Management

The IBM Cloud Pak for Multicloud Management, which runs on Red Hat OpenShift, provides consistent visibility, governance, and automation from on premises to the edge. Using IBM Cloud Pak for Multicloud Management, the operator can have rich views of how clusters operate within the environment. You can use this tutorial on IBM Cloud Garage to learn how to deploy and manage applications across clusters using IBM Cloud Pak for Multicloud Management.

To implement our use case, the hardhat model that you created in the previous section needs to be deployed to the edge servers. In our case, the model is deployed to IBM Cloud Private. The previously created hardhat model (in the .tgz file) is loaded on IBM Cloud Pak for Multicloud Management and can then be deployed to multiple clusters using helm charts.

In the following steps, we will go through the process of deploying these Docker images to IBM Cloud Private using the helm charts.

Deploying the model from IBM Cloud Pak for Multicloud Management

  1. Log in to IBM Cloud Pak for Multicloud Management, and SSH into the system:

     ssh <user>@<mcm-ip-address>
    
  2. Add the Docker image to the IBM Cloud Pak for Multicloud Management Private repository:

    docker login <mcm-docker-repo>
    docker load < Hardhatmodel.tgz
    docker tag <img>:<tag> <mcm-docker-repo>/default/<img>:<tag>
    docker push <mcm-docker-repo>/default/<img>:<tag>
    

    Note: Hardhatmodel.tgz is the .tgz file that you created in the previous section. Make sure that the file is transferred to the IBM Cloud Pak for Multicloud Management system.

  3. Add an image policy on the target cluster, which in our case is IBM Cloud Private. Log in to the target cluster’s IBM Cloud Private console, and navigate to Manage > Resource Security > Image Policies > Add Image Policy. Add a name, set the scope to cluster, and add the registry as the IBM Cloud Pak for Multicloud Management private repo, <mcm-docker-repo>. (A declarative alternative is sketched after this list.)

  4. Add the private repo directory and the ca.crt file to the target cluster’s file system.

    SSH into the target cluster:

     ssh <user>@<icp-ip-address>
    

    On the target cluster, create a directory for the private repo in the certs.d folder:

     mkdir /etc/docker/certs.d/<mcm-docker-repo>
    

    Copy ca.crt from the hub cluster to the target cluster. On the local machine run this command:

     scp <mcm-user>@<mcm-ip-address>:/etc/docker/certs.d/<mcm-docker-repo>/ca.crt <icp-user>@<icp-ip-address>:/etc/docker/certs.d/<mcm-docker-repo>/
    
  5. Run the following command in both the target cluster and the hub cluster to create the pull secret that is then used in the deployment.yaml file of the helm chart:

     kubectl create secret docker-registry <secret-name> --docker-server=<mcm-docker-repo> --docker-username=<username> --docker-password=<password>
    
  6. Add the IBM Cloud Pak for Multicloud Management IP address to the IBM Cloud Private hosts file:

     vi /etc/hosts
    

    Add a line like this with the IP address and host name: <mcm-ip-address> <mcm-hostname>
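
For example, the hosts entry might look like this (the IP address and host name shown here are placeholders for your environment):

    192.0.2.10    mcm-hub.example.com

Also, as an alternative to the console steps in step 3, the image policy can usually be created declaratively. The following is only a minimal sketch, assuming that IBM Cloud Private container image security enforcement and its ClusterImagePolicy resource are available in your release; verify the apiVersion and fields against your cluster's documentation:

    # Hypothetical example: allow images pulled from the hub's private repo
    apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
    kind: ClusterImagePolicy
    metadata:
      name: mcm-repo-policy
    spec:
      repositories:
        - name: "<mcm-docker-repo>/*"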

Creating and publishing the helm chart

  1. Create a helm chart using the following command. Use all lowercase letters for its name. This command automatically generates sample YAML files, including chart.yaml, values.yaml, service.yaml, and deployment.yaml.

     helm create my-app
    
  2. Edit the chart.yaml file to specify the custom name and version (as you can see in the screen shot below).

  3. Edit the values.yaml file to update the Docker image and node port information (as you can see in the screen shot below).

  4. Edit the deployment.yaml file in the templates folder to add any additional parameters like GPUs in the resources section of yaml file.

The following screen shot shows all four .yaml files that were created for our hardhat scenario.

YAML files for the hardhat scenario
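
Because the exact contents depend on your registry, image, and cluster, the following is only a minimal sketch of the kinds of edits involved. The image repository, tag, port, and GPU count are hypothetical, and the exact keys depend on the templates that your helm version generates:

    # values.yaml (illustrative): point the chart at the model image and expose a NodePort
    image:
      repository: <mcm-docker-repo>/default/hardhatmodel
      tag: v1
      pullPolicy: IfNotPresent
    service:
      type: NodePort
      port: 6002

    # resources section of templates/deployment.yaml (illustrative): request a GPU if the model needs one
    resources:
      limits:
        nvidia.com/gpu: 1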

Now, you need to package and publish the helm chart.

  1. Display the property values set for the helm chart by using the helm template command:

     helm template my-app
    

    Change my-app to whatever name you used for your helm chart.

  2. Package your helm chart into a .tgz file.

     helm package my-app
    
  3. Create a public GitHub repo and clone it to your local folder.

     git clone <GitHub URL>
    
  4. Create an empty index.yaml file and push it to the repo:

    touch index.yaml
    git add index.yaml
    git commit -a -m "add index.yaml"
    git push
    
  5. Add the helm chart package to the GitHub repo folder, and update the index.yaml file:

     helm repo index helm-example/ --url "<GitHub URL>"
    

    Your GitHub repo now has the helm package (.tgz file) and the index.yaml file. Commit and push these files so that they appear in the GitHub repo.

  6. Add the helm repository to IBM Cloud Pak for Multicloud Management. In the IBM Cloud Pak for Multicloud Management console, navigate to Manage > Helm Repositories > Add Repository.

  7. Publish the helm chart from IBM Cloud Pak for Multicloud Management to IBM Cloud Private. Navigate to the Catalog, then search for and click your chart name. Click Configure, and select the IBM Cloud Private cluster that is linked to your IBM Cloud Pak for Multicloud Management. Finally, navigate to the Workloads > Helm Releases section to find your release.

Now that we have trained a model and deployed it to the edge server, you can use that model to recognize hard hats.

Use the trained model to recognize hard hats using IBM Video Analytics

Video data can be processed at the edge, either at the application layer or the device layer. Processing video data at the edge can help reduce latency, lower bandwidth consumption, and enable the user to make faster, more informed decisions.

IBM Video Analytics is used to manage the video stream from a camera. It is also used to define an object to detect as well as the area to designate as a danger zone. Once it detects a person entering the danger zone area, it makes a call to the Maximo Visual Inspector hard hat model to determine whether that individual is wearing a hard hat. If a person is not wearing a hard hat, IBM Video Analytics fires an alert.

You’ll need to install and configure these key components of IBM Video Analytics:

  • Metadata Ingestion, Lookup, and Signaling
  • Semantic Streams Engine
  • Deep Learning Engine

These components can be set up to run at the application layer on a single server.

  1. Configure a channel. Set up a camera view where a danger zone can be defined and a person can be detected when entering the defined area.

    Screen capture of channel configuration

  2. Configure an analytics profile. Create a new AnalyticProfile, or update an existing one, for tracking whether a person is wearing a hard hat. The following example illustrates a HardHat Tracking profile that processes the analytics results from Maximo Visual Inspector, dumps the result image in the specified directory, and triggers a tripwire alert if no white or blue hard hat is found.

    Screen capture of analytics profile

  3. Configure your alerts. Set up at least one type of alert, such as a tripwire or a region alert to define the danger zone area. The figure below shows sample screens for a HardHat Tracking analytic profile being registered and assigned and how a tripwire alert can be configured to define an area of interest.

    Screen capture of configuring alerts

  4. Configure the Deep Learning Engine in IBM Video Analytics to call the deployed model in Maximo Visual Inspector. This can be useful if you are running this engine on a system without a GPU and you have installed Maximo Visual Inspector on a separate system with a GPU. The Deep Learning Engine in IBM Video Analytics can run local models and remote Maximo Visual Inspector models. To use any model in IBM Video Analytics, the model must be configured in the Deep Learning Engine configuration files, which include a Docker Compose YAML file, an nginx configuration, and a JSON file for each model, as shown in the figure below. For more information, see the IBM Video Analytics documentation on Managing Models in the Deep Learning Engine.

    Screen capture of configuring models for deep learning engine

  5. When you are done configuring the components, restart IBM Video Analytics. Then, use the command line interface to verify that the Deep Learning Engine can call Maximo Visual Inspector successfully. For example, substitute the image file name and URL for your setup, and run the following commands.

    Verify a direct call to Maximo Visual Inspector running on a server, for example svrX, port 6005:

     curl -F "imagefile=@testhardhat.jpg" http://<svrX>:6005/inference
    

    Verify a Deep Learning Engine call to Maximo Visual Inspector:

     curl -F "image=@testhardhat.jpg" http://localhost:14001/detect/hardhat
    
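
Assuming the endpoints return JSON (and that python3 is available on the server), you can pipe the responses through a formatter to make them easier to read; this is just a convenience, not an IBM Video Analytics requirement:

    curl -s -F "image=@testhardhat.jpg" http://localhost:14001/detect/hardhat | python3 -m json.tool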

Implementing the device layer

The edge device layer contains devices that have compute and storage capacity and can run containers. These devices can run relatively simple applications to gather information, run analytics, apply AI rules, and even store some data locally to support operations at the edge. The devices can handle analysis and real-time inferencing without involving the edge server or the enterprise region.

Devices can be small. Examples include smart thermostats, smart doorbells, home cameras or cameras on automobiles, and augmented reality or virtual reality glasses. Devices can also be large, such as industrial robots, automobiles, smart buildings, and oil platforms. Edge computing analyzes the data at the device source.

The primary product for the device layer is IBM Edge Application Manager. IBM Edge Application Manager provides a new architecture for edge node management. With IBM Edge Application Manager, you can quickly, autonomously, and securely deploy and manage enterprise application workloads at the edge and at massive scale.

At the device layer, any tools or components must be able to manage workloads placed across clusters and the device edge. While many edge devices are capable of running sophisticated workloads such as machine learning, video analytics, and IoT services, a workload that is too large for the device layer should be placed at the application layer. The use of open-source components is important at the device layer because it keeps the edge solution portable across private, public, and edge clouds.

In our use case, we are using an NVIDIA Jetson TX2 as the smart camera. To implement the use case, this edge device needs to be registered to IBM Edge Application Manager.

In this section, we will go through the steps involved in installing the Open Horizon agent on our device and registering the device to the IBM Edge Application Manager Exchange so that we can deploy models on the device. Once our TX2 device is registered to IBM Edge Application Manager, the YOLO object detection model can be deployed, which can then identify people in the danger zone and start streaming video to the server.

Configure your edge device

  1. Log in to the device, and run the following command to switch to a user that has root privileges:

     sudo -s
    
  2. Verify that your Docker version is 18.06.1-ce or later. Some Linux distributions can be set up to run older Docker versions. Run the docker --version command to check your installed Docker version. If necessary, update to the current version of Docker by running the following command:

    curl -fsSL get.docker.com | sh
    

    Run the docker --version command again:

    docker --version
    

    You should see output similar to this:

    Docker version 18.06.1-ce, build e68fc7a

  3. Install the Open Horizon agent on the device. Copy the three relevant Horizon Debian packages for your operating system and architecture (horizon, horizon-cli, and bluehorizon) from the server where IBM Edge Application Manager is installed to your device. These packages are in the ibm-edge-computing-x86_64-<VERSION>.tar.gz release file. After you’ve installed IBM Edge Application Manager on the server, the required packages are located in the following directory: /ibm-edge-computing-x86_64-<VERSION>/horizon-edge-packages/linux/<OS>/<ARCH>/. Install the copied Horizon Debian packages by running one of the following commands (shown here for our TX2 device):

    dpkg -i *horizon*.deb
    apt install ./*horizon*.deb
    
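
To confirm that the Horizon packages installed on a Debian-based device such as our TX2, you can list them:

    dpkg -l | grep horizon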

Register the device to IBM Edge Application Manager

  1. Stop the agent.

     systemctl stop horizon.service
    
  2. Point your edge device's Horizon agent to IBM Edge Application Manager by creating or editing /etc/default/horizon:

     vi /etc/default/horizon
    
  3. Set the following values in the file, substituting your cluster's URL for $ICP_URL:

     HZN_EXCHANGE_URL=$ICP_URL/ec-exchange/v1
     HZN_FSS_CSSURL=$ICP_URL/ec-css/
    
  4. Install the icp.crt certificate:

     sudo cp icp.crt /usr/local/share/ca-certificates && sudo update-ca-certificates
    
  5. Restart the agent by running the following command:

     systemctl restart horizon.service
    
  6. Verify the agent is running and properly configured by issuing these commands:

     hzn version
     hzn exchange version
     hzn node list
    
  7. To create an API key:

     cloudctl login <ICP_URL>
     cloudctl iam api-key-create iamapikey
    
  8. Set these environment variables, using the API key name and value that were generated by the previous command:

     export ICP_URL=<ICP_URL>
     export HZN_ORG_ID=IBM
     export HZN_EXCHANGE_USER_AUTH='<apikey-name>:<apikey-value>'
    
  9. Confirm that the device can authenticate with IBM Edge Application Manager and that the environment variables are set correctly:

     hzn exchange user list
    
  10. View the list of sample edge service deployment patterns by using either of these commands:

    hzn exchange pattern list
    

    Or, this one:

    hzn exchange pattern list $HZN_ORG_ID
    
  11. At this point, your edge device is linked to IBM Edge Application Manager. Run the following commands to register your device with IBM Edge Application Manager so that services, patterns, and policies can be deployed to it. Create a unique node ID and token for each device in HZN_EXCHANGE_NODE_AUTH.

    export HZN_EXCHANGE_NODE_AUTH="gsctx2nov27:gsctx2tokennov27"
    hzn exchange node create -n $HZN_EXCHANGE_NODE_AUTH
    hzn exchange node confirm
    
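
For reference, after completing the steps above, the agent configuration and the environment used by the hzn commands on the device might look like the following. The cluster URL, API key, and node credentials shown here are placeholders only:

    # /etc/default/horizon (placeholder cluster URL)
    HZN_EXCHANGE_URL=https://mycluster.icp:8443/ec-exchange/v1
    HZN_FSS_CSSURL=https://mycluster.icp:8443/ec-css/

    # Environment variables used by the hzn commands (placeholder values)
    export ICP_URL=https://mycluster.icp:8443
    export HZN_ORG_ID=IBM
    export HZN_EXCHANGE_USER_AUTH='<apikey-name>:<apikey-value>'
    export HZN_EXCHANGE_NODE_AUTH="<node-id>:<node-token>"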

Register patterns and deploy models to your edge device

Now that the edge device is registered to IBM Edge Application Manager, we can register edge patterns from the exchange server. An edge pattern is a descriptor that specifies which Docker images should be downloaded and how they should run on the device. Registering a pattern on the device downloads the associated services and Docker images that are required to run the corresponding models on the edge device. These patterns and services are architecture specific.

  1. Get a list of all the edge patterns on the exchange using the following command:

     hzn exchange pattern list
    
  2. Register a pattern or service from the list of patterns that are available on IBM Edge Application Manager:

     hzn register -p pattern-SERVICE_NAME-$(hzn architecture)
    

    For example:

     hzn register -p IBM/pattern-ibm.yolo
    
  3. Check the agreement list to see the status of the registered services. The agreement status shows the hand-off between the device and the exchange server. Agreements are normally received and accepted in less than a minute. When an agreement is accepted, the corresponding containers can begin running. The Horizon agent must first complete a docker pull operation for each Docker container image and verify its cryptographic signature with the Horizon exchange. After the container images for the agreement are downloaded and verified, an appropriate Docker network is created for the images, and then the containers run. When the containers are running, you can view their status by running the docker ps command (see the check after this list).

     hzn agreement list
    
  4. Optionally, you can unregister the currently running pattern so that you can deploy a different pattern. Unregistering a pattern stops the running containers on the edge device and restarts the Horizon service so that the device can accept new patterns. To unregister a pattern:

     hzn unregister -f
    
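
As a quick check while an agreement from step 3 is active, the service containers should also be visible in the local Docker runtime (container names vary by service):

    docker ps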

We have now deployed the YOLO object detection model on the device, and the device is ready to accept further models. With the YOLO model deployed on the TX2, whenever the camera detects a person, video streaming to the server can start.

Summary and next steps

We covered two key layers of the edge: the application layer and the device layer. Connectivity is another key component required to successfully implement the edge. In many cases, the edge is implemented where connectivity is not available or is not sufficient to meet the low-latency requirements of the edge nodes. In such cases, the key network components have to be deployed at the edge.

The next article in this edge computing series dives deeper into the network edge and the tooling that is needed to implement it. It discusses how the different layers come together, using a use case that requires all three layers: application, device, and network.