The first article in this edge computing series described a high-level edge computing architecture that identified the key layers of the edge including the device layer, application layer, network layer, and the cloud edge layer. In this article, we dive deeper into the application and device layers, and describe the tools you need to implement these layers. (The third article in this series will cover the network layer.)
As mentioned in the first article, the cloud edge is the source of workloads for the different edge layers, provides the management layer across those layers, and hosts the applications that handle processing that is simply not possible at the other edge nodes due to their resource limitations.
The device layer consists of small devices running on the edge. The application layer runs on the local edge and has greater compute power than the device layer. Let’s dive into the details of each of these two layers and the respective components in the layers.
Edge computing use case: Workplace safety on a factory floor
In this article, we describe how we implemented a workplace safety use case involving the application and device layers of the edge computing architecture.
In a particular factory, when employees enter a designated area, they must be wearing proper personal protective equipment (PPE), such as a hard hat. A solution is needed to monitor the designated area and issue an alert only when an employee is detected entering the area without wearing a hard hat. Otherwise, no alert is issued. To reduce load on the network, the video stream starts only when a person is detected.
To implement the architecture, the following needs to happen:
Models need to be trained to identify a person wearing a hard hat. This is accomplished using IBM Maximo Visual Inspection.
The models need to be containerized and deployed to the edge. This is accomplished using IBM Cloud Pak for Multicloud Management.
The models need to be integrated with a video analytics system. The video analytics system needs to be able to manage the video stream, determine if an individual is in a danger zone, then call the hard hat model to determine if the individual is wearing a hard hat, and fire an alert accordingly. This is accomplished using IBM Video Analytics.
Models need to be deployed to the camera to identify a human which will trigger the camera to start streaming. This is done using IBM Edge Application Manager.
Here’s an architecture diagram showing these four components:
Implementing the application layer
The application layer enables you to run applications on the edge. The complexity of the applications that can be run depends on the footprint of the edge server. The edge server can be an x86 server or an IBM Power Systems server that is often run on premises in an environment such as a retail store, a cellular tower, or another location outside of the core network or data center of the enterprise. The sizing of the servers depends on the workload that will be run.
Information from the device layer is sent to the application layer for further processing. Some of this information can then be sent to the cloud or other location.
The application layer is likely built on a containers-based infrastructure where common software services and middleware can run. For example, the application layer could be built on Red Hat OpenShift and have one or more IBM Cloud Paks installed on it where deployed containers run.
We will now look at how products such as Maximo Visual Inspection, IBM Cloud Pak for Multicloud Management, IBM Video Analytics, and IBM Edge Application Manager can be used to create a full end-to-end solution.
Creating a model using Maximo Visual Inspection
Maximo Visual Inspection is a video and image analysis platform that makes it easy for subject matter experts to train and deploy image classification and object detection models. We will see how to build a hardhat detection model using Maximo Visual Inspection.
- Create a set of videos with individuals wearing hardhats. Make sure to include varied scenarios with different lighting conditions.
- Log on to Maximo Visual Inspection, and click on Data Sets in the top left corner to create a dataset.
- Click on Create a new data set and provide a name like hardhat dataset for the data set.
- Import the images or videos that you created in step 1.
- To create an object detection model, click Objects in the menu on the left, and click Add Objects to create objects. Create an object called hardhat. If you have different colored hats that you want to recognize, you can create an object for each color (for example, white and blue hard hats).
- Click the Label Objects button. For videos in your data set, you can use the Auto Capture button to capture frames at desired time intervals. For each frame, click Box, choose the hardhat object that you just created, and draw a box around the hardhat. Repeat this step for all frames.
In general, the larger the data set, the better the accuracy of the model will be. If you do not have a lot of data, you can use the Augment Data button to create additional images using filters such as flip, blur, rotate, and so on.
- Once you are done labeling the images, click Train Model, and select the type of training as Object detection. You can choose from a number of options to optimize your model training and click the Train button. The training time depends on the size of data, type of model, and additional options selected.
- Once the model is trained, click the Deploy button. You can name your deployed model and choose to export it. Then, download the exported model as a zip file.
- The deployed hardhat model now appears in the Deployed Models tab where you can test the model either using the API endpoint displayed or by clicking the Open button and uploading a video to test if the hardhats are being detected.
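If you prefer to sanity-check the deployed model from a script rather than the UI, you can post a frame to the model's API endpoint and filter the response. The helper below is only a sketch: the response keys (classified, label, confidence) are hypothetical stand-ins for whatever your deployed model actually returns, so verify them against your server's real output.

```python
def hardhat_detections(response_json, min_confidence=0.8):
    """Filter an inference response down to confident hardhat detections.

    Assumes the response carries a 'classified' list of detections, each
    with 'label' and 'confidence' keys -- a hypothetical shape; check it
    against what your deployed model actually returns.
    """
    return [
        d for d in response_json.get("classified", [])
        if d.get("label") == "hardhat" and d.get("confidence", 0.0) >= min_confidence
    ]

# In practice, response_json would come from an HTTP POST of an image to the
# deployed model's API endpoint; here we use a canned example response.
sample = {"classified": [
    {"label": "hardhat", "confidence": 0.94},
    {"label": "hardhat", "confidence": 0.41},
    {"label": "person", "confidence": 0.99},
]}
hits = hardhat_detections(sample)
print(len(hits))  # 1 -- only the high-confidence hardhat detection remains
```

Filtering on a confidence threshold like this mirrors what you would do when wiring the model into a downstream alerting system.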
Containerizing the model using the Maximo Visual Inspection Inference server
Maximo Visual Inspection Inference server lets you quickly and easily deploy multiple trained models. We will use the inference server to create a Docker image of the hardhat model. This allows you to make the hardhat model available to others, such as customers or collaborators, and to run the model on other systems.
To install the inference server on a machine, download the latest Maximo Visual Inspection Inference server software. Navigate to the inference folder to install the inference server. See the installation documentation for detailed instructions.
cd visual-insight-infer-x86-188.8.131.52-ppa/
sudo yum install ./visual-insights-inference-184.108.40.206-455.5998b55.x86_64.rpm
/opt/ibm/vision-inference/bin/load_images.sh -f visual-insights-inference-x86_64-containers-220.127.116.11.tar
Use the deploy_zip_model.sh script to deploy a model exported from Maximo Visual Inspection on this system:
/opt/ibm/vision-inference/bin/deploy_zip_model.sh --model model_name --port port_number --gpu GPU_number location_of_exported_IVI_model
/opt/ibm/vision-inference/bin/deploy_zip_model.sh --model Hardhatmodel --port 6002 --gpu 0 /root/Hardhatmodel.zip
This command creates a Docker container.
Using the Docker container, create a Docker image. To do so, first obtain the container’s ID and then commit the Docker image:
docker ps | grep model_name
Copy the container ID from the output, and specify it on this command:
docker commit <container-id> docker_image_name:tag
For example, for our hardhat model, the docker commit command might look like this:
docker commit <container-id> hardhatmodel:v1
Save the Docker image you created in the above step and zip it to create a .tgz file using the following command:
docker save docker_image_name:tag | gzip > file_name.tgz
docker save hardhatmodel:v1 | gzip > Hardhatmodel.tgz
You can now move this .tgz file to any other system and run a docker load command to load the Docker image onto that system.
docker load < hardhatmodel.tgz
Deploying our model to the edge servers using IBM Cloud Pak for Multicloud Management
The IBM Cloud Pak for Multicloud Management, which runs on Red Hat OpenShift, provides consistent visibility, governance, and automation from on premises to the edge. Using IBM Cloud Pak for Multicloud Management, the operator can have rich views of how clusters operate within the environment. You can use this tutorial on IBM Cloud Garage to learn how to deploy and manage applications across clusters using IBM Cloud Pak for Multicloud Management.
To implement our use case, the hardhat model that you created in the previous section needs to be deployed to the edge servers. In our case, the model is deployed to IBM Cloud Private. The previously created hardhat model (in the .tgz file) is loaded on IBM Cloud Pak for Multicloud Management, and then can be deployed to multiple clusters using helm charts.
In the following steps, we will go through the process of deploying these Docker images to IBM Cloud Private using the helm charts.
Deploying the model from IBM Cloud Pak for Multicloud Management
Log in to IBM Cloud Pak for Multicloud Management, and SSH into the system.
Add the Docker image to the IBM Cloud Pak for Multicloud Management Private repository:
docker login <mcm-docker-repo>
docker load < hardhat.tgz
docker tag <img>:<tag> <mcm-docker-repo>/default/<img>:<tag>
docker push <mcm-docker-repo>/default/<img>:<tag>
Here, hardhat.tgz is the .tgz file that you created in the previous section. Make sure the file is transferred to IBM Cloud Pak for Multicloud Management.
Add image policies on the target cluster, which in our case is IBM Cloud Private. Log in to the target cluster's IBM Cloud Private console, and navigate to Manage > Resource Security > Image Policies > Add Image Policy. Then, add a name, set the scope as cluster, and add the registry as the IBM Cloud Pak for Multicloud Management private repo.
Add the private repo and the ca.crt file on the target cluster's file system.
SSH into the target cluster:
On the target cluster, create a directory for the private repo under /etc/docker/certs.d/, and then copy ca.crt from the hub cluster to the target cluster. On the local machine, run this command:
scp <mcm-user>@<mcm-ip-address>:/etc/docker/certs.d/<mcm-docker-repo>/ca.crt <icp-user>@<icp-ip-address>:/etc/docker/certs.d/<mcm-docker-repo>/
Run the following command on both the target cluster and the hub cluster to create the pull secret that is then used in the deployment.yaml file of the helm chart:
kubectl create secret docker-registry <secret-name> --docker-server=<mcm-docker-repo> --docker-username=<username> --docker-password=<password>
Add the IBM Cloud Pak for Multicloud Management IP address to the IBM Cloud Private hosts file:
Add a line like this with the IP address and host name:
Creating and publishing the helm chart
Create a helm chart using the following command. Use all lowercase letters for its name. This command automatically generates sample yaml files, including chart.yaml and values.yaml:
helm create my-app
Edit the chart.yaml file to specify the custom name and version (as you can see in the screen shot below).
Edit the values.yaml file to update the Docker image and node port information (as you can see in the screen shot below).
Edit the deployment.yaml file in the templates folder to add any additional parameters, like GPUs, in the resources section of the yaml file.
The following screen shot shows all four .yaml files that were created for our hardhat scenario.
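For orientation, a values.yaml for this kind of chart might look roughly like the sketch below. The repository, tag, secret name, and ports are placeholders, not the exact values from our environment:

```yaml
# Hypothetical values.yaml sketch -- substitute your own registry, secret, and ports
replicaCount: 1
image:
  repository: <mcm-docker-repo>/default/hardhatmodel
  tag: v1
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: <secret-name>
service:
  type: NodePort
  port: 6002
  nodePort: 30602
```

The deployment.yaml in the templates folder would then reference these values (for example, .Values.image.repository) and add GPU limits under its resources section if needed.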
Now, you need to package and publish the helm chart.
Display the property values set for the helm chart by using the helm template command:
helm template my-app
Change my-app to be whatever you used for your helm chart repository name.
Package your helm chart into a .tgz file using the following command:
helm package my-app
Create a public GitHub repo and clone it to your local folder.
git clone <GitHub URL>
Create an empty index.yaml file and push it to the repo:
touch index.yaml
git add index.yaml
git commit -a -m "add index.yaml"
git push
Add the helm chart package to the GitHub repo, and update the index.yaml file:
helm repo index helm-example/ --url "<GitHub URL>"
Your GitHub repo now has the helm package (.tgz file) and the updated index.yaml file.
Add the helm repository to IBM Cloud Pak for Multicloud Management. In your browser for IBM Cloud Pak for Multicloud Management, navigate to Manage > Helm Repositories > Add Repository, and add the name and URL of your GitHub repo.
Publish the helm chart from IBM Cloud Pak for Multicloud Management to IBM Cloud Private. Navigate to the Catalog, search for and click on your chart name. Then, click Configure and select the IBM Cloud Private cluster that is linked to your IBM Cloud Pak for Multicloud Management. Finally, navigate to the Workloads > Helm release section to find your release.
Now that we have trained a model and deployed it to the edge server, you can use that model to recognize hard hats.
Use the trained model to recognize hard hats using IBM Video Analytics
Video data can be processed at the edge, either at the application layer or the device layer. Processing video data at the edge can help reduce latency, lower bandwidth consumption, and enable the user to make faster, more informed decisions.
IBM Video Analytics is used to manage the video stream from a camera. It is also used to define an object to detect as well as the area to designate as a danger zone. Once it detects a person entering the danger zone area, it makes a call to the Maximo Visual Inspection hard hat model to determine whether that individual is wearing a hard hat. If a person is not wearing a hard hat, IBM Video Analytics fires an alert.
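The alerting decision described above can be sketched as a small function. This is an illustrative reconstruction of the logic, not IBM Video Analytics code: the inputs stand in for what the zone detection and the hard hat model call actually provide.

```python
def should_alert(person_in_zone, detected_hardhat_labels):
    """Return True only when a person is inside the danger zone and the
    hard hat model reported no recognized hard hat on them."""
    if not person_in_zone:
        # Nobody entered the zone, so the hard hat model is never consulted.
        return False
    return len(detected_hardhat_labels) == 0

print(should_alert(True, ["whitehardhat"]))  # False: person is protected
print(should_alert(True, []))                # True: person without a hard hat
print(should_alert(False, []))               # False: nobody in the zone
```

Checking the zone first mirrors the pipeline in the article: the model is only called once a person has actually entered the designated area.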
You’ll need to install and configure these key components of IBM Video Analytics:
- Metadata Ingestion, Lookup, and Signaling
- Semantic Streams Engine
- Deep Learning Engine
These components can be set up to run at the application layer on a single server.
Configure a channel. Set up a camera view where a danger zone can be defined and a person can be detected when entering the defined area.
Configure an analytics profile. Set up a new or update an existing AnalyticProfile for tracking whether a person is wearing a hard hat. The following example illustrates a HardHat Tracking profile that will process analytics results from Maximo Visual Inspection, dump the result image in the specified directory, and trigger a tripwire alert if no white or blue hard hat was found.
Configure your alerts. Set up at least one type of alert, such as a region alert to define the danger zone area. The figure below shows sample screens for a HardHat Tracking analytic profile being registered and assigned, and how a tripwire alert can be configured to define an area of interest.
Configure the Deep Learning Engine in IBM Video Analytics to call the deployed model in Maximo Visual Inspection. This can be useful if you are running this engine on a system without a GPU and you have installed Maximo Visual Inspection on a separate system with a GPU. The Deep Learning Engine in IBM Video Analytics can run local models and remote Maximo Visual Inspection models. In order to use any model in IBM Visual Analytics, the model must be configured in the Deep Learning Engine configuration files that include docker compose YAML, nginx, and JSON for each model as shown in the figure below. For more information, see the IBM Video Analytics documentation on Managing Models in the Deep Learning Engine.
When you are done configuring the components, restart IBM Video Analytics. Then, use the command line interface to verify that the Deep Learning Engine can call Maximo Visual Inspection successfully. In the following commands, substitute the image file name and URL to match your setup.
Verify a direct call to Maximo Visual Inspection running on a server, for example svrX, port 6005:
curl -F "files=@<image_file>" http://<svrX>:6005/inference
Verify a Deep Learning Engine call to Maximo Visual Inspection:
curl -F "files=@<image_file>" http://localhost:14001/detect/hardhat
Implementing the device layer
The edge device layer will contain devices that have compute and storage power and can run containers. These devices can run relatively simple applications to gather information, run analytics, apply AI rules, and even store some data locally to support operations at the edge. The devices could handle analysis and real-time inferencing without involvement of the edge server or the enterprise region.
Devices can be small. Examples include smart thermostats, smart doorbells, home cameras or cameras on automobiles, and augmented reality or virtual reality glasses. Devices can also be large, such as industrial robots, automobiles, smart buildings, and oil platforms. Edge computing analyzes the data at the device source.
The primary product for the device layer is IBM Edge Application Manager. IBM Edge Application Manager provides a new architecture for edge node management. With IBM Edge Application Manager, you can quickly, autonomously, and securely deploy and manage enterprise application workloads at the edge and at massive scale.
On the device layer, any tools or components must be able to manage workloads placed across clusters and the device edge. While many edge devices are capable of running sophisticated workloads such as machine learning, video analytics, and IoT services, a workload that is too large for the device layer should be placed at the application layer instead. The use of open-source components is key at the device layer, because our edge solution must be portable across private, public, and edge clouds.
In our use case, we are using Jetson TX2 as the smart camera. To implement the use case, this edge device needs to be registered to IBM Edge Application Manager.
In this section, we will go through the steps involved in installing the Open Horizon agent on our device and registering the device to the IBM Edge Application Manager Exchange so that we can deploy models on the device. Once our TX2 device is registered to IBM Edge Application Manager, the object detection YOLO model can be deployed, which can then help identify human beings in the danger zone and start the stream to the server.
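The device-side behavior (stream only while a person is in view) can be sketched as a simple per-frame state update. The label name person and the list-of-labels input are assumptions for illustration; the real YOLO output on the TX2 is richer.

```python
def update_streaming(currently_streaming, frame_labels):
    """Start streaming when YOLO reports a person in the frame; stop
    when no person is seen, to save network bandwidth."""
    person_seen = "person" in frame_labels
    if person_seen and not currently_streaming:
        return True   # person detected: start streaming to the edge server
    if not person_seen and currently_streaming:
        return False  # person gone: stop the stream
    return currently_streaming

streaming = False
for labels in ([], ["person"], ["person", "car"], []):
    streaming = update_streaming(streaming, labels)
print(streaming)  # False -- the stream stopped after the person left the frame
```

In a real deployment you would likely debounce this toggle over several frames so a single missed detection does not cut the stream.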
Configure your edge device
Log in to the device, and switch to a user that has root privileges.
Verify that your Docker version is 18.06.01-ce or later. Some Linux distributions can be set up to run older Docker versions. Run the docker --version command to check your installed Docker version. If necessary, update to the current version of Docker by running the following command:
curl -fsSL get.docker.com | sh
Run the docker --version command again:
You should see output similar to this:
Docker version 18.06.1-ce, build e68fc7a
Install the Open Horizon agent on the device. Copy the three relevant Horizon Debian packages for your operating system and architecture (horizon-cli, horizon, and bluehorizon) from the server where IBM Edge Application Manager is installed to your device. These packages are in the ibm-edge-computing-x86_64-<VERSION>.tar.gz release file. After you’ve installed IBM Edge Application Manager on the server, the required packages are located in the following directory: /ibm-edge-computing-x86_64-<VERSION>/horizon-edge-packages/linux/<OS>/<ARCH>/. Install the copied Horizon Debian packages by running one of the following commands (which show our TX2 device):
dpkg -i *horizon*.deb
apt install ./*horizon*.deb
Register the device to IBM Edge Application Manager
Stop the agent.
systemctl stop horizon.service
Point your edge device horizon agent to IBM Edge Application Manager by creating or editing /etc/default/horizon with this content (substituting the value for $ICP_URL that you used above):
Replace the placeholder values with your respective values. Then, copy the ICP certificate into the trusted certificate store and update the CA certificates:
sudo cp icp.crt /usr/local/share/ca-certificates && sudo update-ca-certificates
Restart the agent by running the following command:
systemctl restart horizon.service
Verify the agent is running and properly configured by issuing these commands:
hzn version hzn exchange version hzn node list
To create an API key:
cloudctl login <ICP_URL>
cloudctl iam api-key-create iamapikey
Set these environment variables, copying the API key that is generated after running the above command:
export ICP_URL='<ICP_URL>'
export HZN_ORG_ID=IBM
export HZN_EXCHANGE_USER_AUTH='<apikey-name>:<apikey-value>'
Confirm the node with IBM Edge Application Manager, and verify that the environment variables are set correctly:
hzn exchange user list
View the list of sample edge service deployment patterns by using either of these commands:
hzn exchange pattern list
Or, this one:
hzn exchange pattern list HZN_ORG_ID
At this point, your edge device is linked to IBM Edge Application Manager. Run the following commands to register your device with IBM Edge Application Manager so that it can receive services, patterns, and policies. Create a unique node ID and token for each device in the HZN_EXCHANGE_NODE_AUTH environment variable:
export HZN_EXCHANGE_NODE_AUTH="gsctx2nov27:gsctx2tokennov27"
hzn exchange node create -n $HZN_EXCHANGE_NODE_AUTH
hzn exchange node confirm
Register patterns and deploy models to your edge device
Now that the edge device is registered to IBM Edge Application Manager, we can register edge patterns from the exchange server. An edge pattern is a descriptor file that describes which Docker images should be downloaded and how they should be run on the device. Registering patterns on the device downloads the associated services and Docker images that are required to run the corresponding models on the edge device. These patterns and services are architecture specific.
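For orientation, a pattern is roughly a JSON descriptor like the hypothetical sketch below. The service name, organization, architecture, and version shown are placeholders; the authoritative schema is the one used by the hzn exchange pattern tooling.

```json
{
  "label": "YOLO object detection pattern",
  "services": [
    {
      "serviceUrl": "ibm.yolo",
      "serviceOrgid": "IBM",
      "serviceArch": "arm64",
      "serviceVersions": [
        { "version": "1.0.0" }
      ]
    }
  ]
}
```

The serviceArch field is why patterns are architecture specific: the agent only agrees to run services whose architecture matches the device.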
Get a list of all the edge patterns on the exchange using the following command:
hzn exchange pattern list
Register a pattern or service from the above list of the patterns that are available on IBM Edge Application Manager:
hzn register -p pattern-SERVICE_NAME-$(hzn architecture)
hzn register -p IBM/pattern-ibm.yolo
Look at the agreement list to see the status of registered services. This agreement status shows the hand-off between the device and the exchange server. Agreements are normally created and accepted in less than a minute. When an agreement is accepted, the corresponding containers can begin running. The Horizon agent must first complete a docker pull operation on each Docker container image. The agent must also verify the cryptographic signature with the Horizon exchange. After the container images for the agreement are downloaded and verified, an appropriate Docker network is created for the images. Then, the containers can run. When the containers are running, you can view the container status by running the docker ps command.
hzn agreement list
Optionally, you can unregister the currently running pattern so that you can deploy a different pattern. Unregistering a pattern stops the running containers on the edge device and restarts the horizon service to make the device available to accept new patterns. To unregister a pattern:
hzn unregister -f
We have now deployed the object detection (YOLO) model on the device, and the device is ready to accept further models. With the YOLO model deployed on the TX2, whenever the camera detects a person, video streaming to the server can start.
Summary and next steps
We covered two key components of the edge: the application layer and the device layer. Connectivity to the edge is a key component required to successfully implement the edge. In many cases, the edge will be implemented where connectivity is not available or is not sufficient to meet the low latency requirements for the edge nodes. In such cases, the key network components have to be deployed on the edge.
Our next article in this edge computing series dives deeper into the network edge and the tooling that is needed to implement it. This article discusses how the different layers come together using a use case that requires all three layers: application, device, and network.