Explore the details of an edge computing architecture for a quality inspection system

In our previous article, “Enabling distributed AI for quality inspection in manufacturing with edge computing,” we provided an overview of an AI-assisted quality inspection system. In this article, we will dive deeper into the architecture and development of the final edge computing solution.

The following figure shows the system context diagram for this AI-assisted quality inspection system.

Figure 1. System context diagram of quality inspection system


The main function of our edge computing project is to distribute AI models from the cloud to edge devices used in the manufacturing area. The solution uses IBM Edge Application Manager (IEAM) to deploy AI workloads to edge devices, IBM Maximo Visual Inspection (previously known as IBM PowerAI Vision) for model training, and NVIDIA Jetson TX2 devices for inferencing at the edge. Technical details on IEAM and on the AI models trained in Maximo Visual Inspection are also discussed. The solution, which combines IBM AI and edge products, is being deployed in production and scaled out across edge devices used in the manufacturing of IBM systems.

The intended audience for this article is architects, developers, and administrators who are engaged in edge computing projects.

High-level architecture for our edge computing solution

Figure 2 shows the quality inspection system architecture, based on the IBM Edge Computing reference architecture, in which components are implemented across the following two environments:

  • An enterprise hybrid cloud environment
  • A set of edge services

The enterprise hybrid cloud environment is responsible for model training, management, extraction, and deployment. It also has the capability to manage devices, roles, and users.

The edge device components are responsible for AI inferencing, reporting inference results, and providing device status information.

Figure 2. Solution architecture based on the IBM Edge Computing reference architecture


In this solution architecture, the quality inspection system that we developed included these components: a main application, a model extraction service, a model management service, a model repository, AI models, and an edge application. IBM Edge Application Manager provided the following components in our architecture: a management hub, an edge agent, a device registry, a model repository, and multicloud support.

Let’s talk through the major components of our edge computing solution architecture:

  • Cloud infrastructure
  • Edge computing
  • Data model
  • Edge device monitoring
  • Authentication and authorization
  • Edge microservices

Cloud infrastructure

One of the first requirements for our edge computing solution is the availability of our main quality inspection application and its associated microservices, which are responsible for model extraction and deployment, user management, and edge device monitoring. The main application and its microservices run in Docker containers on the IBM Cloud Kubernetes Service to ensure high availability. We are in the process of migrating to the Red Hat OpenShift Container Platform.

Edge computing

IBM Edge Application Manager (IEAM) is an edge computing platform for managing multiple services on top of tens of thousands of edge devices. IEAM is based on the open source project, Open Horizon, which is an LF Edge project (LF Edge is an umbrella organization of The Linux Foundation).

The IEAM command line interface (the hzn command) can be used to manage devices and to deploy new patterns and services. However, IEAM also provides REST APIs with the same functionality as the CLI. By using the REST APIs, we were able to build an integrated service, the edge connector, without chaining together multiple shell procedures. (A sketch of such a REST call appears later in this article.)

Data model

The data model for the edge computing solution includes the following data:

  • IBM Maximo Visual Inspection credentials (to support multiple installations)
  • User roles
  • Rich inference results
  • List of devices, their configuration, and utilization
  • List of deployed services

We currently use MongoDB in IBM Cloud® as the database to store the data. Other cloud storage services or on-premises data storage can be considered as well.
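
For illustration, the following Python sketch shows a hypothetical shape for one inference-result document stored in MongoDB. The field names, collection names, and the MONGODB_URI environment variable are assumptions for this sketch, not our production schema.

    import os
    from datetime import datetime, timezone
    from pymongo import MongoClient

    # Connection string for the MongoDB instance in IBM Cloud,
    # supplied via an environment variable (name is an assumption).
    client = MongoClient(os.environ["MONGODB_URI"])
    results = client["quality_inspection"]["inference_results"]

    # Hypothetical shape of one inference-result document.
    results.insert_one({
        "device_id": "jetson-tx2-01",   # assumed device naming
        "model_id": "<model_id>",
        "image": "<imageName>",
        "detections": [
            {"label": "fbends", "confidence": 0.99,
             "xmin": 55, "ymin": 282, "xmax": 71, "ymax": 303},
        ],
        "created_at": datetime.now(timezone.utc),
    })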

Edge device monitoring

We used the MQ Telemetry Transport (MQTT) protocol to collect information about all edge devices. MQTT is a lightweight protocol that is well suited to collecting status metadata. Devices report resource utilization (RAM, HDD, CPU) and publish information about inference results.
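
As a sketch, a device-status publisher could look like the following. The broker host, topic layout, and 30-second interval are assumptions for this example (paho-mqtt 1.x API), not our production Edge-Mon service.

    # Minimal device-status publisher over MQTT (paho-mqtt 1.x API).
    # Broker host, topic layout, and interval are assumptions.
    import json
    import time
    import paho.mqtt.client as mqtt
    import psutil

    DEVICE_ID = "jetson-tx2-01"            # hypothetical device name
    BROKER_HOST = "mqtt.example.internal"  # hypothetical broker host

    client = mqtt.Client(client_id=DEVICE_ID)
    client.connect(BROKER_HOST, 1883)
    client.loop_start()

    while True:
        payload = {
            "device": DEVICE_ID,
            "cpu_percent": psutil.cpu_percent(interval=1),
            "ram_percent": psutil.virtual_memory().percent,
            "hdd_percent": psutil.disk_usage("/").percent,
            "ts": int(time.time()),
        }
        # Status metadata is small, which is why lightweight MQTT fits.
        client.publish(f"devices/{DEVICE_ID}/status", json.dumps(payload))
        time.sleep(30)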

Authentication and authorization

For authentication, we used an enterprise single sign-on (SSO) solution to authenticate users in both the main application in IBM Cloud and in the edge application on the edge devices. Users are authorized based on defined roles to perform allowed actions.

Edge microservices

To fully leverage the advantages of IEAM, we containerized the edge application so that we could easily and efficiently deploy it to multiple edge devices. In particular, we split the edge application into the following edge services (refer to Figure 3; a sketch of the model-sync flow follows the figure):

  • model-sync service. This service is responsible for downloading a specified model version from object storage, decrypting and unpacking it, and making it available for inference work.
  • model-detector service. This inference service runs AI analysis jobs and can switch between different types of models.
  • router. This edge service provides an integration point between the user interface and the deployed model.
  • Edge Dashboard. This UI allows the quality inspector to perform inferencing.
  • Auth-service. This service provides user authorization for the Edge Dashboard.
  • Edge-Mon. This service enables edge device monitoring through MQTT.

Figure 3. Edge microservices architecture

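To make the model-sync flow concrete, here is a minimal Python sketch of its download-decrypt-unpack steps. It is illustrative only: the object-storage URL scheme, the Fernet symmetric cipher, and the file paths are assumptions, not the production implementation.

    # Illustrative model-sync flow: download, decrypt, unpack.
    # The URL scheme, Fernet cipher, and paths are assumptions.
    import tarfile
    import requests
    from cryptography.fernet import Fernet

    def sync_model(model_url: str, key: bytes, dest_dir: str = "/models") -> None:
        # Download the packaged model version from object storage.
        resp = requests.get(model_url, timeout=60)
        resp.raise_for_status()

        # Decrypt the payload (the production cipher may differ).
        archive = Fernet(key).decrypt(resp.content)
        with open("/tmp/model.tar.gz", "wb") as f:
            f.write(archive)

        # Unpack so the model-detector service can load the weights.
        with tarfile.open("/tmp/model.tar.gz") as tar:
            tar.extractall(dest_dir)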

Communicating between edge devices and the IEAM server using IEAM APIs

IBM Edge Application Manager runs on the Red Hat OpenShift Container Platform and provides the capability to manage multiple services on edge devices and edge clusters. In the context of our project, IEAM provided the capability to orchestrate three microservices on top of the Jetson TX2 devices. It is responsible for device and service management at the application level.

IEAM uses agents on various devices to manage services. Services are represented as Docker containers, so we were able to apply the same best practices for microservices development that are used in cloud and virtualization settings. As a result, we decreased traffic from the manufacturing area to the cloud by organizing localized data processing using models from Maximo Visual Inspection.

The Horizon command line interface (hzn CLI) can be used to configure edge devices and to register new patterns and services. This is fine for testing; however, the scripting interface and the CLI are not suited for production use. For example, a block of operations and interactions with the hzn CLI has to be invoked to communicate with the edge devices (for example, deploying services or getting the list of devices).

For production use, a Node.js module communicates with the IEAM server over REST APIs without interacting with the hzn CLI on the edge device. The basic flow between the edge device and management hub is described in the remainder of this section.

To send most of the requests to the IEAM management hub, the following information was required:

  • Organization ID (HZN_ORG_ID)
  • Authorized user credentials (HZN_EXCHANGE_USER_AUTH)

The IEAM client API connects to the IEAM Exchange service, the main component for managing IEAM entities, and to the IEAM Cloud Sync Service (CSS), where models are stored.

We used HZN_EXCHANGE_API and HZN_MMS_API (imported as environment variables) as the base URLs for our API connections. Here are example values of these URLs:

  • HZN_EXCHANGE_API=https://<ieam_service_host>:<ieam_service_port>/ec-exchange/v1/
  • HZN_MMS_API=https://<ieam_service_host>:<ieam_service_port>/ec-css/
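
As an illustration of this flow, the following Python sketch lists the edge devices registered in our organization through the Exchange REST API. Our production edge connector is written in Node.js; this Python version is a minimal sketch, and the endpoint path follows the Open Horizon Exchange API.

    # Illustrative REST call to the IEAM Exchange (the production
    # edge connector module is written in Node.js).
    import os
    import requests

    HZN_ORG_ID = os.environ["HZN_ORG_ID"]
    HZN_EXCHANGE_USER_AUTH = os.environ["HZN_EXCHANGE_USER_AUTH"]  # "user:password"
    HZN_EXCHANGE_API = os.environ["HZN_EXCHANGE_API"]

    # The Exchange expects HTTP basic auth in the form "org/user:password".
    user, password = f"{HZN_ORG_ID}/{HZN_EXCHANGE_USER_AUTH}".split(":", 1)

    # List the edge devices (nodes) registered in our organization.
    resp = requests.get(f"{HZN_EXCHANGE_API}orgs/{HZN_ORG_ID}/nodes",
                        auth=(user, password))
    resp.raise_for_status()
    for node_id, node in resp.json()["nodes"].items():
        print(node_id, node.get("name"))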

Optimizing and training models with Maximo Visual Inspection

Our edge computing solution provides distributed AI for quality inspection in manufacturing. The main function of the project is to distribute AI models from the cloud to edge devices used in the manufacturing area. The models are trained in IBM Maximo Visual Inspection using a training data set collected from actual quality inspections in an IBM manufacturing facility.

Both the Jetson TX2 and the Jetson Nano were evaluated as edge device candidates. We confirmed that the Jetson Nano was not suitable for Faster R-CNN model inference because of insufficient memory, even with disk buffering.

IBM Maximo Visual Inspection provides a user-friendly interface to create and train computer vision models. It provides different types of optimization and can train various model types, including:

  • Faster R-CNN
  • Tiny YOLO v2
  • Detectron
  • Single Shot Detector (SSD)

IBM Maximo Visual Inspection also enables users to optimize different model hyperparameters, such as:

  • max iteration
  • momentum
  • ratio
  • learning rate
  • weight decay

To improve model accuracy, different data augmentation methods can be applied during model training using Maximo Visual Inspection:

  • Blur
  • Sharpen
  • Color
  • Crop
  • Vertical flip
  • Horizontal flip
  • Rotate
  • Noise
  • Image size

The inference on the Jetson TX2 runs as a Docker container. The Docker image is based on Ubuntu 18.04 with the Python packages required for inference installed (caffe, py-faster-rcnn, pytorch, detectron). To run the inference from a remote host, we use the uwsgi web server and the Python Flask framework to expose REST APIs (a minimal sketch of these endpoints follows the API description below):

  • GET REST API is used to get the status of the inference container:

      <Jetson IP address>:8081/inference

    Example return values:

      Ready - Indicates that the service is ready for an inference POST request
      Busy - Indicates that an active inference process is currently running
    
  • POST REST API is used for running the inference. It takes two parameters (model and image):

      curl -H "Content-Type: application/json" -X POST <Jetson IP address>:8081/inference -d '{"model":"<model_id>", "image": "<imageName>"}'
    

    It returns the list of detected bad connectors with their coordinates and confidence levels:

      [{'label': 'fbends', 'confidence': 1.0, 'xmin': 372, 'ymin': 189, 'xmax': 389, 'ymax': 211}, {'label': 'fbends', 'confidence': 0.99, 'xmin': 55, 'ymin': 282, 'xmax': 71, 'ymax': 303},{'label': 'fbends', 'confidence': 0.96, 'xmin': 65, 'ymin': 283, 'xmax': 80, 'ymax': 300}]
    
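For reference, a minimal Flask sketch of these two endpoints might look like the following. It is illustrative, not the production container: run_inference is a placeholder for the model-specific detection code (caffe, py-faster-rcnn, detectron), and the busy flag is a simplification of the real container's state handling.

    # Minimal sketch of the inference endpoints (illustrative only;
    # run_inference is a placeholder, not the production code).
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    state = {"busy": False}

    def run_inference(model_id, image_name):
        # Placeholder for the model-specific detection code.
        return [{"label": "fbends", "confidence": 0.99,
                 "xmin": 372, "ymin": 189, "xmax": 389, "ymax": 211}]

    @app.route("/inference", methods=["GET"])
    def status():
        # Report whether the container can accept a new inference request.
        return "Busy" if state["busy"] else "Ready"

    @app.route("/inference", methods=["POST"])
    def infer():
        body = request.get_json()
        state["busy"] = True
        try:
            detections = run_inference(body["model"], body["image"])
        finally:
            state["busy"] = False
        # Return the list of detections with coordinates and confidence.
        return jsonify(detections)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8081)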

We used a data set of 424 JPG files with two types of labels for model training:

  • fbends: 1118 (bad connectors)
  • good: 62 (good connectors)

Four models were trained in Maximo Visual Inspection on the data set above and used on the Jetson TX2 for inference, with the following results.

  • Faster R-CNN and Detectron models are optimized for accuracy, and the inferencing results show high accuracy on our test set. Both models use rectangular bounding boxes to label the objects. Detectron models can also use objects labeled with polygons (segmentation) for greater training accuracy. However, training on a data set that uses polygon labels takes longer than training with rectangular bounding boxes. In our scenario, we disabled segmentation, so Maximo Visual Inspection used rectangles instead of polygons.

  • The low inferencing accuracy from Tiny YOLO and SSD is expected. The Tiny YOLO v2 model is primarily optimized for speed and can run almost anywhere, but it might not be as accurate as models optimized for accuracy, especially for a use case such as ours in which very small objects are classified. The SSD model is suitable for real-time inference and embedded devices. It is almost as fast as YOLO but not as accurate as Faster R-CNN.

The training and inference results for each model are summarized in the following table and in the bar charts in Figure 4.

Model        | Inference memory usage (GB) | Training time (hours) | Accuracy (%) | Model size (MB)
------------ | --------------------------- | --------------------- | ------------ | ---------------
Faster R-CNN | 2.140                       | 1.0                   | 97           | 546.9
SSD          | 0.925                       | 1.5                   | 48           | 107.3
Detectron    | 3.197                       | 0.33                  | 99           | 338.1
Tiny YOLO v2 | N/A                         | 7.0                   | 1            | 63.1

Figure 4. Summary of model training and inferencing results

Summary and next steps

In this article, we discussed a solution architecture for distributed AI for quality inspection in manufacturing, along with technical details on the IBM Edge Application Manager CLI and APIs and on the AI models trained in IBM Maximo Visual Inspection. We hope this article is helpful to readers who are developing edge computing applications.

In our next article, we will take a deeper look at benchmarking edge device inferencing performance across different types of AI models, which is critical for edge device selection in edge computing application development.

You can contact Christine Ouyang (couyang@us.ibm.com) for more information on this solution.

Acknowledgements

The authors would like to acknowledge the contributions and reviews of this article by Ekaterina Krivtsova and Dmitry Gorbachev.