
Use AI to assess construction quality issues that impact home safety

Hurricanes and typhoons are growing stronger as a result of climate change, and together with earthquakes they continue to disproportionately impact people living in emerging nations, where substandard housing is commonplace. ISAC-SIMO helps builders, local officials, and homeowners assess the construction quality of newly built or retrofitted homes and detect construction issues before they become life-threatening.

Build Change, an organization dedicated to preventing housing loss caused by disasters, placed second in the 2018 Call for Code Global Challenge with their solution PD3R (Post-Disaster Rapid Response Retrofit). With the support of Call for Code, Build Change has extended PD3R technology to develop ISAC-SIMO, which is now hosted by The Linux® Foundation as an open source project.

Learning objectives

In this tutorial, you learn how to:

  • Run the ISAC-SIMO app
  • Test existing checks developed using visual recognition models and TensorFlow machine learning models that are trained with construction material images to assess the quality of elements in your own building
  • Customize ISAC-SIMO by training a new model to create a new check

Prerequisites

  1. Register for an IBM Cloud account, and at the same time join the Call for Code community of over 400,000 developers to build new skills and contribute to open source projects supported by The Linux Foundation.
  2. The ISAC-SIMO mobile app can be tested on an Android device or using an Android emulator. Alternatively, the checks can also be tested directly through the web platform with or without an account registration.
  3. Register as an admin in ISAC-SIMO. Refer to the documentation page in ISAC-SIMO for more information.

Estimated time

It should take approximately 1 hour to complete this tutorial.

About ISAC-SIMO

ISAC stands for Intelligent Supervision Assistant for Construction. SIMO stands for its Spanish translation: Sistema Inteligente de Monitoreo de Obra.

Objective: The objective of ISAC-SIMO is to provide access to safe construction practices and quality checks to homeowners, builders, and local inspectors.

Housing typology: The tool was developed specifically for informal or non-engineered buildings of confined masonry typology that might not have pre-existing house designs or plans. Therefore, the quality checks have been tailored for a confined masonry building. However, the platform can be used to add new models and quality checks that can be applied to other contexts and construction types.

Technology: The technology comprises two components: a mobile application and a multifunctional web interface. The mobile app helps the field users to validate the quality of key construction elements. The web interface enables construction project staff to view the results from field users to quickly identify where an intervention is needed, and facilitates admin users to manage the checks and image-processing pipelines that are implemented in the back end.

Mobile and desktop interfaces

Quality checks: In the context of this tool, a “quality check” stands for the assessment of a specific construction element and its categorization as compliant or non-compliant (“Go” or “No Go”) according to the general recommended guidelines for confined masonry construction. To develop an automated quality assurance tool, the construction supervision task was broken down into a series of visual quality checks that individually assess the quality of key construction elements against the recommended construction guidelines.

The following image shows the components of a confined masonry house.

Components of a confined masonry house

Vision: ISAC-SIMO’s vision is to create an intelligent quality assurance tool to help promote safe construction practices around the world, especially in areas with a lack of technical assistance. To achieve this, we aim to create a catalog of checks that is applicable to a wide range of contexts around the world. We invite the open source community to contribute to this project and support the development of new checks, the improvement of both the mobile app and web platform interfaces, or crowdsourcing of the image data set for future machine learning training. Learn more about how you can contribute to ISAC-SIMO on the Call to Action for Developers page.

Steps

Use the app to test the existing models on your building

Get familiar with the ISAC-SIMO user experience by running the app and testing it on construction elements around you. To test the checks, you can either test directly through the web platform or use the stand-alone ISAC-SIMO mobile app.

Test with a browser

  1. Using any device, navigate to https://www.isac-simo.net/.
  2. Scroll down to the bottom of the page, and click TRY NOW.

    Join ISAC-SIMO and test an image

This opens a list of globally applicable checks that you can perform on construction elements of a confined masonry house. If you have access to some sample rebar images or are close to a rebar object, you can select one of the rebar checks. Alternatively, if you are close to a brick wall or have access to a sample brick wall image, you can select the wall check. Some examples of acceptable images for each check are shown in the following table.

| Check | Description | Sample image |
| ----- | ----------- | ------------ |
| Rebar Rust Detection | Checks for the presence or absence of rust in rebar | Rebar sample |
| Rebar Shape | Checks the shape of the rebar stirrup | Rebar shape |
| Rebar Texture | Checks for the presence or absence of ribs in rebar | Rebar texture |
| Wall Check | Checks the bond pattern (how the bricks are stacked) and the vertical and horizontal mortar joint thickness (bed and head joint thickness) in a brick wall | Wall check |
  3. Select the wanted check, and click Choose File to upload a picture similar to the examples shown previously. After it’s uploaded, the image can be further cropped or rotated before submitting by clicking Test Image.

    Test the image

  4. After it’s submitted, the image passes through the corresponding implementation pipeline of image processing, machine learning models, and postprocessing steps for the selected check. The final calculated result is shown in the app as “Go” if found to be compliant or “No Go” if found to be non-compliant.

    Go result

Test with a mobile app

  1. Download the ISAC-SIMO mobile app, and install the app on an Android mobile device or an Android emulator.

  2. Go to the app, and click Skip Authentication to test the app without registering for an account. Alternatively, you can log in after registering for a general user account.

    Skip Authentication

  3. Select Quality Check, and choose one of the checks to test. By default, this loads the default checks listed under the global project.

    Quality Checks

  4. Upload an existing image or take a new picture following the provided instructions. Crop or adjust the image, if needed.

    Rebar shape instructions

  5. After it’s submitted, the image passes through the corresponding implementation pipeline of image processing, machine learning models, and postprocessing steps for the selected check. The final calculated result is shown in the app as “Go” if found to be compliant or “No Go” if found to be non-compliant.

    Result screen

If you would like to learn more, you can find detailed mobile app usage documentation with images on the ISAC-SIMO documentation site. You can also review documentation on setting up checks, models, projects, and more.

Understand the building blocks of a quality check

You can use the ISAC-SIMO web interface to manage an existing check, add a new check, or crowdsource a new image data set for future machine learning training or quality check development. You can also use it to set up a new project to implement custom checks and models in the mobile app or add new users to the project.

Web platform dashboard

When you set up a check in the web platform, the exact implementation pipeline might vary depending on the needs of each check. Generally, a quality check combines machine learning models (such as object detection, segmentation, or classification models) with Python scripts that perform image processing or compute the final result of an assessment. The number of stages in a pipeline can be adapted as needed to perform the assessment. In general, a check can be implemented as a three-step pipeline:

  1. Object detection (optional): Users might take photos of a broad area of a construction site instead of just a single element. To handle this, you can implement an object detection model that automatically detects the construction element of interest within an image or video, without requiring the user to crop out the specific region being assessed before submission. Alternatively, you can skip this step by adding instructions for the user to follow while taking a picture on site, specifying the object to be assessed.

  2. Preprocessing (optional): Implement a preprocessing Python script to 1) extract the region of interest based on the detected construction element (if step 1 is implemented); and 2) obtain a mask image using a segmentation model, opencv library, or other computer vision libraries before carrying out further analysis.

    If you are training a model that doesn’t need to assess all of the features of the original image (for instance, when assessing the shape of a rebar stirrup, you don’t need the texture or the color of the rebar to make the assessment), it might be easier to preprocess the raw image before further analysis or training. For instance, in the case of wall images, there can be a lot of variation in the texture and color of the bricks, which makes it harder to create a generalizable function for assessing the key features and to train an accurate model with a limited data set. In other cases, when using a data set composed of a mix of synthetically generated and real images, preprocessing the image helps create a uniform quality of data by bridging the gap in real and synthetic images.

  3. Postprocessing (required): Implement a postprocessing Python script to analyze the raw image, or the processed image from step 2, and compute compliance or non-compliance as per the requirements. You can use the postprocessing Python script to calculate the final output based on the predictions from the trained machine learning models or the results from the opencv library.
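The three-step pipeline above can be sketched in Python. The stage functions below are hypothetical placeholders — in a real check they would wrap a detection model, a preprocessing script, and a trained classifier — and the scoring rule is purely illustrative:

```python
# Sketch of a three-stage quality-check pipeline, mirroring the steps above.
# All stage functions are hypothetical placeholders for illustration.

def detect_element(image):
    """Step 1 (optional): crop the construction element of interest."""
    # A real implementation would run an object detection model here.
    return image  # placeholder: assume the user already framed the element

def preprocess(image):
    """Step 2 (optional): e.g. produce a mask or normalized image."""
    return [px / 255.0 for px in image]  # placeholder normalization

def postprocess(features):
    """Step 3 (required): compute the final Go / No Go result."""
    score = sum(features) / len(features)  # placeholder scoring rule
    return ("Go" if score > 0.5 else "No Go", score)

def run_check(image):
    cropped = detect_element(image)
    features = preprocess(cropped)
    return postprocess(features)

result, score = run_check([200, 180, 220, 40])
print(result)
```

The key design point is that each stage is optional except the last: the postprocessing step always produces the final “Go” or “No Go” verdict.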

    Example implementation

Example implementations

Now, let’s look at some example implementations that were deployed for the wall check and rebar shapes check.

Wall check

  1. An image preprocessing step uses a U-Net model to output a mask image showing the mortar regions. The mask image makes it easier to compute centroids and other features that are used to approximate the compliance of the bond pattern and the mortar joint thickness requirements.

  2. A postprocessing step is performed through a Python script to carry out further image processing using the opencv library. The mask image obtained from step 1 is used to compute features to assess the bond pattern and relative mortar joint thickness, and a “Go” or “No Go” summary is returned to the user.

    Wall check implementation Example outputs

Example outputs of postprocessing step in the notebook
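To make the idea concrete, here is a heavily simplified, hypothetical sketch of the kind of analysis such a postprocessing step could perform on a mortar mask. The toy mask, target thickness, and tolerance are illustrative values, not the actual ISAC-SIMO implementation:

```python
# Simplified sketch of the wall-check postprocessing idea: given a binary
# mortar mask (1 = mortar, 0 = brick), estimate the thickness of each
# horizontal (bed) joint and flag joints outside a tolerance band.
# Mask, target, and tolerance values below are hypothetical.

def bed_joint_rows(mask):
    """Return indices of rows that are mostly mortar (a horizontal joint)."""
    return [i for i, row in enumerate(mask) if sum(row) > 0.8 * len(row)]

def assess_joint_thickness(mask, target=2, tolerance=1):
    rows = bed_joint_rows(mask)
    if not rows:
        return "No Go"
    # Group consecutive mortar rows into joints and measure each thickness.
    joints, current = [], [rows[0]]
    for r in rows[1:]:
        if r == current[-1] + 1:
            current.append(r)
        else:
            joints.append(len(current))
            current = [r]
    joints.append(len(current))
    ok = all(abs(t - target) <= tolerance for t in joints)
    return "Go" if ok else "No Go"

# Toy 6x6 mask: two brick bands separated by a 2-pixel mortar joint.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(assess_joint_thickness(mask))
```

The real check works on U-Net mask output and also assesses the bond pattern, but the shape of the logic — extract geometric features from the mask, compare them with guideline thresholds, return “Go” or “No Go” — is the same.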

Rebar shapes check

  1. An image preprocessing step is performed by a Python script using the opencv library.
  2. Postprocessing step 1 uses a custom IBM Watson Visual Recognition classification model to assess the shape of the rebar.
  3. Postprocessing step 2 uses a custom Visual Recognition object detection model to detect the presence or absence of u-shaped hooks.
  4. Postprocessing step 3 uses a Python script to compute the final “Go” or “No Go” output based on the results from previous steps.

    Rebar implementation
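The final combination step (step 4) might be sketched as follows. The prediction dictionaries are hypothetical stand-ins for the two Visual Recognition model responses, and the confidence threshold and minimum hook count are illustrative choices:

```python
# Sketch of combining two model outputs into one verdict, in the spirit of
# the rebar shapes check above. The response formats and thresholds are
# assumptions for illustration, not the actual Watson API responses.

def combine_rebar_results(shape_prediction, hook_detections, min_hooks=2):
    """Go only if the stirrup shape is compliant AND enough hooks are found."""
    shape_ok = (shape_prediction.get("class") == "go"
                and shape_prediction.get("score", 0) >= 0.7)
    hooks_ok = len(hook_detections) >= min_hooks
    return "Go" if shape_ok and hooks_ok else "No Go"

shape = {"class": "go", "score": 0.91}          # classification result
hooks = [{"label": "hook", "score": 0.88},      # object detection results
         {"label": "hook", "score": 0.82}]
print(combine_rebar_results(shape, hooks))
```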

Customize, potentially with your own new model

You can customize ISAC-SIMO with new machine learning models or Python scripts to build on the catalog of quality checks that help assess key construction elements. You can use the models to detect key construction elements from an image or video or classify images into compliant (Go) or non-compliant (No Go) categories. By contributing new models, you can help make ISAC-SIMO a robust tool with a wide catalog of quality checks that are accessible by homeowners, builders, and local authorities to enable safe construction practices in areas with a lack of technical support.

Prepare the data set

The exact methodology for data set preparation can vary depending on the type of model selected for developing a new check. The following steps show an example of preparing a data set for image classification, object detection, and semantic segmentation.

Preparing data set for classification or object detection models

For an image classification or an object detection model, you can prepare the data set using IBM Cloud Annotations. Use the following steps to prepare the training data, or read more details on the best practices for training data preparation.

  1. Sign up for an IBM Cloud account, and navigate to cloud.annotations.ai.

  2. Click Continue with IBM Cloud, and log in with your IBM Cloud credentials.

    Cloud Annotations page

  3. After you’re logged in, if you don’t have an object storage instance, you are prompted to create one. Click Get started to be directed to IBM Cloud, where you can create a free object storage instance. After your object storage instance has been provisioned, navigate back to cloud.annotations.ai, and refresh the page.

    No object storage instance

  4. The files and annotations are stored in a bucket. To create one, click Start a new project.

    Start a new project

  5. Add images to Go or No Go categories as applicable for training a classification model. In the following example, the images were labeled as Go and No Go to create a classifier to distinguish between rectangular rebar stirrups and non-rectangular or open rebars.

    Go and No Go images

    Note: In this example, the images used in the data set were either synthetically generated using 3D modeling software or sourced from the internet. In this specific check, the model needs to classify the shape of the rebar, not its texture. For that reason, and to bridge the gap between synthetically generated and real images of rebar, the data set was processed using opencv library functions to obtain a clean inverted image showing the shape of the stirrup.

  6. For training an object detection model, after the images have been uploaded, the region of interest can be drawn out and labeled with the appropriate category, as shown in the following image.

    Hook detection

    In this example, the images are being labeled to create an object detection model that detects the u-shaped hook in a rebar stirrup (which is bent at approximately a 135-degree angle).

Preparing the image data set for a segmentation model

For a semantic segmentation model, the data set comprises the raw image and the corresponding ground truth mask image, which can be prepared using photo editing tools such as Photoshop, or free tools such as Glimpse or GIMP. The following figure shows an example of mask images that were created using the Glimpse software to train a U-Net model to extract the mortar regions in a wall image.

Raw and mask images

Mask images as shown in the previous example can be prepared using Glimpse with the following steps:

  1. Open the Glimpse application, and drag an image inside the window, which loads the image for editing.
  2. After it’s added, the image name appears under layers. Hide the image layer by clicking the eye icon. After hiding the layer, right-click, and select New from Visible.

    Hide layer

    New from visible

  3. With the new layer highlighted, click the bucket fill tool in the main menu. Set the foreground color to white, and set the background color to black. Then, set the bucket fill type to BG color fill, and click anywhere on the new image layer to fill the background with black.

    Fill the background

  4. Hide the background layer, and unhide the original image layer.

  5. Use the free select tool to start drawing the contours of the regions of interest (for example, the brick contours in the previous example images). Set the free select tool mode to the second option, Add to the current selection.

    Free select tool

  6. Outline each object (for example, a brick); after outlining one object, you can press Enter to start on the next one. If you want to undo any selection, press Backspace to go back one step or Esc to cancel the outline for the object.

  7. After completing the outlining, unhide the background layer, select the bucket fill tool, and set it to FG color fill. Fill the object contours with white. The following image shows an example raw image of a brick wall and the completed mask image:

    Raw brick wall image

    Completed mask image

  8. After you’re done, you can save the file in the default format and also export it as a .jpg or .png file.
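After exporting the masks, the raw images and their ground-truth masks need to be paired for training. Here is a minimal sketch, assuming a hypothetical folder layout and a "_mask" filename suffix convention (neither is an ISAC-SIMO requirement):

```python
# Pair raw wall images with their ground-truth masks by filename, e.g.
# raw/wall_01.png <-> masks/wall_01_mask.png. The layout and the "_mask"
# suffix are illustrative assumptions.

from pathlib import Path

def pair_images_and_masks(raw_dir, mask_dir, suffix="_mask"):
    """Return (raw, mask) path pairs for every raw image that has a mask."""
    pairs = []
    for raw in sorted(Path(raw_dir).glob("*.png")):
        mask = Path(mask_dir) / f"{raw.stem}{suffix}.png"
        if mask.exists():
            pairs.append((raw, mask))
    return pairs
```

A pairing step like this also surfaces raw images that are still missing masks, which is worth checking before you start a long training run.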

Train a model

Training a classification or an object detection model

Before training a classification or object detection model, you should already have labeled images in the Cloud Annotations online tool, following the steps in the data preparation section. There are many ways to train your model. You could train it from scratch using a framework like TensorFlow or PyTorch, use a drag-and-drop tool like Apple’s Create ML, or use a cloud-managed solution like Watson Machine Learning.

The following steps explain how to train a model using Google Colab, which is a hosted Jupyter Notebook service and doesn’t require any setup to use.

  1. Select the project that contains the labeled images you want to use for training in the Cloud Annotations tool.

    Select the project

  2. Select Train in Colab.

    Select Train in Colab

  3. Copy the provided credentials, and click Open Colab.

    Click Open Colab

  4. Before training a model, first access the training data in the Colab notebook by pasting the object storage bucket credentials copied from the previous step. You can then follow the instructions that are provided in the classification example notebook from the cloud annotations site to download the cloud annotations data in the notebook. Check out an object detection model example from Cloud Annotations.

    Setup

    pip install

    Read the data wrapper function

    The previous steps can remain the same in your notebook.

  5. Implement additional image preprocessing, if needed, using TensorFlow’s ImageDataGenerator.

    Image processing

  6. For building the model, there are several architectures that you can implement for a custom model. First, try the example that is shown in the notebook, then experiment with different deep learning architectures and frameworks. You can use transfer learning with ResNet or MobileNet models that have weights pretrained on the ImageNet data set, or create a new model from scratch and experiment with different architectures and hyperparameters to compare the accuracy of different models. You can also compare different deep learning frameworks to find the most suitable one. Check out this sample notebook on transfer learning using MobileNet and this article on building powerful image classification models with little data.

    Here is an example notebook for creating a convolutional neural network classification model for rebar shapes.

    CNN model example

    Model layers

  7. Train and save the model that you created in the previous step, and download the saved model.

    Train and save the model

  8. If the check you are developing requires more than one model for assessing the relevant features for a check (for instance, in the rebar shapes example, one model was used to classify the shape and another was used to detect the hooks), then you can train additional models before deploying them in the ISAC-SIMO platform.
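Steps 5 through 7 can be sketched together as follows. This is a hedged example, not the exact notebook code: the directory layout, image size, epochs, and hyperparameters are illustrative assumptions.

```python
# Sketch of the training flow: augment data with TensorFlow's
# ImageDataGenerator (step 5), build a MobileNetV2 transfer-learning
# classifier (step 6), then train and save a .h5 file (step 7).
# Directory names and hyperparameters are illustrative.

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)

def make_generators(data_dir="data/rebar_shapes"):
    """Step 5: normalization and light augmentation via ImageDataGenerator."""
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,      # normalize pixel values to [0, 1]
        rotation_range=20,      # light augmentation
        horizontal_flip=True,
        validation_split=0.2,
    )
    train = datagen.flow_from_directory(
        data_dir, target_size=IMG_SIZE, class_mode="binary", subset="training")
    val = datagen.flow_from_directory(
        data_dir, target_size=IMG_SIZE, class_mode="binary", subset="validation")
    return train, val

def build_model(weights="imagenet"):
    """Step 6: frozen MobileNetV2 feature extractor with a Go / No Go head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights=weights)
    base.trainable = False  # freeze the pretrained weights
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # Go vs No Go
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Step 7 usage (not run here):
#   train, val = make_generators("data/rebar_shapes")
#   model = build_model()
#   model.fit(train, validation_data=val, epochs=10)
#   model.save("rebar_shape_classifier.h5")  # download this .h5 for deployment
```

Saving in .h5 format matters later: that is the file you upload when deploying the model in the ISAC-SIMO platform.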

Add a preprocessing step

If the model that you trained needs a preprocessed image instead of the raw image, you can add a Python script to include the preprocessing functions. Alternatively, if you have a segmentation model trained, you can add a Python script using the model to preprocess the image.

The following image shows an example Python script for preprocessing an image using opencv functions.

Example Python script

By adding this step in the implementation pipeline, the raw image is processed before it gets passed through the machine learning models. The following image shows an example of how the processed image would look.

Example processed image

The following image shows another example that uses a trained segmentation model to preprocess an image.

trained segmentation model example

Add a postprocessing step

If the check you are developing requires additional models to assess more features, you can add a Python script to combine the results of the models and output a final result. You can also add a Python script to perform further image analysis to compute the final result from a preprocessed image. You can test the script using a Jupyter Notebook, but before implementing it in the platform, convert it to a .py file and comment out the print statements.

To display the final output in the mobile app based on the model results, add a “run” function at the end of the script, as shown in the following example.

Run function

Note: It is possible to add multiple postprocessing scripts and models to implement a check.

In the run function, return the final result and the score if applicable.
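As an illustration, a run function might be structured as follows. The signature, the placeholder model call, and the return format are assumptions for this sketch; check the ISAC-SIMO documentation for the exact interface the platform expects:

```python
# Hypothetical skeleton of a postprocessing script's run function.
# The signature and return shape are illustrative assumptions.

def run(image_path):
    # In a real script, load the image and call the trained model(s) here.
    prediction = {"class": "go", "score": 0.87}  # placeholder model output
    result = "Go" if prediction["class"] == "go" else "No Go"
    # Return the final result, plus the score if applicable.
    return {"result": result, "score": prediction["score"]}
```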

Deploy the model in ISAC-SIMO

Before deploying your model on ISAC-SIMO, register as an admin user. Your account is then verified; when verification is complete, you can log in.

Register as an admin

  1. Navigate to the ISAC-SIMO dashboard.

  2. Add a new project (optional): If you want to add new checks that are relevant only to a specific context, or you want to deploy a specific set of checks for a project, you can add a new project before adding the checks. Alternatively, you can add the new check to the default project.

    Create project

  3. Add a label for the check, instructions to take a picture, and a sample image of how it should look.

    Create check

  4. To upload the trained offline model in .h5 format, click File Upload, fill in the name, and upload the file. After the file is uploaded, copy the file path to update the model path in your Python script.

    File upload

  5. Select Offline Model/Scripts, and fill in the name, the model type, and whether the model corresponds to the preprocessing or the postprocessing step.

    Note: Select a processor for a segmentation model, object detect for an object detection model, and classifier for a classification model. If the model belongs to the preprocessing step within a check, then choose the preprocess option. Otherwise, choose the postprocess step.

    Offline Model Scripts

  6. Set up the image preprocessing and postprocessing pipeline: If you are implementing multiple models in a check, repeat steps 4 and 5 to add all of the applicable items: the image-processing Python script, segmentation model, offline classification model, or object detection model. Add the postprocessing script to compute the final “Go” or “No Go” output. Check out the Create/Add Object Types page for more detailed instructions.

  7. Test the newly added check end to end or test the individual outputs of the added models and scripts. You can refer to the Test Model page for further instructions.

Summary

In this tutorial, you learned how to run the ISAC-SIMO app, test existing checks developed using visual recognition models and TensorFlow machine learning models to assess the quality of elements in your own building, and customize ISAC-SIMO by training a new model to create a new check.

Learn more about how you can contribute to ISAC-SIMO and further the development of this project.

How will you answer the call to build and contribute to sustainable, open source technology projects that address social and humanitarian issues? Get involved with Call for Code open source projects.