This post takes an in-depth look at a social good project created by students who attended the recent SacHacks 2020 collegiate hackathon, which was co-sponsored by IBM.

Our team, composed of Christina Huang, Kavihesha Kanagalingam, and Thomas Munduchira from UC Davis, as well as Asim Biswal from UC Berkeley, spent a weekend at SacHacks 2020 creating Unfeel the Burn. This deep learning platform leverages IBM Z to classify images of burns and recommend treatments. Unfortunately, thousands of people around the world mistreat their skin burns, which at times results in infections, hypothermia, burn wound oedema, or cellulitis. When a burn wound is handled improperly, these infections can lead to blood poisoning or death. The platform we created over the course of the collegiate hackathon enables anyone to quickly understand the steps required to correctly treat their burn and prevent any further harm. We are delighted to have received First Place for the IBM Z track and Third Place Overall for our efforts. This blog post outlines our 24-hour journey at the event — from hatching the initial idea to executing on our vision to getting the platform working end-to-end with the help of IBM Z.

The four of us met up for the first time at the event itself, and did not have a set plan as to what we wanted to accomplish. We had relatively diverse backgrounds in computer science, electrical engineering, and statistics, and as such, there were numerous possibilities as to where we could take this project. We agreed to pursue an effort centered around healthcare and social good. We were also heavily inspired by the IBM Z technical workshop, which was hosted during the hackathon, and how easy it was to run machine learning workloads on top of the platform.

Figure 1. Our team brainstorming project ideas during the IBM Z technical workshop

A bit more deliberation helped us land on our final vision: a public web application that lets a user upload an image of their burn and returns a diagnosis of the injury's severity (superficial / first degree, partial thickness / second degree, or full thickness / third degree) as well as recommended treatment options.

We faced a few challenges from the outset. Due to the lack of accessible burn wound datasets, we had to educate ourselves on classifying burn wounds in order to manually label and compile a dataset of images sourced from the web. The sandboxed IBM Z systems made available to us at the event had storage limitations and did not have machine learning modules such as TensorFlow installed. The trial system was built for educational purposes rather than our more intensive use case of training a deep learning model for image classification. We overcame these challenges by resizing the images to fit within the 100 MB of local storage and by tapping into deep learning via the scikit-learn module.
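
As a rough illustration, the downsampling step could look something like the sketch below. The target dimensions, directory names, and interpolation choice are placeholders rather than our exact values.

# A minimal sketch of the downsampling step; dimensions and paths are illustrative.
import os
import cv2

INPUT_DIR = 'raw_images'
OUTPUT_DIR = 'resized_images'
TARGET_SIZE = (128, 128)

for name in os.listdir(INPUT_DIR):
    img = cv2.imread(os.path.join(INPUT_DIR, name))
    if img is None:
        continue  # skip files that are not readable images
    small = cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(OUTPUT_DIR, name), small)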

Things were streamlined after this point. We were able to run Python code via a Jupyter Notebook, executing on IBM Z to train a neural network with a fully connected architecture to classify the burn images. This setup enabled us to achieve a test accuracy of 45%.

Fully connected neural network:

  • Train accuracy: 72%
  • Test accuracy: 45%

While we could not run a convolutional neural network via scikit-learn, testing such an architecture with Keras outside of the sandboxed IBM Z environment allowed us to achieve a test accuracy of 68% (a sketch of this kind of architecture follows the results below).

Convolutional neural network:

  • Train accuracy: 86%
  • Test accuracy: 68%
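
For reference, a small Keras CNN along these lines might look like the following sketch. The layer sizes, input shape, and training call are illustrative and may differ from our exact configuration.

# A minimal sketch of a small CNN for three burn-severity classes.
# Layer sizes and input shape are illustrative, not our exact settings.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(3, activation='softmax'),  # superficial / partial / full thickness
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# X_train, y_train, X_test, y_test: preprocessed image arrays and integer labels.
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))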

These accuracy scores are greatly aided by the investments we made in image processing — we used techniques such as content-aware resizing as well as foreground extraction to filter the image data and ease the classification task.

The two photos in Figure 2 illustrate the benefit of content-aware resizing. While the original image is reduced to a more workable size, the important content is retained: the castle is unchanged, and the person on the left still appears in the resized image.

Figure 2. Content-aware resizing

Figure 3 illustrates how the foreground extraction process removes the noise that’s present in the background of an image.

Figure 3. Foreground extraction process

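One way to perform this kind of foreground extraction with OpenCV is the GrabCut algorithm; the minimal sketch below shows the idea, with the initial rectangle and iteration count as illustrative values rather than our exact pipeline.

# A minimal foreground-extraction sketch using OpenCV's GrabCut.
import cv2
import numpy as np

img = cv2.imread('burn.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Assume the subject sits roughly inside this rectangle (x, y, width, height).
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels; zero out the background.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
cv2.imwrite('burn_foreground.jpg', img * fg[:, :, np.newaxis])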

We then exported the trained model from the IBM Z environment and plugged it into a Flask backend running on IBM Cloud Foundry. The client sends the image of a burn to this backend, which queries the trained model for a classification result. The diagnosis and treatment steps are then clearly outlined for the user.

The code listing below shows how we train the model on IBM Z and then move it over to our backend. You can see what goes on behind the scenes in our source code repository.

# Training the model on IBM Z.
from sklearn.neural_network import MLPClassifier
classifier = MLPClassifier(...)
classifier.fit(X, y)  # X: flattened image arrays, y: burn-severity labels

# Exporting the trained model to a pickle file.
import pickle
with open('model.pkl', 'wb') as model_file:
    pickle.dump(classifier, model_file)

# Importing the model and using it for classification on the backend.
with open('model.pkl', 'rb') as model_file:
    classifier = pickle.load(model_file)
prediction = classifier.predict(img.reshape(1, -1))  # one flattened sample
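
To give a feel for the backend side, a minimal Flask endpoint wrapping the unpickled model might look like the sketch below. The route, field name, and preprocessing details are our illustration rather than the exact production code; the key point is that the image must be preprocessed the same way as at training time.

# A minimal sketch of a Flask classification endpoint; names are illustrative.
import pickle

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
with open('model.pkl', 'rb') as model_file:
    classifier = pickle.load(model_file)

@app.route('/classify', methods=['POST'])
def classify():
    # Decode the uploaded image and apply the same resizing as at training time.
    data = np.frombuffer(request.files['image'].read(), np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    img = cv2.resize(img, (128, 128), interpolation=cv2.INTER_AREA)
    prediction = classifier.predict(img.reshape(1, -1))[0]  # one flattened sample
    return jsonify({'classification': int(prediction)})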

Our full tech stack is illustrated in Figure 4. We used OpenCV and scikit-image to preprocess image data before model training and querying. After training our scikit-learn FCNN model within the IBM Z environment, we serialized the model and exported it for use by our backend. We configured our Flask backend to run uploaded images through the model and return predictions to the client. Our UI, formatted with Bootstrap, is responsive on mobile and desktop to provide an optimal experience for users.

Figure 4. The tech stack

Since we spun the project up over just 24 hours, there are still several steps we can take to improve the accuracy and usability of the prototype we have now. First and foremost, we need to vastly expand the dataset we are working with and ensure there is a diversity of data that accounts for burn location, burn size, skin tone, and beyond. Image augmentation would also be a good way of supplementing the training data and improving model performance by applying transformations to the existing dataset (sketched below). Escaping the air-gapped environment and moving to a full-fledged IBM Z system would allow us to integrate IBM Z into the request pipeline and query the model directly.
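
On the augmentation point, a rough sketch of what such transformations could look like follows; the particular transformations (horizontal and vertical flips) are just examples of the general idea.

# A rough augmentation sketch: supplement the dataset with flipped copies.
# The specific transformations here are examples, not a fixed recipe.
import numpy as np

def augment(images, labels):
    """images: array of shape (N, H, W, C); labels: array of shape (N,)."""
    flipped_h = images[:, :, ::-1, :]  # mirror each image left-to-right
    flipped_v = images[:, ::-1, :, :]  # mirror each image top-to-bottom
    augmented = np.concatenate([images, flipped_h, flipped_v])
    return augmented, np.concatenate([labels, labels, labels])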

Figure 5. Accepting our first place prizes from the IBM Z team during the closing ceremony!

We are deeply grateful to the IBM Z team for their presence at SacHacks 2020, and for enabling developers like us to leverage their platform to create solutions that can positively impact society. Ramping up on IBM Z and running a machine learning workload on top of it was a streamlined experience for us, and we look forward to using the platform for our future computational needs.