Introduction

What if a doctor could determine, much faster and more precisely, what type of breast tumor they are analysing?

Would this help the patient who is waiting for the histopathologist's confirmation about their future?
Definitely yes. The time a patient spends waiting for this confirmation can be a nightmare, both for the patient and for their family.

This question has been on my mind for some time, and now, with the support and cooperation of the VRI Lab, Department of Informatics at the Federal University of Paraná (UFPR) in Brazil, I have been given access to their labelled dataset of breast tumors. So I had the “AI ammunition” to continue my quest.

Experiment

The supplied labelled image samples are generated from breast tissue biopsy slides stained with hematoxylin and eosin (HE), prepared for histological study and labelled by pathologists of the P&D Lab; the breast tumor specimens (sections of ~3 µm thickness) were assessed by immunohistochemistry (IHC).


The images were acquired with an Olympus BX-50 system microscope with a 3.3× relay lens coupled to a Samsung SCC-131AN digital color camera. Other characteristics of the provided images:

  • magnification 40×, 100×, 200×, and 400× (objective lens 4×, 10×, 20×, and 40× with ocular lens 10×)
  • camera pixel size 6.5 µm
  • raw images without normalization or color standardization
  • resulting images saved in 3-channel RGB, 8-bit depth in each channel, PNG format

PNG images work much better for PowerAI Vision than JPEG, because PNG is lossless: there is no loss of quality when an image is opened and saved again during data augmentation. PNG also preserves detailed, high-contrast images well, which reduces artifacts the algorithm would otherwise have to cope with.
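As a rough illustration of this preparation step, here is a minimal sketch using Pillow that converts camera captures to lossless 8-bit, 3-channel RGB PNG files. The folder names are illustrative assumptions, not part of the original workflow; PowerAI Vision simply expects the resulting PNG files.

```python
# Minimal sketch (assumes Pillow is installed; "slides/" and "slides_png/" are
# hypothetical folders, not from the original setup).
from pathlib import Path
from PIL import Image

src = Path("slides")       # raw camera output
dst = Path("slides_png")
dst.mkdir(exist_ok=True)

for img_path in src.glob("*.*"):
    img = Image.open(img_path).convert("RGB")   # force 3-channel, 8-bit RGB
    # PNG is lossless, so repeated open/save cycles do not degrade quality
    img.save(dst / (img_path.stem + ".png"))
```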

For my use case I have chosen to classify the images as follows:

B = Benign

  • Adenosis
  • Fibroadenoma
  • Tubular Adenoma
  • Phyllodes Tumor

M = Malignant

  • Ductal Carcinoma
  • Lobular Carcinoma
  • Mucinous Carcinoma (Colloid)
  • Papillary Carcinoma

For classification, we need a broader view of the sample tissue, so the 40× magnification makes more sense. This leaves us the option of using object detection on the 400× magnification images to indicate the relevant cells.

I uploaded all 1,628 images (40× magnification, 500-700 KB per image) into PowerAI Vision under the eight categories shown in the snapshot below, taken from the dataset created in PowerAI Vision for this case.

I held back two images from each category, not uploading them to the PowerAI Vision dataset, so they could be used later to validate the model (inference); a sketch of this hold-out step follows.
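Here is a minimal sketch of that hold-out step in Python. The folder layout and names are assumptions for illustration only; in practice this was done by hand before uploading to PowerAI Vision.

```python
# Minimal sketch: set aside two images per category for later inference checks.
# "breakhis_40x" and "holdout_for_inference" are hypothetical folder names.
import random
import shutil
from pathlib import Path

random.seed(42)
dataset = Path("breakhis_40x")            # one sub-folder per tumor category
holdout = Path("holdout_for_inference")

for category_dir in dataset.iterdir():
    if not category_dir.is_dir():
        continue
    images = sorted(category_dir.glob("*.png"))
    for img in random.sample(images, 2):  # keep 2 images per category out of training
        target = holdout / category_dir.name
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(img), target / img.name)
```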

The first training of the neural network gave me an accuracy of 88%, not bad for the first run over a 1,628-image dataset (split into 1,312 images for training and 316 for testing).

There is an interesting, almost linear, relationship between the amount of data required and the size of the model: the model should be large enough to capture the relations in your data, e.g. textures and shapes in images. Early layers of the model capture low-level relations between the different parts of the input (like edges and simple patterns). Later layers capture the higher-level information that helps make the final determination, usually information that discriminates between the desired outputs. Therefore, if the complexity of the problem is high, as in my image classification case, the number of parameters and the amount of data required are also very large. So the natural question is: how do I get more data if I don’t have “more data”?
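To make the intuition about early and later layers concrete, here is a minimal sketch of a small convolutional classifier in PyTorch. The layer sizes are illustrative only; the actual network used here (GoogLeNet-CAM) is built and trained inside PowerAI Vision.

```python
# Illustrative sketch of the "early layers vs. later layers" intuition.
import torch.nn as nn

model = nn.Sequential(
    # early layers: low-level features such as edges and simple textures
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # later layers: higher-level structure that discriminates between classes
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 8),   # 8 tumor classes (4 benign + 4 malignant)
)
```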

To generate additional data, we just need to make minor alterations to the images in the breast cancer dataset. Small, label-preserving changes such as flips, translations, or rotations are enough for the neural network to treat the results as distinct images.
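For readers who want to see what such augmentations look like outside the tool, here is a minimal sketch using Pillow. The paths and category name are illustrative assumptions; PowerAI Vision performs this augmentation internally.

```python
# Minimal sketch of label-preserving augmentations: horizontal flip,
# vertical flip, and a 90-degree rotation of each image in one category.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("breakhis_40x/ductal_carcinoma")   # hypothetical category folder
out = Path("augmented/ductal_carcinoma")
out.mkdir(parents=True, exist_ok=True)

for img_path in src.glob("*.png"):
    img = Image.open(img_path)
    ImageOps.mirror(img).save(out / f"{img_path.stem}_hflip.png")   # horizontal flip
    ImageOps.flip(img).save(out / f"{img_path.stem}_vflip.png")     # vertical flip
    img.rotate(90, expand=True).save(out / f"{img_path.stem}_rot90.png")
```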

PowerAI Vision provides built-in data augmentation, making it practical to generate this additional data and ultimately offer insights to the clinicians facing this problem.

For the first attempt, I was greedy and used the vertical flip, horizontal flip, and random Gaussian options without thinking too much. This produced an additional 12,310 images across all categories but an estimated accuracy of only 32%; the model did not converge.

After a little research, I figured out that random Gaussian data augmentation is not well suited to my use case. Therefore, for my second attempt at augmenting the data, I used only vertical and horizontal flipping, generating an additional 3,154 images, approximately 2.1 GB in total.

This step led to a model with 92% accuracy in detecting the type of breast tumor in these 40× magnification images. This compares very favourably with the accuracy of a skilled human practitioner (around 95%) and is higher than the typical level (87%) for practitioners who are ‘very active’ (20-30 cases every day).

This shows that using the GoogLeNet-CAM model within IBM PowerAI Vision for this use case was both efficient and successful.

I believe using Artificial Intelligence to Augment Human Intelligence in oncology has huge potential to help clinicians save many lives.


The subject matter discussed in this blog describes one of many avenues that IBM’s research professionals explore and is not intended to imply and/or expand on any current or future uses of any IBM or third-party products that are announced or commercially available, or otherwise imply that IBM intends to continue this research or make any products or services related to the research commercially available any time in the future.

3 comments on "Breast Cancer Classification with IBM PowerAI Vision"

  1. Sunny Panjabi May 25, 2018

    Excellent use case. May I know the training time the model used? Thanks

    • SrinivasChitiveli June 07, 2018

      <30 minutes for training time.
      The entire process including data labeling, augmenting, training and deployment < 60 mins.

  2. We have developed an app on top of PowerAI. We developed the models for MRI as well.
    Can we contact and discuss how to enlarge our practice and possibility to include your model in application and present it on the conference. If you do not mind.
