AI Medical Imaging Analysis

A library for Breast Density Classification on mammography (MG) scans

Project Details

  • Type: Project
  • What We Did: Customized AI Library
  • Tools Used: PyTorch, C++, OpenCV
  • Started on: March 2021
  • Status: Ongoing

Customer Problem

Breast density classification according to the BI-RADS scale, starting from unprocessed mammographic images.

Our Solution

The dataset consists of a list of 16-bit grayscale input images (values 0 - 65535), each associated with a class from the set {A, B, C, D}.
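
As an illustrative sketch only (the project's actual preprocessing pipeline is not shown here), a 16-bit grayscale mammogram can be read with OpenCV and rescaled to [0, 1]; the function name and the scaling choice are assumptions:

```python
import cv2
import numpy as np

def load_mammogram(path: str) -> np.ndarray:
    # IMREAD_UNCHANGED preserves the original 16-bit depth (values 0-65535)
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(path)
    # Rescale the 16-bit range to floating-point [0, 1]
    return img.astype(np.float32) / 65535.0
```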

The images were acquired in different stages; about 10% of the images were set aside as a test set, drawn in proportion from each acquisition stage.

These test images were extracted sequentially rather than randomly, in order to avoid introducing bias into the test phase of the model.
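
A minimal sketch of this stage-proportional, sequential split; the data structure (`images_by_stage`) and the function name are hypothetical, while the 10% ratio and the sequential extraction follow the description above:

```python
def split_by_stage(images_by_stage: dict, test_fraction: float = 0.10):
    """Split each acquisition stage into train/test, taking the test samples
    sequentially (here: the tail of each stage) rather than at random."""
    train, test = [], []
    for stage, samples in images_by_stage.items():
        n_test = round(len(samples) * test_fraction)
        if n_test:
            train.extend(samples[:-n_test])
            test.extend(samples[-n_test:])
        else:
            train.extend(samples)
    return train, test
```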

The density classes were assigned manually by experienced radiologists.

Each image provided was analyzed in order to exclude from the training and test sets images that include:

– Clips

– Fiducial markers

– Removed masses

– Prosthetics

– Operated breasts

– Magnification views (enlargements)

– Views other than CC and MLO

– Inconsistent classification (e.g. AA)

The model used for this task is a residual classification network (ResNet), which in its various evolutions remains state of the art for classification tasks on relatively simple datasets. Since the production model must perform well both in accuracy and in inference time, after various experiments the ResNet variant chosen for this task was ResNet-18, a convolutional network composed of 18 cascaded layers.
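
As a minimal sketch (not the project's exact training code), a ResNet-18 can be instantiated with torchvision; replacing the final fully connected layer with a 4-way output for the classes A, B, C, D is our assumption about how the classifier head is adapted:

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 (newer torchvision API; older versions
# use `pretrained=True` instead of the `weights` argument).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
# Assumption: a 4-way head for the BI-RADS density classes A, B, C, D.
model.fc = nn.Linear(model.fc.in_features, 4)
```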

The weights of each convolution layer of the model are initialized from a model pre-trained on the ImageNet dataset. However, with a standard learning rate this showed no significant difference compared with other weight initializations.

Since the model was pre-trained on RGB images, the weights of its first convolution layer were averaged over their second dimension (the input-channel dimension), which effectively allows single-channel images to be fed to the model.
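
Continuing the previous sketch, one way to implement this channel-averaging adaptation in PyTorch (the variable names are ours, not the project's) is:

```python
import torch
import torch.nn as nn

old_conv = model.conv1  # Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
new_conv = nn.Conv2d(
    in_channels=1,
    out_channels=old_conv.out_channels,
    kernel_size=old_conv.kernel_size,
    stride=old_conv.stride,
    padding=old_conv.padding,
    bias=old_conv.bias is not None,
)
with torch.no_grad():
    # Weight shape is (out_channels, in_channels, kH, kW); averaging over
    # dim 1 collapses the three RGB kernels into a single-channel kernel.
    new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
model.conv1 = new_conv
```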

Top-2 accuracy, i.e. the percentage of samples for which the correct class is among the two classes assigned the highest probability: Validation: 98.7%, Test: 97.99%.
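
For reference, this metric can be computed as in the following sketch (the function name is ours):

```python
import torch

def top2_accuracy(logits: torch.Tensor, targets: torch.Tensor) -> float:
    # logits: (N, 4) class scores; targets: (N,) ground-truth class indices
    top2 = logits.topk(k=2, dim=1).indices        # (N, 2) best two classes
    hits = (top2 == targets.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```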
