Deep Learning May Play a Role in Assessing Breast Texture

Research explores use of Deep Learning in assessing breast cancer risk

Can deep learning, a computer network modeled on the neural structure of the brain and its visual cortex and trained to analyze and recognize nonmedical images, assess breast texture, and therefore breast cancer risk, more accurately than standard radiographic texture analysis?

In a study presented during the Hot Topics in Breast Imaging series at RSNA 2016, researchers determined that convolutional neural networks can analyze full-field digital mammographic (FFDM) images and extract features that are missed both by human eyes and by other types of computer analysis.

“I think that in the future, both texture analysis and deep learning will be applied to mammograms on a routine basis,” said Maryellen Giger, PhD, A.N. Pritzker Professor of Radiology at the University of Chicago.

Breast cancer is the second leading cause of cancer death among women in North America. Mammography is currently an effective tool for early breast cancer detection and for reducing mortality. Breast density and mammographic parenchymal patterns can both be useful in assessing the risk of developing breast cancer. Better risk assessment allows physicians to manage patients more effectively and can potentially lead to personalized screening regimens and precision medicine.

Previous work by the Giger Lab at the University of Chicago suggests that parenchymal texture predicts cancer risk more accurately than breast density percentage. A 2014 study published by Dr. Giger and Hui Li, PhD, and colleagues in the Journal of Medical Imaging used radiographic texture analysis to compare a low-risk population with two high-risk populations (women with BRCA1 or BRCA2 gene mutations and women with unilateral breast cancer). The high-risk groups had coarser and lower-contrast parenchymal patterns than the control group, even though breast density percentage did not differ significantly between the high-risk and low-risk groups.

The retrospective study compared radiographic texture analysis (RTA) with a convolutional neural network, "AlexNet," that had been pre-trained on a library of 1.28 million nonmedical images from ImageNet, a large database intended to provide raw material for training visual object-recognition software.
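
A minimal sketch of this kind of transfer-learning feature extraction, assuming torchvision's pre-trained AlexNet; the exact weights, layer, and preprocessing the researchers used are not specified here, and the ROI filename is hypothetical:

```python
# Sketch: extracting features from a mammographic ROI with a pre-trained
# AlexNet (transfer learning). Assumes torchvision's AlexNet, which may
# differ from the exact network the researchers used.
import torch
from torchvision import models, transforms
from PIL import Image

# Load AlexNet with ImageNet weights (~1.28 million training images).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

# AlexNet expects 3-channel 224x224 inputs, so the grayscale ROI is
# replicated across channels, resized, and normalized with ImageNet stats.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

roi = Image.open("roi_256x256.png")  # hypothetical ROI image file
x = preprocess(roi).unsqueeze(0)     # add a batch dimension

# Use the convolutional stack as a fixed feature extractor: no weight
# updates, just a forward pass through the pre-trained layers.
with torch.no_grad():
    feats = alexnet.avgpool(alexnet.features(x))
    features = torch.flatten(feats, 1)  # 1 x 9216 feature vector
```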

The University of Chicago study included 456 clinical FFDM cases from two high-risk groups, BRCA1/2 gene-mutation carriers (53 cases) and unilateral cancer patients (75 cases), and a low-risk group (328 cases). Regions of interest of 256 x 256 pixels were selected from the central breast region behind the nipple in the craniocaudal projection, a location that usually includes the densest part of the breast.
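
The study's exact ROI-placement procedure is not detailed here, but cropping such a region is straightforward once the nipple position has been located; the helper below and its parameters are purely illustrative:

```python
# Sketch: cropping a 256 x 256 ROI from the central breast region behind
# the nipple in a craniocaudal (CC) view. Nipple coordinates are assumed
# inputs; this is not the researchers' actual procedure.
import numpy as np

def extract_roi(mammogram: np.ndarray, nipple_row: int, nipple_col: int,
                size: int = 256, inward_cols: int = 0) -> np.ndarray:
    """Return a size x size ROI centered vertically on the nipple
    (indices assumed to lie within the image bounds)."""
    r0 = nipple_row - size // 2       # center the ROI on the nipple row
    c0 = nipple_col + inward_cols     # step toward the chest wall; the
                                      # sign depends on breast laterality
    return mammogram[r0:r0 + size, c0:c0 + size]
```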

The study compared imaging features extracted automatically by the pre-trained convolutional neural network with transfer learning against features obtained from radiographic texture analysis.
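
The specific RTA feature set is not enumerated here, but gray-level co-occurrence measures such as contrast and homogeneity are typical of this family; a small illustrative sketch using scikit-image:

```python
# Sketch: one common form of radiographic texture analysis, gray-level
# co-occurrence matrix (GLCM) features, shown as an illustration; the
# study's actual RTA features may differ.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def rta_features(roi: np.ndarray) -> np.ndarray:
    """Compute a small set of GLCM texture features.
    `roi` must be an unsigned 8-bit image when levels=256."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    # Average each property over the sampled distances and angles.
    return np.array([graycoprops(glcm, p).mean() for p in props])
```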

The convolutional neural network was pre-trained on the ImageNet database of 1.28 million high-resolution images in about a thousand categories, including animals and modes of transportation, none of which are standard medical images. The area under the receiver operating characteristic (ROC) curve served as the figure of merit in the task of distinguishing between high-risk and low-risk subjects.
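
Computing that figure of merit is straightforward with scikit-learn; the labels and scores below are hypothetical placeholders:

```python
# Sketch: area under the ROC curve for the high-risk vs. low-risk task.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 0, 0, 1, 0]               # 1 = high-risk, 0 = low-risk
y_score = [0.9, 0.7, 0.3, 0.2, 0.6, 0.4]  # classifier output scores

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # 1.0 here; 0.5 would indicate chance performance
```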

The group’s analysis showed that the neural network performed comparably to radiographic texture analysis in distinguishing between low-risk and high-risk individuals. When the two sets of features were used together, the improvement in distinguishing the two risk groups was statistically significant.
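
One common way to combine two such feature sets is to concatenate them and train a single classifier; the study's actual fusion method is not described here, so the sketch below uses logistic regression and synthetic placeholder data purely to show the mechanics:

```python
# Sketch: fusing CNN-derived features with RTA features by concatenation.
# All data below are random placeholders, so the printed AUC will be ~0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 456                                  # total cases, matching the study
cnn_feats = rng.normal(size=(n, 256))    # placeholder CNN features
rta_feats = rng.normal(size=(n, 4))      # placeholder RTA features
y = rng.integers(0, 2, size=n)           # placeholder risk labels

# Concatenate the feature sets and evaluate with cross-validated AUC.
fused = np.hstack([cnn_feats, rta_feats])
clf = LogisticRegression(max_iter=1000)
aucs = cross_val_score(clf, fused, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f}")
```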

“Deep learning has potential to help clinicians in assessing mammographic parenchymal patterns for breast cancer risk assessment,” the researchers concluded.

Dr. Giger plans to continue research on neural networks.
