A multi-stage fusion framework to classify breast lesions using deep learning and radiomics features computed from four-view mammograms

© 2023 American Association of Physicists in Medicine.

BACKGROUND: Developing computer-aided diagnosis (CAD) schemes of mammograms to classify breast lesions as malignant or benign has attracted substantial research attention over the last several decades. However, unlike radiologists, who make diagnostic decisions by fusing image features extracted from multi-view mammograms, most CAD schemes are single-view-based, which limits their performance and clinical utility.

PURPOSE: This study aims to develop and test a novel CAD framework that optimally fuses information extracted from ipsilateral views of bilateral mammograms using both deep transfer learning (DTL) and radiomics feature extraction methods.

METHODS: An image dataset containing 353 benign and 611 malignant cases is assembled. Each case contains four images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breast. First, we extract four matching regions of interest (ROIs): two surrounding the centers of the suspicious lesion regions seen in the CC and MLO views, and two matching ROIs at the corresponding locations in the contralateral breast. Next, handcrafted radiomics (HCR) features and automated features generated by a VGG16 model are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying the bilateral and ipsilateral asymmetry of the four ROIs to yield four new feature vectors, we test four fusion methods to build three support vector machine (SVM) classifiers based on an optimal fusion of the asymmetrical image features extracted from the four view images.
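The sketch below illustrates one way such a pipeline could be assembled, assuming a pretrained Keras VGG16 as the deep-transfer-learning feature extractor and scikit-learn for dimensionality reduction and the SVM. The helper names (e.g., extract_hcr_features, asymmetry_features, fuse_ipsilateral) are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch of the per-ROI feature extraction and fusion idea described
# above, under the assumptions stated in the lead-in paragraph.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pretrained VGG16 used as a fixed deep-transfer-learning feature extractor.
vgg16 = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(rois):
    """rois: (n, 224, 224, 3) array of lesion or contralateral ROIs."""
    return vgg16.predict(preprocess_input(rois.astype("float32")), verbose=0)

def extract_hcr_features(rois):
    """Hypothetical placeholder for the handcrafted radiomics features
    (e.g., shape, density, and texture descriptors computed per ROI)."""
    raise NotImplementedError

def asymmetry_features(lesion_vec, contralateral_vec):
    """One simple way to quantify bilateral asymmetry between matching ROIs:
    the absolute feature-wise difference of their feature vectors."""
    return np.abs(lesion_vec - contralateral_vec)

def fuse_ipsilateral(cc_vec, mlo_vec):
    """Fuse the CC- and MLO-view asymmetry vectors by concatenation,
    one of several fusion strategies that could be compared."""
    return np.concatenate([cc_vec, mlo_vec], axis=1)

# Dimensionality reduction followed by an SVM classifier on the fused features.
svm_classifier = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),      # keep components explaining 95% of variance
    SVC(kernel="rbf", probability=True),
)
```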

RESULTS: Using a 10-fold cross-validation method, the results show that an SVM classifier trained on an optimal fusion of the four view images yields the highest classification performance (AUC = 0.876 ± 0.031), significantly outperforming the SVM classifiers trained on a single projection view (AUC = 0.817 ± 0.026 and 0.792 ± 0.026 for the CC and MLO views of the bilateral mammograms, respectively; p < 0.001).
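A hedged sketch of how such a 10-fold cross-validated AUC comparison could be set up with scikit-learn is shown below. The random feature matrices are synthetic stand-ins for the fused four-view and single-view feature vectors, so the printed numbers will not reproduce the reported AUC values.

```python
# Synthetic 10-fold cross-validated AUC comparison, as described in the lead-in.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = np.array([0] * 353 + [1] * 611)                  # 353 benign, 611 malignant cases
feature_sets = {
    "fused four-view": rng.normal(size=(964, 128)),  # placeholder fused features
    "CC view only": rng.normal(size=(964, 64)),      # placeholder CC-view features
    "MLO view only": rng.normal(size=(964, 64)),     # placeholder MLO-view features
}

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, X in feature_sets.items():
    aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```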

CONCLUSIONS: This study demonstrates that shifting from single-view to four-view CAD and including both DTL and radiomics features significantly increases CAD performance in distinguishing malignant from benign breast lesions.

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

Medical Physics - 50(2023), issue 12, December 27, pages 7670-7683

Language:

English

Contributors:

Jones, Meredith A [Author]
Sadeghipour, Negar [Author]
Chen, Xuxin [Author]
Islam, Warid [Author]
Zheng, Bin [Author]

Links:

Full text

Topics:

Breast cancer
Breast lesion classification
Computer-aided diagnosis (CAD)
Deep learning
Image feature fusion
Journal Article
Multi-view CAD scheme
Multi-view image feature analysis
Radiomics features

Notes:

Date Completed: December 6, 2023

Date Revised: February 18, 2024

published: Print-Electronic

Citation Status MEDLINE

DOI:

10.1002/mp.16419

PPN (catalog ID):

NLM355869640