TGMIL: A hybrid multi-instance learning model based on the Transformer and the Graph Attention Network for whole-slide images classification of renal cell carcinoma

Copyright © 2023. Published by Elsevier B.V.

BACKGROUND AND OBJECTIVES: The pathological diagnosis of renal cell carcinoma is crucial for treatment. Currently, whole-slide image classification of renal cell carcinoma commonly relies on multi-instance learning, which is mainly based on the assumption of independent and identical distribution. However, this is inconsistent with the need to consider correlations between different instances during diagnosis. Furthermore, the high resource consumption of pathology images remains an urgent problem. Therefore, we propose a new multi-instance learning method to address these issues.

METHODS: In this study, we propose a hybrid multi-instance learning model based on the Transformer and the Graph Attention Network, called TGMIL, to achieve whole-slide image classification of renal cell carcinoma without pixel-level annotation or region-of-interest extraction. Our approach consists of three steps. First, we designed a feature pyramid built from multiple low magnifications of the whole-slide image, named MMFP. It allows the model to incorporate richer information while reducing memory consumption and training time compared with using the highest magnification alone. Second, TGMIL combines the capabilities of the Transformer and the Graph Attention Network, addressing the loss of contextual and spatial information among instances. Within the Graph Attention Network stream, a simple and efficient approach employing max pooling and mean pooling yields the graph adjacency matrix without extra memory consumption. Finally, the outputs of the two streams of TGMIL are aggregated to classify renal cell carcinoma.
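The abstract does not specify exactly how max pooling and mean pooling produce the adjacency matrix. The sketch below is one hypothetical reading, not the authors' implementation: instance (patch) embeddings are compared by cosine similarity, and an edge is kept when the similarity exceeds a threshold derived from the mean- and max-pooled similarity statistics, so no learned or stored graph structure is needed.

```python
import numpy as np

def build_adjacency(feats: np.ndarray) -> np.ndarray:
    """Hypothetical adjacency construction from instance embeddings.

    feats: (n_instances, dim) array of patch feature vectors.
    Returns a binary (n_instances, n_instances) adjacency matrix.
    """
    # Cosine similarity between all instance pairs.
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T
    # Edge rule (assumed): keep pairs above the midpoint of the
    # mean-pooled and max-pooled similarity statistics.
    thr = 0.5 * (sim.mean() + sim.max())
    adj = (sim >= thr).astype(np.float32)
    # Self-loops, as commonly required by graph attention layers.
    np.fill_diagonal(adj, 1.0)
    return adj
```

Because the threshold is a scalar computed on the fly from the similarity matrix, this construction adds no persistent memory beyond the matrix itself, which is consistent with the abstract's claim of no extra memory consumption.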

RESULTS: On the validation set of TCGA-RCC, a public dataset for renal cell carcinoma, the area under the receiver operating characteristic (ROC) curve (AUC) and the accuracy of TGMIL were 0.98 ± 0.0015 and 0.9191 ± 0.0062, respectively. On the private validation set of renal cell carcinoma pathology images, it achieved an AUC of 0.9386 ± 0.0162 and an accuracy of 0.9197 ± 0.0124. Furthermore, on the public breast cancer whole-slide image test dataset CAMELYON16, our model showed good classification performance with an accuracy of 0.8792.

CONCLUSIONS: TGMIL models the diagnostic process of pathologists and shows good classification performance on multiple datasets. Meanwhile, the MMFP module efficiently reduces resource requirements, offering a novel angle for exploring computational pathology images.

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

Computer Methods and Programs in Biomedicine - 242 (2023), 01 Dec., page 107789

Language:

English

Contributors:

Sun, Xinhuan [Author]
Li, Wuchao [Author]
Fu, Bangkang [Author]
Peng, Yunsong [Author]
He, Junjie [Author]
Wang, Lihui [Author]
Yang, Tongyin [Author]
Meng, Xue [Author]
Li, Jin [Author]
Wang, Jinjing [Author]
Huang, Ping [Author]
Wang, Rongpin [Author]

Links:

Full text

Subjects:

Graph attention network
Journal Article
Multi-instance learning
Renal cell carcinoma
Transformer
Whole-slide image

Notes:

Date Completed 14.11.2023

Date Revised 14.11.2023

published: Print-Electronic

Citation Status MEDLINE

doi:

10.1016/j.cmpb.2023.107789

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM362191484