OIF-Net : An Optical Flow Registration-Based PET/MR Cross-Modal Interactive Fusion Network for Low-Count Brain PET Image Denoising
The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Thus, improving the quality of low-count images by using image postprocessing algorithms to achieve better clinical diagnoses has attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfactory results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, namely, OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating the two features extracted from the two modalities as an intermediate fusion method, the proposed cross-modal feature fusion module (CM-FFM) adopts cross-attention at multiple feature levels and greatly improves the fusion of the two modalities' features. Extensive experimental assessments conducted on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods.
Media type: E-article
Year of publication: 2024
Published: 2024
Contained in: Zur Gesamtaufnahme - volume:43
Contained in: IEEE transactions on medical imaging - 43(2024), 4, dated 23 Apr., pages 1554-1567
Language: English
Contributors: Fu, Minghan [author]
Notes: Date Completed 04.04.2024; Date Revised 04.04.2024; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1109/TMI.2023.3342809
PPN (catalog ID): NLM365867993
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | NLM365867993 | ||
003 | DE-627 | ||
005 | 20240404234458.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231226s2024 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/TMI.2023.3342809 |2 doi | |
028 | 5 | 2 | |a pubmed24n1364.xml |
035 | |a (DE-627)NLM365867993 | ||
035 | |a (NLM)38096101 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Fu, Minghan |e verfasserin |4 aut | |
245 | 1 | 0 | |a OIF-Net |b An Optical Flow Registration-Based PET/MR Cross-Modal Interactive Fusion Network for Low-Count Brain PET Image Denoising |
264 | 1 | |c 2024 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 04.04.2024 | ||
500 | |a Date Revised 04.04.2024 | ||
500 | |a published: Print-Electronic | ||
500 | |a Citation Status MEDLINE | ||
520 | |a The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Thus, improving the quality of low-count images by using image postprocessing algorithms to achieve better clinical diagnoses has attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfactory results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, namely, OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating the two features extracted from the two modalities as an intermediate fusion method, the proposed cross-modal feature fusion module (CM-FFM) adopts cross-attention at multiple feature levels and greatly improves the fusion of the two modalities' features. Extensive experimental assessments conducted on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods | ||
650 | 4 | |a Journal Article | |
700 | 1 | |a Zhang, Na |e verfasserin |4 aut | |
700 | 1 | |a Huang, Zhenxing |e verfasserin |4 aut | |
700 | 1 | |a Zhou, Chao |e verfasserin |4 aut | |
700 | 1 | |a Zhang, Xu |e verfasserin |4 aut | |
700 | 1 | |a Yuan, Jianmin |e verfasserin |4 aut | |
700 | 1 | |a He, Qiang |e verfasserin |4 aut | |
700 | 1 | |a Yang, Yongfeng |e verfasserin |4 aut | |
700 | 1 | |a Zheng, Hairong |e verfasserin |4 aut | |
700 | 1 | |a Liang, Dong |e verfasserin |4 aut | |
700 | 1 | |a Wu, Fang-Xiang |e verfasserin |4 aut | |
700 | 1 | |a Fan, Wei |e verfasserin |4 aut | |
700 | 1 | |a Hu, Zhanli |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t IEEE transactions on medical imaging |d 1982 |g 43(2024), 4 vom: 23. Apr., Seite 1554-1567 |w (DE-627)NLM082855269 |x 1558-254X |7 nnns |
773 | 1 | 8 | |g volume:43 |g year:2024 |g number:4 |g day:23 |g month:04 |g pages:1554-1567 |
856 | 4 | 0 | |u http://dx.doi.org/10.1109/TMI.2023.3342809 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 43 |j 2024 |e 4 |b 23 |c 04 |h 1554-1567 |