OIF-Net : An Optical Flow Registration-Based PET/MR Cross-Modal Interactive Fusion Network for Low-Count Brain PET Image Denoising

The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Thus, improving the quality of low-count images with image postprocessing algorithms to achieve better clinical diagnoses has attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfactory results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, namely, OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating the two extracted features from these two modalities as an intermediate fusion method, the proposed cross-modal feature fusion module (CM-FFM) adopts cross-attention at multiple feature levels and greatly improves the fusion of the two modalities' features. Extensive experimental assessments conducted on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods.
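The abstract describes the CM-FFM as applying cross-attention between PET and MR features at multiple feature levels. The paper's exact module layout is not given in the abstract, so the following is only a minimal NumPy sketch of generic scaled dot-product cross-attention in which low-count PET features query MR guidance features; the names (`cross_attention`, `pet_feat`, `mr_feat`) and the absence of learned projections are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_feat, mr_feat):
    """Scaled dot-product cross-attention (hypothetical simplification).

    pet_feat: (N, d) PET feature vectors acting as queries.
    mr_feat:  (M, d) MR feature vectors acting as keys and values.
    Returns:  (N, d) PET features re-expressed as attention-weighted
              combinations of the MR guidance features.
    """
    d = pet_feat.shape[-1]
    scores = pet_feat @ mr_feat.T / np.sqrt(d)   # (N, M) similarity map
    weights = softmax(scores, axis=-1)           # attend over MR positions
    return weights @ mr_feat                     # fused (N, d) output
```

In a real network this would use learned query/key/value projections and run per feature level, with the fused output fed back into the denoising stream; the sketch only shows the attention arithmetic itself.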

Media type:

E-article

Year of publication:

2024

Published:

2024

Contained in:

To the complete record - volume:43

Contained in:

IEEE Transactions on Medical Imaging - 43(2024), 4, 23 Apr., pages 1554-1567

Language:

English

Contributors:

Fu, Minghan [Author]
Zhang, Na [Author]
Huang, Zhenxing [Author]
Zhou, Chao [Author]
Zhang, Xu [Author]
Yuan, Jianmin [Author]
He, Qiang [Author]
Yang, Yongfeng [Author]
Zheng, Hairong [Author]
Liang, Dong [Author]
Wu, Fang-Xiang [Author]
Fan, Wei [Author]
Hu, Zhanli [Author]

Links:

Full text

Topics:

Journal Article

Notes:

Date Completed 04.04.2024

Date Revised 04.04.2024

published: Print-Electronic

Citation Status MEDLINE

DOI:

10.1109/TMI.2023.3342809

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM365867993