TT U-Net: Temporal Transformer U-Net for Motion Artifact Reduction Using PAD (Pseudo All-Phase Clinical-Dataset) in Cardiac CT
Involuntary motion of the heart remains a challenge for cardiac computed tomography (CT) imaging. Although electrocardiogram (ECG) gating is widely adopted to perform CT scans at the quasi-quiescent cardiac phase, motion-induced artifacts remain unavoidable for patients with high heart rates or irregular rhythms. Dynamic cardiac CT, which provides functional information about the heart, suffers even more severe motion artifacts. In this paper, we develop a deep learning-based framework for motion artifact reduction in dynamic cardiac CT. First, we build a PAD (Pseudo All-phase clinical-Dataset) based on a whole-heart motion model and single-phase cardiac CT images. This dataset provides dynamic CT images with realistic-looking motion artifacts that help to develop data-driven approaches. Second, we formulate motion artifact reduction as a video deblurring task, in keeping with its dynamic nature. A novel TT U-Net (Temporal Transformer U-Net) is proposed to extract spatiotemporal features for better motion artifact reduction. The self-attention mechanism along the temporal dimension effectively encodes motion information and thus aids image recovery. Experiments show that the TT U-Net trained on the proposed PAD performs well on clinical CT scans, which substantiates the effectiveness and good generalization ability of our method. The source code, trained models, and dynamic demo will be available at https://github.com/ivy9092111111/TT-U-Net.
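The authors' implementation is in the repository linked above. Purely as an illustration of what "self-attention along the temporal dimension" means in a video setting (the per-pixel formulation, shapes, and weight matrices here are illustrative assumptions, not taken from the paper), a minimal NumPy sketch:

```python
import numpy as np

def temporal_self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product attention over the frame axis.

    x: (T, N, C) array — T frames, N spatial positions, C channels.
    Each spatial position attends only to its own trajectory across
    frames, so motion information is encoded along the temporal axis.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # each (T, N, C)
    # Move the spatial axis first so attention runs over time: (N, T, C)
    q, k, v = (a.transpose(1, 0, 2) for a in (q, k, v))
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])  # (N, T, T)
    # Numerically stable softmax over the last (temporal) axis
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    out = w @ v                                # (N, T, C)
    return out.transpose(1, 0, 2)              # back to (T, N, C)

rng = np.random.default_rng(0)
T, N, C = 5, 16, 8                             # 5 frames, 16 pixels, 8 channels
x = rng.standard_normal((T, N, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
y = temporal_self_attention(x, Wq, Wk, Wv)
print(y.shape)                                 # (5, 16, 8)
```

In the paper's architecture this kind of temporal mixing sits inside a U-Net; the sketch only shows why attending across frames can aggregate information from less motion-corrupted phases of the cardiac cycle.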
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: Full record - volume:42
Contained in: IEEE transactions on medical imaging - 42(2023), 12, 31 Dec., pages 3805-3816
Language: English
Contributors: Deng, Ziheng [author]
Notes: Date Completed 01.12.2023; Date Revised 01.12.2023; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1109/TMI.2023.3310933
PPN (catalog ID): NLM361493266
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM361493266 | ||
003 | DE-627 | ||
005 | 20231226085244.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231226s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/TMI.2023.3310933 |2 doi | |
028 | 5 | 2 | |a pubmed24n1204.xml |
035 | |a (DE-627)NLM361493266 | ||
035 | |a (NLM)37651491 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Deng, Ziheng |e verfasserin |4 aut | |
245 | 1 | 0 | |a TT U-Net |b Temporal Transformer U-Net for Motion Artifact Reduction Using PAD (Pseudo All-Phase Clinical-Dataset) in Cardiac CT |
264 | 1 | |c 2023 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 01.12.2023 | ||
500 | |a Date Revised 01.12.2023 | ||
500 | |a published: Print-Electronic | ||
500 | |a Citation Status MEDLINE | ||
520 | |a Involuntary motion of the heart remains a challenge for cardiac computed tomography (CT) imaging. Although the electrocardiogram (ECG) gating strategy is widely adopted to perform CT scans at the quasi-quiescent cardiac phase, motion-induced artifacts are still unavoidable for patients with high heart rates or irregular rhythms. Dynamic cardiac CT, which provides functional information of the heart, suffers even more severe motion artifacts. In this paper, we develop a deep learning based framework for motion artifact reduction in dynamic cardiac CT. First, we build a PAD (Pseudo All-phase clinical-Dataset) based on a whole-heart motion model and single-phase cardiac CT images. This dataset provides dynamic CT images with realistic-looking motion artifacts that help to develop data-driven approaches. Second, we formulate the problem of motion artifact reduction as a video deblurring task according to its dynamic nature. A novel TT U-Net (Temporal Transformer U-Net) is proposed to excavate the spatiotemporal features for better motion artifact reduction. The self-attention mechanism along the temporal dimension effectively encodes motion information and thus aids image recovery. Experiments show that the TT U-Net trained on the proposed PAD performs well on clinical CT scans, which substantiates the effectiveness and fine generalization ability of our method. The source code, trained models, and dynamic demo will be available at https://github.com/ivy9092111111/TT-U-Net | ||
650 | 4 | |a Journal Article | |
700 | 1 | |a Zhang, Weikang |e verfasserin |4 aut | |
700 | 1 | |a Chen, Kaile |e verfasserin |4 aut | |
700 | 1 | |a Zhou, Yufu |e verfasserin |4 aut | |
700 | 1 | |a Tian, Jiao |e verfasserin |4 aut | |
700 | 1 | |a Quan, Guotao |e verfasserin |4 aut | |
700 | 1 | |a Zhao, Jun |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t IEEE transactions on medical imaging |d 1982 |g 42(2023), 12 vom: 31. Dez., Seite 3805-3816 |w (DE-627)NLM082855269 |x 1558-254X |7 nnns |
773 | 1 | 8 | |g volume:42 |g year:2023 |g number:12 |g day:31 |g month:12 |g pages:3805-3816 |
856 | 4 | 0 | |u http://dx.doi.org/10.1109/TMI.2023.3310933 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 42 |j 2023 |e 12 |b 31 |c 12 |h 3805-3816 |