Integrating Domain Knowledge Into Deep Networks for Lung Ultrasound With Applications to COVID-19
Lung ultrasound (LUS) is a cheap, safe and non-invasive imaging modality that can be performed at the patient's bedside. However, to date LUS is not widely adopted due to a lack of trained personnel required for interpreting the acquired LUS frames. In this work we propose a framework for training deep artificial neural networks for interpreting LUS, which may promote broader use of LUS. When using LUS to evaluate a patient's condition, both anatomical phenomena (e.g., the pleural line, presence of consolidations) and sonographic artifacts (such as A- and B-lines) are of importance. In our framework, we integrate domain knowledge into deep neural networks by inputting anatomical features and LUS artifacts in the form of additional channels, containing pleural and vertical-artifact masks, along with the raw LUS frames. By explicitly supplying this domain knowledge, standard off-the-shelf neural networks can be rapidly and efficiently finetuned to accomplish various tasks on LUS data, such as frame classification or semantic segmentation. Our framework allows for a unified treatment of LUS frames captured by either convex or linear probes. We evaluated our proposed framework on the task of COVID-19 severity assessment using the ICLUS dataset. In particular, we finetuned simple image classification models to predict a per-frame COVID-19 severity score. We also trained a semantic segmentation model to predict per-pixel COVID-19 severity annotations. Using the combined raw LUS frames and the detected lines for both tasks, our off-the-shelf models performed better than complicated models specifically designed for these tasks, exemplifying the efficacy of our framework.
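The channel-stacking idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the two mask extractors are placeholders (the actual pleural-line and vertical-artifact detectors from the paper are not reproduced), and all names and shapes are illustrative assumptions.

```python
import numpy as np

H, W = 224, 224  # assumed input resolution for an off-the-shelf image network

def pleural_line_mask(frame):
    # Placeholder: a real detector of the pleural line would go here.
    return np.zeros_like(frame)

def vertical_artifact_mask(frame):
    # Placeholder: a real detector of vertical (B-line) artifacts would go here.
    return np.zeros_like(frame)

def build_input(frame):
    """Stack the raw LUS frame with the two domain-knowledge masks
    into a single (3, H, W) array, so a standard 3-channel image
    network can be finetuned on it directly."""
    channels = [frame, pleural_line_mask(frame), vertical_artifact_mask(frame)]
    return np.stack(channels, axis=0).astype(np.float32)

frame = np.random.rand(H, W)  # stand-in for one grayscale LUS frame
x = build_input(frame)
print(x.shape)  # (3, 224, 224)
```

Because the masks occupy the same channel slots that RGB images would, a pretrained classifier or segmentation backbone can be finetuned on such inputs without architectural changes, which is the practical appeal of this form of domain-knowledge injection.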
Media type: E-Article
Year of publication: 2022
Published: 2022
Contained in: Overall record - volume 41
Contained in: IEEE transactions on medical imaging - 41(2022), 3, 04 March, pages 571-581
Language: English
Contributors: Frank, Oz [author]
Notes: Date Completed 08.03.2022; Date Revised 16.07.2022; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1109/TMI.2021.3117246
PPN (catalog ID): NLM331477092
LEADER 01000naa a22002652 4500
001 NLM331477092
003 DE-627
005 20231225213541.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7  |a 10.1109/TMI.2021.3117246 |2 doi
028 5 2 |a pubmed24n1104.xml
035    |a (DE-627)NLM331477092
035    |a (NLM)34606447
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Frank, Oz |e verfasserin |4 aut
245 1 0 |a Integrating Domain Knowledge Into Deep Networks for Lung Ultrasound With Applications to COVID-19
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 08.03.2022
500    |a Date Revised 16.07.2022
500    |a published: Print-Electronic
500    |a Citation Status MEDLINE
520    |a Lung ultrasound (LUS) is a cheap, safe and non-invasive imaging modality that can be performed at patient bed-side. However, to date LUS is not widely adopted due to lack of trained personnel required for interpreting the acquired LUS frames. In this work we propose a framework for training deep artificial neural networks for interpreting LUS, which may promote broader use of LUS. When using LUS to evaluate a patient's condition, both anatomical phenomena (e.g., the pleural line, presence of consolidations), as well as sonographic artifacts (such as A- and B-lines) are of importance. In our framework, we integrate domain knowledge into deep neural networks by inputting anatomical features and LUS artifacts in the form of additional channels containing pleural and vertical artifacts masks along with the raw LUS frames. By explicitly supplying this domain knowledge, standard off-the-shelf neural networks can be rapidly and efficiently finetuned to accomplish various tasks on LUS data, such as frame classification or semantic segmentation. Our framework allows for a unified treatment of LUS frames captured by either convex or linear probes. We evaluated our proposed framework on the task of COVID-19 severity assessment using the ICLUS dataset. In particular, we finetuned simple image classification models to predict per-frame COVID-19 severity score. We also trained a semantic segmentation model to predict per-pixel COVID-19 severity annotations. Using the combined raw LUS frames and the detected lines for both tasks, our off-the-shelf models performed better than complicated models specifically designed for these tasks, exemplifying the efficacy of our framework
650  4 |a Journal Article
650  4 |a Research Support, Non-U.S. Gov't
700 1  |a Schipper, Nir |e verfasserin |4 aut
700 1  |a Vaturi, Mordehay |e verfasserin |4 aut
700 1  |a Soldati, Gino |e verfasserin |4 aut
700 1  |a Smargiassi, Andrea |e verfasserin |4 aut
700 1  |a Inchingolo, Riccardo |e verfasserin |4 aut
700 1  |a Torri, Elena |e verfasserin |4 aut
700 1  |a Perrone, Tiziano |e verfasserin |4 aut
700 1  |a Mento, Federico |e verfasserin |4 aut
700 1  |a Demi, Libertario |e verfasserin |4 aut
700 1  |a Galun, Meirav |e verfasserin |4 aut
700 1  |a Eldar, Yonina C |e verfasserin |4 aut
700 1  |a Bagon, Shai |e verfasserin |4 aut
773 0 8 |i Enthalten in |t IEEE transactions on medical imaging |d 1982 |g 41(2022), 3 vom: 04. März, Seite 571-581 |w (DE-627)NLM082855269 |x 1558-254X |7 nnns
773 1 8 |g volume:41 |g year:2022 |g number:3 |g day:04 |g month:03 |g pages:571-581
856 4 0 |u http://dx.doi.org/10.1109/TMI.2021.3117246 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d 41 |j 2022 |e 3 |b 04 |c 03 |h 571-581