An Explainable-AI approach for Diagnosis of COVID-19 using MALDI-ToF Mass Spectrometry
The severe acute respiratory syndrome coronavirus type-2 (SARS-CoV-2) caused a global pandemic and immensely affected the global economy. Accurate, cost-effective, and quick tests have proven essential in identifying infected people and mitigating the spread. Recently, multiple alternative platforms for testing coronavirus disease 2019 (COVID-19) have been published that show high agreement with current gold-standard real-time polymerase chain reaction (RT-PCR) results. These new methods do away with nasopharyngeal (NP) swabs, eliminate the need for complicated reagents, and reduce the burden on RT-PCR test reagent supply. In the present work, we have designed an artificial intelligence-based (AI) testing method to provide confidence in the results. Current AI applications for COVID-19 studies often lack a biological foundation in the decision-making process, and our AI approach is one of the earliest to leverage explainable AI (X-AI) algorithms for COVID-19 diagnosis using mass spectrometry. Here, we have employed X-AI to explain the decision-making process on a local (per-sample) and global (all samples) basis, underscored by biologically relevant features. We evaluated our technique with data extracted from human gargle samples and achieved a testing accuracy of 94.12%. Such techniques would strengthen the relationship between AI and clinical diagnostics by providing biomedical researchers and healthcare workers with trustworthy and, most importantly, explainable test results.
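The local (per-sample) versus global (all samples) distinction in the abstract can be illustrated with a minimal, hypothetical sketch — this is not the authors' actual pipeline, and the feature names, weights, and sample values below are invented. Assuming a linear classifier over mass-spectrometry peak intensities, a local explanation attributes each feature's weighted contribution for one sample, and a global explanation averages the magnitude of those contributions across all samples:

```python
# Hypothetical sketch of local vs. global feature explanations for a
# linear classifier over mass-spectrometry peak intensities.
# All names, weights, and samples here are invented for illustration.

def local_explanation(weights, sample):
    """Per-sample ('local') contribution of each feature: weight * value."""
    return {f: w * sample[f] for f, w in weights.items()}

def global_explanation(weights, samples):
    """Global importance: mean absolute local contribution across samples."""
    totals = {f: 0.0 for f in weights}
    for s in samples:
        for f, c in local_explanation(weights, s).items():
            totals[f] += abs(c)
    return {f: t / len(samples) for f, t in totals.items()}

# Invented example: two spectral peaks as features.
weights = {"peak_2kDa": 0.8, "peak_11kDa": -0.3}
samples = [
    {"peak_2kDa": 1.0, "peak_11kDa": 2.0},
    {"peak_2kDa": 0.5, "peak_11kDa": 4.0},
]

print(local_explanation(weights, samples[0]))  # per-sample contributions
print(global_explanation(weights, samples))    # ≈ {'peak_2kDa': 0.6, 'peak_11kDa': 0.9}
```

A real system would compute such attributions with a model-agnostic X-AI method rather than raw linear weights, but the local/global contrast is the same: one dictionary per sample versus one aggregated dictionary for the dataset.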
Media type: Preprint
Year of publication: 2021
Published: 2021
Contained in: arXiv.org - (2021), 28 Sept.
Language: English
Contributors: Seethi, Venkata Devesh Reddy [author]
Links: Full text [open access]
Subjects: 000
PPN (catalogue ID): XAR032687273
LEADER 01000caa a22002652 4500
001    XAR032687273
003    DE-627
005    20230525080538.0
007    cr uuu---uuuuu
008    210930s2021 xx |||||o 00| ||eng c
035    |a (DE-627)XAR032687273
035    |a (arXiv)2109.14099
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Seethi, Venkata Devesh Reddy |e verfasserin |4 aut
245 10 |a An Explainable-AI approach for Diagnosis of COVID-19 using MALDI-ToF Mass Spectrometry
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a The severe acute respiratory syndrome coronavirus type-2 (SARS-CoV-2) caused a global pandemic and immensely affected the global economy. Accurate, cost-effective, and quick tests have proven substantial in identifying infected people and mitigating the spread. Recently, multiple alternative platforms for testing coronavirus disease 2019 (COVID-19) have been published that show high agreement with current gold standard real-time polymerase chain reaction (RT-PCR) results. These new methods do away with nasopharyngeal (NP) swabs, eliminate the need for complicated reagents, and reduce the burden on RT-PCR test reagent supply. In the present work, we have designed an artificial intelligence-based (AI) testing method to provide confidence in the results. Current AI applications for COVID-19 studies often lack a biological foundation in the decision-making process, and our AI approach is one of the earliest to leverage explainable AI (X-AI) algorithms for COVID-19 diagnosis using mass spectrometry. Here, we have employed X-AI to explain the decision-making process on a local (per-sample) and global (all samples) basis underscored by biologically relevant features. We evaluated our technique with data extracted from human gargle samples and achieved a testing accuracy of 94.12%. Such techniques would strengthen the relationship between AI and clinical diagnostics by providing biomedical researchers and healthcare workers with trustworthy and, most importantly, explainable test results
650  4 |a Computer Science - Machine Learning |7 (dpeaa)DE-84
650  4 |a Computer Science - Artificial Intelligence |7 (dpeaa)DE-84
650  4 |a 000 |7 (dpeaa)DE-84
700 1  |a LaCasse, Zane |4 aut
700 1  |a Chivte, Prajkta |4 aut
700 1  |a Bland, Joshua |4 aut
700 1  |a Kadkol, Shrihari S. |4 aut
700 1  |a Gaillard, Elizabeth R. |4 aut
700 1  |a Bharti, Pratool |4 aut
700 1  |a Alhoori, Hamed |4 aut
773 08 |i Enthalten in |t arXiv.org |g (2021) vom: 28. Sept.
773 18 |g year:2021 |g day:28 |g month:09
856 40 |u https://arxiv.org/abs/2109.14099 |z kostenfrei |3 Volltext
912    |a GBV_XAR
951    |a AR
952    |j 2021 |b 28 |c 09