Explainable deep neural networks for novel viral genome prediction
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021.
Viral infections cause a wide variety of human diseases, including cancer and COVID-19. Viruses invade host cells and associate with host molecules, potentially disrupting normal host functions and leading to fatal disease. Novel viral genome prediction is crucial for understanding complex viral diseases such as AIDS and Ebola. While most existing computational techniques classify viral genomes, their classification efficiency depends solely on the structural features extracted. State-of-the-art DNN models achieve excellent performance through automatic extraction of classification features, but their explainability is relatively poor. The proposed CNN- and CNN-LSTM-based methods (EdeepVPP and EdeepVPP-hybrid) automatically extract features during model training for viral prediction. EdeepVPP also provides model interpretability, using its learned filters to extract the patterns most characteristic of viral genomes. It is an interpretable CNN model that extracts vital, biologically relevant patterns (features) from feature maps of viral sequences. The EdeepVPP-hybrid predictor outperforms all existing methods, achieving a mean AUC-ROC of 0.992 and AUC-PR of 0.990 on 19 human metagenomic contig experiment datasets under 10-fold cross-validation. We evaluate the ability of CNN filters to detect patterns through their high average activation values. To further assess the robustness of the EdeepVPP model, we perform leave-one-experiment-out cross-validation. The model can serve as a recommendation system for further analysis of raw sequences labeled 'unknown' by alignment-based methods. We show that our interpretable model can extract, through its learned filters, the patterns that are the most important features for predicting virus sequences.
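The abstract's core interpretability idea, reading a CNN filter's high activation positions as motif detections on a one-hot-encoded sequence, can be illustrated with a minimal sketch. This is not the authors' code: the motif, filter weights, and sequence below are hypothetical, and a real model would learn many such filters jointly.

```python
# Minimal sketch (assumed, not EdeepVPP's implementation): one-hot encode a
# DNA sequence and slide a single convolutional filter over it, reading the
# position of maximum activation as the "pattern" the filter detects.

def one_hot(seq):
    """Encode A/C/G/T as 4-dim one-hot vectors; unknown bases become zeros."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table.get(base, [0, 0, 0, 0]) for base in seq]

def filter_activations(encoded, weights):
    """Slide a (k x 4) filter over the encoded sequence and return the raw
    activation at every valid position (a 1-D feature map)."""
    k = len(weights)
    return [sum(weights[j][c] * encoded[i + j][c]
                for j in range(k) for c in range(4))
            for i in range(len(encoded) - k + 1)]

# A hypothetical filter whose weights "prefer" the motif GATC.
motif_filter = [[0, 0, 1, 0],   # G
                [1, 0, 0, 0],   # A
                [0, 0, 0, 1],   # T
                [0, 1, 0, 0]]   # C
seq = "TTGATCAA"
acts = filter_activations(one_hot(seq), motif_filter)
best = max(range(len(acts)), key=acts.__getitem__)
print(seq[best:best + 4], acts[best])  # prints "GATC 4": the filter fires on its motif
```

Averaging such activations over many sequences, as the abstract describes, is one way to rank which learned filters (and hence which sequence patterns) matter most for the viral/non-viral decision.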
Media type: E-Article
Year of publication: 2022
Published: 2022
Contained in: To the complete record - volume:52
Contained in: Applied intelligence (Dordrecht, Netherlands) - 52(2022), issue 3, pages 3002-3017
Language: English
Contributors: Dasari, Chandra Mohan [author]; Bhukya, Raju [author]
Topics: Convolution neural network
Notes: Date Revised 16.07.2022; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
DOI: 10.1007/s10489-021-02572-3
PPN (catalog ID): NLM333025008
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM333025008 | ||
003 | DE-627 | ||
005 | 20231225220804.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231225s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s10489-021-02572-3 |2 doi | |
028 | 5 | 2 | |a pubmed24n1110.xml |
035 | |a (DE-627)NLM333025008 | ||
035 | |a (NLM)34764607 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Dasari, Chandra Mohan |e verfasserin |4 aut | |
245 | 1 | 0 | |a Explainable deep neural networks for novel viral genome prediction |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Revised 16.07.2022 | ||
500 | |a published: Print-Electronic | ||
500 | |a Citation Status PubMed-not-MEDLINE | ||
520 | |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021. | ||
520 | |a Viral infections cause a wide variety of human diseases, including cancer and COVID-19. Viruses invade host cells and associate with host molecules, potentially disrupting normal host functions and leading to fatal disease. Novel viral genome prediction is crucial for understanding complex viral diseases such as AIDS and Ebola. While most existing computational techniques classify viral genomes, their classification efficiency depends solely on the structural features extracted. State-of-the-art DNN models achieve excellent performance through automatic extraction of classification features, but their explainability is relatively poor. The proposed CNN- and CNN-LSTM-based methods (EdeepVPP and EdeepVPP-hybrid) automatically extract features during model training for viral prediction. EdeepVPP also provides model interpretability, using its learned filters to extract the patterns most characteristic of viral genomes. It is an interpretable CNN model that extracts vital, biologically relevant patterns (features) from feature maps of viral sequences. The EdeepVPP-hybrid predictor outperforms all existing methods, achieving a mean AUC-ROC of 0.992 and AUC-PR of 0.990 on 19 human metagenomic contig experiment datasets under 10-fold cross-validation. We evaluate the ability of CNN filters to detect patterns through their high average activation values. To further assess the robustness of the EdeepVPP model, we perform leave-one-experiment-out cross-validation. The model can serve as a recommendation system for further analysis of raw sequences labeled 'unknown' by alignment-based methods. We show that our interpretable model can extract, through its learned filters, the patterns that are the most important features for predicting virus sequences | ||
650 | 4 | |a Journal Article | |
650 | 4 | |a Convolution neural network | |
650 | 4 | |a Interpretable | |
650 | 4 | |a Learned filters | |
650 | 4 | |a Motif | |
650 | 4 | |a Splice sites | |
650 | 4 | |a Splicing | |
700 | 1 | |a Bhukya, Raju |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Applied intelligence (Dordrecht, Netherlands) |d 2020 |g 52(2022), 3 vom: 04., Seite 3002-3017 |w (DE-627)NLM333024370 |x 1573-7497 |7 nnns |
773 | 1 | 8 | |g volume:52 |g year:2022 |g number:3 |g day:04 |g pages:3002-3017 |
856 | 4 | 0 | |u http://dx.doi.org/10.1007/s10489-021-02572-3 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 52 |j 2022 |e 3 |b 04 |h 3002-3017 |