Explainable deep neural networks for novel viral genome prediction

Abstract Viral infections cause a wide variety of human diseases, including cancer and COVID-19. Viruses invade host cells and interact with host molecules, potentially disrupting normal host function and leading to fatal diseases. Predicting novel viral genomes is crucial for understanding complex viral diseases such as AIDS and Ebola. While most existing computational techniques classify viral genomes, classification efficiency depends heavily on the structural features extracted. State-of-the-art DNN models achieve excellent performance through automatic extraction of classification features, but their explainability is relatively poor. The proposed CNN- and CNN-LSTM-based methods (EdeepVPP and EdeepVPP-hybrid) extract features automatically during model training for viral prediction. EdeepVPP also provides model interpretability, using its learned filters to extract the patterns most indicative of viral genomes. It is an interpretable CNN model that extracts biologically relevant patterns (features) from the feature maps of viral sequences. The EdeepVPP-hybrid predictor outperforms all existing methods, achieving a mean AUC-ROC of 0.992 and an AUC-PR of 0.990 on 19 human metagenomic contig experiment datasets under 10-fold cross-validation. We evaluate the ability of CNN filters to detect patterns via their high average activation values. To further assess the robustness of the EdeepVPP model, we perform leave-one-experiment-out cross-validation. The model can serve as a recommendation system for further analysis of raw sequences labeled 'unknown' by alignment-based methods. We show that our interpretable model can, through its learned filters, extract the patterns that are most important for predicting viral sequences.
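The abstract's interpretability idea (scoring learned convolution filters by their activation values over sequence positions) can be illustrated with a minimal sketch. This is not the authors' EdeepVPP code; the one-hot encoding, the hand-crafted "GATC" motif filter, and the function names are illustrative assumptions, using only NumPy.

```python
import numpy as np

def one_hot(seq):
    """One-hot encode a DNA sequence into a (len, 4) matrix (columns A, C, G, T)."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        mat[i, mapping[base]] = 1.0
    return mat

def filter_activations(seq, filt):
    """Slide a (k, 4) convolution filter along the sequence and return the
    ReLU activation at each valid position; high activations mark where
    the filter's pattern (motif) occurs."""
    x = one_hot(seq)
    k = filt.shape[0]
    acts = np.array([np.sum(x[i:i + k] * filt)
                     for i in range(len(seq) - k + 1)])
    return np.maximum(acts, 0.0)

# Toy filter tuned to the motif "GATC": weight 1 on the matching base at
# each position, 0 elsewhere (a trained CNN would learn such weights).
motif_filter = one_hot("GATC")

acts = filter_activations("TTGATCAA", motif_filter)
print(int(np.argmax(acts)), float(acts.max()))
```

In a trained CNN the same scan runs for every learned filter; filters with consistently high average activations on viral sequences are the ones the paper reports as capturing biologically relevant patterns.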

Media type:

E-article

Year of publication:

2021

Published:

2021

Contained in:

To the complete record - volume: 52

Contained in:

Applied Intelligence - 52(2021), issue 3, 25 June, pages 3002-3017

Language:

English

Contributors:

Dasari, Chandra Mohan [author]
Bhukya, Raju [author]

Links:

Full text [subject to license]

BKL:

54.72$jArtificial intelligence

30.20$jNonlinear dynamics

Subjects:

Convolution neural network
Interpretable
Learned filters
Motif
Splice sites
Splicing

RVK:

RVK classification

Notes:

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021

DOI:

10.1007/s10489-021-02572-3

Funding:

Funding institution / project title:

PPN (catalog ID):

OLC2129367836