Reliable interpretability of biology-inspired deep neural networks

© 2023. Springer Nature Limited.

Deep neural networks display impressive performance but suffer from limited interpretability. Biology-inspired deep learning, where the architecture of the computational graph is based on biological knowledge, enables unique interpretability where real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. In such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Similar studies are missing for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variability of interpretations and susceptibility to knowledge biases, and identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which leads to more specific interpretations. In summary, our study reveals the broad importance of methods to ensure robust and bias-aware interpretability in biology-inspired deep learning.
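
Illustrative sketch (not code from the article): one way to quantify the robustness of node-level interpretations across repeated trainings is to compare node-importance rankings between runs. The function train_and_explain below is a hypothetical stand-in for training a biology-inspired model such as P-NET and returning one importance score per hidden node; here it merely simulates scores, and robustness is summarized as the mean pairwise Spearman rank correlation across runs.

    # Hypothetical sketch: robustness of node-level interpretations across
    # repeated trainings, measured as mean pairwise Spearman rank correlation.
    from itertools import combinations

    import numpy as np
    from scipy.stats import spearmanr


    def train_and_explain(seed: int, n_nodes: int = 100) -> np.ndarray:
        """Stand-in for training a model and scoring each hidden node.

        The scores are simulated here; a real analysis would train the
        biology-inspired model with this seed and compute node importances.
        """
        rng = np.random.default_rng(seed)
        return rng.random(n_nodes)


    def interpretation_robustness(n_runs: int = 10) -> float:
        """Mean pairwise Spearman correlation of node importances over runs."""
        importances = [train_and_explain(seed) for seed in range(n_runs)]
        pairwise = [spearmanr(a, b)[0] for a, b in combinations(importances, 2)]
        return float(np.mean(pairwise))


    print(f"Mean pairwise rank correlation: {interpretation_robustness():.3f}")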

Media type:

Electronic article

Year of publication:

2023

Published:

2023

Contained in:

NPJ systems biology and applications - 9(2023), 1, from: 10 Oct., page 50

Language:

English

Contributors:

Esser-Skala, Wolfgang [Author]
Fortelny, Nikolaus [Author]

Links:

Full text

Subjects:

Journal Article

Notes:

Date Completed 01.11.2023

Date Revised 21.11.2023

published: Electronic

Citation Status MEDLINE

DOI:

10.1038/s41540-023-00310-8

Funding:

Funding institution / Project title:

PPN (catalog ID):

NLM363099093