Reliable interpretability of biology-inspired deep neural networks

Abstract:

Deep neural networks display impressive performance but suffer from limited interpretability. Biology-inspired deep learning, where the architecture of the computational graph is based on biological knowledge, enables a unique form of interpretability: real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. In such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Comparable analyses are missing for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variability of interpretations and susceptibility to knowledge biases, and identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which leads to more specific interpretations. In summary, our study reveals the broad importance of methods to ensure robust and bias-aware interpretability in biology-inspired deep learning.
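To illustrate the core idea described in the abstract, the following is a minimal sketch, not the authors' code: a hidden layer whose nodes correspond to pathways, with a binary gene-to-pathway membership mask restricting which connections exist, plus a simple ablation-based importance score per hidden node. The gene and pathway names, the toy membership table, and the ablation score are hypothetical stand-ins; the study's actual model (P-NET) and attribution method may differ.

```python
# Sketch of a biology-inspired layer: hidden nodes map to pathways,
# and a binary membership mask keeps only biologically annotated
# gene-to-pathway connections. All names and values are toy examples.
import numpy as np

rng = np.random.default_rng(0)

genes = ["TP53", "PTEN", "BRCA1", "KRAS"]
pathways = ["p53 signaling", "PI3K/AKT"]

# mask[i, j] = 1 if gene i is annotated to pathway j (hypothetical).
mask = np.array([
    [1, 0],   # TP53  -> p53 signaling
    [0, 1],   # PTEN  -> PI3K/AKT
    [1, 0],   # BRCA1 -> p53 signaling
    [0, 1],   # KRAS  -> PI3K/AKT
], dtype=float)

# Dense weights multiplied by the mask: only annotated connections
# carry signal, which is what makes each hidden node interpretable
# as one pathway.
weights = rng.normal(size=mask.shape) * mask

def forward(x):
    """x: (n_samples, n_genes), e.g. binarized mutation calls."""
    return np.tanh(x @ weights)

def node_importance(x, readout):
    """Ablation-style importance: mean absolute change in the model
    output when one pathway node is zeroed out (a stand-in for the
    attribution methods used in practice)."""
    base = forward(x) @ readout
    scores = []
    for j in range(len(pathways)):
        h = forward(x)
        h[:, j] = 0.0                      # ablate pathway node j
        scores.append(np.mean(np.abs(base - h @ readout)))
    return dict(zip(pathways, scores))

x = rng.integers(0, 2, size=(8, len(genes))).astype(float)
readout = rng.normal(size=len(pathways))
print(node_importance(x, readout))
```

Repeating such a ranking across re-trained models is one way to probe the robustness issue the abstract refers to: if the ordering of pathway nodes changes between runs, node-level interpretations are not stable.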

Media type:

Preprint

Year of publication:

2024

Published:

2024

Contained in:

bioRxiv.org (2024), 23 Apr. 2024

Language:

English

Contributors:

Esser-Skala, Wolfgang [Author]
Fortelny, Nikolaus [Author]

Links:

Full text [license required]
Full text [free access]

Subjects:

570
Biology

DOI:

10.1101/2023.07.17.549297

Funding institution / project title:

PPN (catalogue ID):

XBI040243524