The Price of Explainability in Machine Learning Models for 100-Day Readmission Prediction in Heart Failure: Retrospective, Comparative, Machine Learning Study

© Amira Soliman, Björn Agvall, Kobra Etminani, Omar Hamed, Markus Lingman. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 27.10.2023.

BACKGROUND: Sensitive and interpretable machine learning (ML) models can provide valuable assistance to clinicians in managing patients with heart failure (HF) at discharge by identifying individual factors associated with a high risk of readmission. In this cohort study, we delve into the factors driving the potential utility of classification models as decision support tools for predicting readmissions in patients with HF.

OBJECTIVE: The primary objective of this study is to assess the trade-off between using deep learning (DL) and traditional ML models to identify the risk of 100-day readmissions in patients with HF. Additionally, the study aims to provide explanations for the model predictions by highlighting important features both on a global scale across the patient cohort and on a local level for individual patients.

METHODS: The retrospective data for this study were obtained from the Regional Health Care Information Platform in Region Halland, Sweden. The study cohort consisted of patients diagnosed with HF who were over 40 years old and had been hospitalized at least once between 2017 and 2019. Data analysis encompassed the period from January 1, 2017, to December 31, 2019. Two ML models were developed and validated to predict 100-day readmissions, with a focus on the explainability of the models' decisions. These models were based on decision trees and a recurrent neural architecture, respectively. Model explainability was obtained using an ML explainer. The predictive performance of these models was compared against 2 risk assessment tools using multiple performance metrics.
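To make the pipeline described above concrete, the following is a minimal sketch of a decision-tree-based classifier paired with a post hoc explainer that yields both global (cohort-level) and local (per-patient) feature attributions. The library choices (scikit-learn and SHAP), the synthetic features, and all variable names are assumptions for illustration only; the abstract states merely that a tree-based model and an ML explainer were used. Tree ensembles are a common choice in this setting because exact SHAP attributions can be computed efficiently for them, which is one reason explainability need not come at a large cost in performance.

    # Hypothetical sketch: tree-based readmission classifier with post hoc explanations.
    # Library choices (scikit-learn, shap) and feature names are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Toy stand-in for tabular admission data (features chosen for illustration only).
    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "age": rng.integers(41, 95, size=1000),
        "prior_admissions": rng.poisson(2, size=1000),
        "ejection_fraction": rng.uniform(15, 60, size=1000),
        "nt_probnp": rng.lognormal(7, 1, size=1000),
    })
    y = rng.integers(0, 2, size=1000)  # 1 = readmitted within 100 days (synthetic labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # Decision-tree-based classifier, standing in for the "traditional" model family.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # TreeExplainer yields per-patient (local) attributions; averaging their
    # magnitudes gives a global ranking of features across the cohort.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    global_importance = np.abs(shap_values).mean(axis=0)
    print("Global feature ranking:", dict(zip(X.columns, global_importance.round(3))))
    print("Local explanation, first test patient:", dict(zip(X.columns, shap_values[0].round(3))))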

RESULTS: The retrospective data set included a total of 15,612 admissions, and within these admissions, readmission occurred in 5597 cases, representing a readmission rate of 35.85%. It is noteworthy that a traditional and explainable model, informed by clinical knowledge, exhibited performance comparable to the DL model and surpassed conventional scoring methods in predicting readmission among patients with HF. The evaluation of predictive model performance was based on commonly used metrics, with an area under the precision-recall curve of 66% for the deep model and 68% for the traditional model on the holdout data set. Importantly, the explanations provided by the traditional model offer actionable insights that have the potential to enhance care planning.
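For readers unfamiliar with the metric, the area under the precision-recall curve reported above can be computed from holdout predictions as in the short sketch below; the labels and risk scores are made up for illustration and are not the study's data or evaluation code.

    # Minimal sketch of the AUPRC metric on a holdout set (illustrative values only).
    from sklearn.metrics import average_precision_score

    y_true = [0, 1, 1, 0, 1, 0, 0, 1]                    # observed 100-day readmissions
    y_score = [0.2, 0.8, 0.6, 0.3, 0.7, 0.1, 0.4, 0.9]   # model risk scores

    # average_precision_score summarizes the precision-recall curve as a weighted
    # mean of precision values across recall thresholds ("area under the PR curve").
    print(f"AUPRC: {average_precision_score(y_true, y_score):.2f}")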

CONCLUSIONS: This study found that a widely used deep prediction model did not outperform an explainable ML model when predicting readmissions among patients with HF. The results suggest that model transparency does not necessarily compromise performance, which could facilitate the clinical adoption of such models.

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

Journal of Medical Internet Research - 25 (2023), 27 Oct., page e46934

Language:

English

Contributors:

Soliman, Amira [Author]
Agvall, Björn [Author]
Etminani, Kobra [Author]
Hamed, Omar [Author]
Lingman, Markus [Author]

Links:

Full text

Topics:

Deep learning
Explainable artificial intelligence
Heart failure
Journal Article
Machine learning
Readmission prediction
Research Support, Non-U.S. Gov't
Shallow learning

Notes:

Date Completed 30.10.2023

Date Revised 13.11.2023

published: Electronic

Citation Status MEDLINE

DOI:

10.2196/46934

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM363815945