Unmasking bias in artificial intelligence : a systematic review of bias detection and mitigation strategies in electronic health record-based models

© The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.

OBJECTIVES: Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, addressing bias in AI, which risks worsening healthcare disparities, cannot be overlooked. This study reviews methods to handle various biases in AI models developed using EHR data.

MATERIALS AND METHODS: We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 1, 2010, and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout AI model development, and analyzed metrics for bias assessment.

RESULTS: Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none have been deployed in real-world healthcare settings. Five studies concentrated on the detection of implicit and algorithmic biases employing fairness metrics like statistical parity, equal opportunity, and predictive equity. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques like resampling and reweighting.
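To make the metrics and mitigations named above concrete, the following is a minimal Python sketch, not taken from any of the reviewed studies: it computes statistical parity and equal opportunity differences for a binary classifier with a binary sensitive attribute, and derives inverse-frequency sample weights as one simple form of reweighting. All function names and the toy data are hypothetical.

```python
# Minimal, illustrative sketch only -- not code from any of the reviewed studies.
import numpy as np


def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()


def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(1) - tpr(0)


def inverse_frequency_weights(y_true, group):
    """Reweighting: each (group, label) stratum gets a weight inversely
    proportional to its frequency, so under-represented strata count more
    when the model is retrained with these sample weights."""
    keys = list(zip(group.tolist(), y_true.tolist()))
    counts = {k: keys.count(k) for k in set(keys)}
    n = len(keys)
    return np.array([n / (len(counts) * counts[k]) for k in keys])


# Toy data: hypothetical predictions and a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_difference(y_pred, group))        # gap in positive-prediction rates
print(equal_opportunity_difference(y_true, y_pred, group))  # gap in true-positive rates
print(inverse_frequency_weights(y_true, group))             # sample weights for reweighting
```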

DISCUSSION: This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need both for standardized, detailed reporting of methodologies and for systematic real-world testing and evaluation. Such measures are essential for gauging the models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare.

Media type:

E-article

Year of publication:

2024

Published:

2024

Contained in:

Journal of the American Medical Informatics Association : JAMIA - 31(2024), no. 5, 19 Apr., pages 1172-1183

Language:

English

Contributors:

Chen, Feng [Author]
Wang, Liqin [Author]
Hong, Julie [Author]
Jiang, Jiaqi [Author]
Zhou, Li [Author]

Links:

Full text

Subjects:

Artificial intelligence
Bias
Deep learning
Electronic health record
Journal Article
Scoping review
Systematic Review

Notes:

Date Completed 22.04.2024

Date Revised 26.04.2024

published: Print

Citation Status MEDLINE

DOI:

10.1093/jamia/ocae060

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM370102886