Unmasking bias in artificial intelligence : a systematic review of bias detection and mitigation strategies in electronic health record-based models
© The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.
OBJECTIVES: Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, addressing bias in AI, which risks worsening healthcare disparities, cannot be overlooked. This study reviews methods to handle various biases in AI models developed using EHR data.
MATERIALS AND METHODS: We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 1, 2010, and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout AI model development, and analyzed metrics for bias assessment.
RESULTS: Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none had been deployed in real-world healthcare settings. Five studies concentrated on detecting implicit and algorithmic biases, employing fairness metrics such as statistical parity, equal opportunity, and predictive equity. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques such as resampling and reweighting.
DISCUSSION: This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need for both standardized and detailed reporting of the methodologies and systematic real-world testing and evaluation. Such measures are essential for gauging models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare.
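The two group fairness metrics named in the RESULTS (statistical parity and equal opportunity) can be illustrated with a minimal sketch. All data below are hypothetical toy values, not taken from the reviewed studies; the function names are illustrative, not from any library.

```python
# Illustrative sketch of two group fairness metrics mentioned in the abstract.
# All patient data below are hypothetical.

def statistical_parity_diff(y_pred, group):
    """P(pred=1 | group A) - P(pred=1 | group B): gap in positive prediction rates."""
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return sum(a) / len(a) - sum(b) / len(b)

def equal_opportunity_diff(y_true, y_pred, group):
    """TPR(group A) - TPR(group B): gap in true-positive rates among actual positives."""
    def tpr(g):
        pos = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 1]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

# Hypothetical labels and model predictions for two patient groups
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1]
group  = ["A", "A", "A", "B", "B", "B"]

spd = statistical_parity_diff(y_pred, group)          # 1/3 - 3/3 = -2/3
eod = equal_opportunity_diff(y_true, y_pred, group)   # 0.5 - 1.0 = -0.5
```

Values near zero indicate similar treatment of the two groups under the respective criterion; the mitigation techniques the review highlights (resampling, reweighting) aim to shrink such gaps without sacrificing predictive performance.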
| Field | Value |
|---|---|
| Media type | E-article |
| Year of publication | 2024 |
| Published | 2024 |
| Contained in | Link to the complete record - volume:31 |
| Contained in | Journal of the American Medical Informatics Association : JAMIA - 31(2024), 5, 19 Apr., pages 1172-1183 |
| Language | English |
| Contributors | Chen, Feng [author] |
| Links | |
| Subjects | Artificial intelligence |
| Notes | Date Completed 22.04.2024; Date Revised 26.04.2024; published: Print; Citation Status MEDLINE |
| DOI | 10.1093/jamia/ocae060 |
| Funding | |
| Funding institution / project title | |
| PPN (catalog ID) | NLM370102886 |
LEADER 01000caa a22002652 4500
001 NLM370102886
003 DE-627
005 20240426234002.0
007 cr uuu---uuuuu
008 240324s2024 xx |||||o 00| ||eng c
024 7 |a 10.1093/jamia/ocae060 |2 doi
028 5 2 |a pubmed24n1388.xml
035 |a (DE-627)NLM370102886
035 |a (NLM)38520723
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Chen, Feng |e verfasserin |4 aut
245 1 0 |a Unmasking bias in artificial intelligence |b a systematic review of bias detection and mitigation strategies in electronic health record-based models
264 1 |c 2024
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a Date Completed 22.04.2024
500 |a Date Revised 26.04.2024
500 |a published: Print
500 |a Citation Status MEDLINE
520 |a © The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.
520 |a OBJECTIVES: Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, addressing bias in AI, which risks worsening healthcare disparities, cannot be overlooked. This study reviews methods to handle various biases in AI models developed using EHR data
520 |a MATERIALS AND METHODS: We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 01, 2010 and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout the AI model development, and analyzed metrics for bias assessment
520 |a RESULTS: Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none have been deployed in real-world healthcare settings. Five studies concentrated on the detection of implicit and algorithmic biases employing fairness metrics like statistical parity, equal opportunity, and predictive equity. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques like resampling and reweighting
520 |a DISCUSSION: This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need for both standardized and detailed reporting of the methodologies and systematic real-world testing and evaluation. Such measures are essential for gauging models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare
650 4 |a Systematic Review
650 4 |a Journal Article
650 4 |a artificial intelligence
650 4 |a bias
650 4 |a deep learning
650 4 |a electronic health record
650 4 |a scoping review
700 1 |a Wang, Liqin |e verfasserin |4 aut
700 1 |a Hong, Julie |e verfasserin |4 aut
700 1 |a Jiang, Jiaqi |e verfasserin |4 aut
700 1 |a Zhou, Li |e verfasserin |4 aut
773 0 8 |i Enthalten in |t Journal of the American Medical Informatics Association : JAMIA |d 1997 |g 31(2024), 5 vom: 19. Apr., Seite 1172-1183 |w (DE-627)NLM074735535 |x 1527-974X |7 nnns
773 1 8 |g volume:31 |g year:2024 |g number:5 |g day:19 |g month:04 |g pages:1172-1183
856 4 0 |u http://dx.doi.org/10.1093/jamia/ocae060 |3 Volltext
912 |a GBV_USEFLAG_A
912 |a GBV_NLM
951 |a AR
952 |d 31 |j 2024 |e 5 |b 19 |c 04 |h 1172-1183