Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards
© Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ.
INTRODUCTION: Amid clinicians' challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.
METHODS: We compared ChatGPT's scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch's t-test and Pearson's correlation coefficient.
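The paper does not publish its analysis code; as a minimal illustration of the Bland-Altman agreement statistics named in the methods, the sketch below computes the bias (mean difference) and 95% limits of agreement for paired overall compliance scores. The OCS values shown are hypothetical, not data from the study.

```python
from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """Bland-Altman agreement statistics for paired scores.

    Returns the bias (mean of the paired differences) and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences).
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical OCS percentages (human vs. ChatGPT) for five abstracts
human = [62.5, 75.0, 50.0, 87.5, 68.8]
chatgpt = [56.3, 75.0, 43.8, 81.3, 62.5]
bias, (lo, hi) = bland_altman(human, chatgpt)
```

A bias near zero with narrow limits of agreement would indicate that the two raters score abstracts interchangeably; points outside the limits flag abstracts where the raters diverge.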
RESULTS: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in 'conclusion' (0.764 (95% CI 0.186, 0.280)) and the lowest in 'blinding' (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in 'harms' (r=0.32, p<0.001) and 'trial registration' (r=0.34, p=0.002), whereas the weakest were in 'intervention' (r=0.02, p<0.001) and 'objective' (r=0.06, p<0.001).
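The per-domain error analysis relies on Pearson's correlation coefficient and Welch's t-test. As a hedged sketch of both statistics (stdlib only, with hypothetical binary subscores that are not data from the study):

```python
from statistics import mean, variance

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def welch_t(xs, ys):
    """Welch's t statistic (unpaired samples, unequal variances allowed)."""
    se = (variance(xs) / len(xs) + variance(ys) / len(ys)) ** 0.5
    return (mean(xs) - mean(ys)) / se

# Hypothetical binary subscores (1 = checklist item reported) for one domain
human   = [1, 0, 1, 1, 0, 1, 1, 0]
chatgpt = [1, 0, 1, 1, 0, 1, 0, 1]
r = pearson_r(human, chatgpt)
t = welch_t(human, chatgpt)
```

The t statistic tests whether the two raters' mean subscores differ, while r captures how consistently they agree abstract-by-abstract; a low r with a non-significant t (as in 'intervention' above) means similar averages but poor item-level agreement.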
CONCLUSION: LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
Media type: E-article
Year of publication: 2023
Published: 2023
Contained in: Complete record - volume:30
Contained in: BMJ health & care informatics - 30(2023), 1, 12 Oct.
Language: English
Contributors: Roberts, Richard Hr [author]
Links: http://dx.doi.org/10.1136/bmjhci-2023-100830
Subjects: Journal Article; Artificial intelligence; Medical Informatics
Notes: Date Completed 23.10.2023; Date Revised 23.10.2023; published: Print; Citation Status MEDLINE
DOI: 10.1136/bmjhci-2023-100830
Funding:
Funding institution / project title:
PPN (catalogue ID): NLM363207341
LEADER 01000naa a22002652 4500
001 NLM363207341
003 DE-627
005 20231226092843.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1136/bmjhci-2023-100830 |2 doi
028 5 2 |a pubmed24n1210.xml
035 |a (DE-627)NLM363207341
035 |a (NLM)37827724
035 |a (PII)e100830
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Roberts, Richard Hr |e verfasserin |4 aut
245 1 0 |a Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards
264 1 |c 2023
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a Date Completed 23.10.2023
500 |a Date Revised 23.10.2023
500 |a published: Print
500 |a Citation Status MEDLINE
520 |a © Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ.
520 |a INTRODUCTION: Amid clinicians' challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis
520 |a METHODS: We compared ChatGPT's scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch's t-test and Pearson's correlation coefficient
520 |a RESULTS: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in 'conclusion' (0.764 (95% CI 0.186, 0.280)) and the lowest in 'blinding' (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in 'harms' (r=0.32, p<0.001) and 'trial registration' (r=0.34, p=0.002), whereas the weakest were in 'intervention' (r=0.02, p<0.001) and 'objective' (r=0.06, p<0.001)
520 |a CONCLUSION: LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes
650 4 |a Journal Article
650 4 |a Artificial intelligence
650 4 |a Medical Informatics
700 1 |a Ali, Stephen R |e verfasserin |4 aut
700 1 |a Hutchings, Hayley A |e verfasserin |4 aut
700 1 |a Dobbs, Thomas D |e verfasserin |4 aut
700 1 |a Whitaker, Iain S |e verfasserin |4 aut
773 0 8 |i Enthalten in |t BMJ health & care informatics |d 2019 |g 30(2023), 1 vom: 12. Okt. |w (DE-627)NLM296588725 |x 2632-1009 |7 nnns
773 1 8 |g volume:30 |g year:2023 |g number:1 |g day:12 |g month:10
856 4 0 |u http://dx.doi.org/10.1136/bmjhci-2023-100830 |3 Volltext
912 |a GBV_USEFLAG_A
912 |a GBV_NLM
951 |a AR
952 |d 30 |j 2023 |e 1 |b 12 |c 10