Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study
©Kostis Giannakopoulos, Argyro Kavadella, Anas Aaqel Salim, Vassilis Stamatopoulos, Eleftherios G Kaklamanos. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.12.2023.
BACKGROUND: The increasing application of generative artificial intelligence large language models (LLMs) in various fields, including dentistry, raises questions about their accuracy.
OBJECTIVE: This study aims to comparatively evaluate the answers provided by 4 LLMs, namely Bard (Google LLC), ChatGPT-3.5 and ChatGPT-4 (OpenAI), and Bing Chat (Microsoft Corp), to clinically relevant questions from the field of dentistry.
METHODS: The LLMs were queried with 20 open-ended, clinical dentistry-related questions from different disciplines, developed by the respective faculty of the School of Dentistry, European University Cyprus. Two experienced faculty members graded the LLMs' answers from 0 (minimum) to 10 (maximum) points against strong, traditionally collected scientific evidence, such as guidelines and consensus statements, using a rubric, as if they were examination questions posed to students. The scores were compared statistically using the Friedman and Wilcoxon tests to identify the best-performing model. Moreover, the evaluators were asked to provide a qualitative evaluation of the comprehensiveness, scientific accuracy, clarity, and relevance of the LLMs' answers.
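The comparison described above (an omnibus Friedman test over the four related score samples, followed by pairwise Wilcoxon signed-rank tests) can be sketched as follows. This is a minimal illustration only: the score arrays are made-up placeholders, not the study's data, and the exact testing procedure (e.g., which pairwise contrasts were run) is an assumption.

```python
# Hedged sketch of the abstract's analysis: Friedman test across 4 LLMs'
# per-question scores, then pairwise Wilcoxon signed-rank follow-ups.
# Scores below are randomly generated placeholders, NOT the study's data.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# 20 questions, scored 0-10 on a rubric (placeholder values)
scores = {
    "ChatGPT-4": rng.integers(6, 11, size=20),
    "ChatGPT-3.5": rng.integers(4, 10, size=20),
    "Bing Chat": rng.integers(4, 10, size=20),
    "Bard": rng.integers(3, 9, size=20),
}

# Omnibus test: do the four related samples differ overall?
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")

# Pairwise follow-up: best candidate vs. each other model
for name in ["ChatGPT-3.5", "Bing Chat", "Bard"]:
    w, pw = wilcoxon(scores["ChatGPT-4"], scores[name])
    print(f"ChatGPT-4 vs {name}: W={w:.1f}, p={pw:.3f}")
```

Both tests treat the 20 questions as paired observations across models, which matches the repeated-measures design the abstract implies.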
RESULTS: Overall, no statistically significant difference was detected between the scores given by the 2 evaluators; therefore, an average score was computed for every LLM. Although ChatGPT-4 statistically outperformed ChatGPT-3.5 (P=.008), Bing Chat (P=.049), and Bard (P=.045), all models occasionally exhibited inaccuracies, generality, outdated content, and a lack of source references. The evaluators noted instances where the LLMs delivered irrelevant information, vague answers, or information that was not fully accurate.
CONCLUSIONS: This study demonstrates that although LLMs hold promising potential as an aid in the implementation of evidence-based dentistry, their current limitations can lead to potentially harmful health care decisions if not used judiciously. Therefore, these tools should not replace the dentist's critical thinking and in-depth understanding of the subject matter. Further research, clinical validation, and model improvements are necessary for these tools to be fully integrated into dental practice. Dental practitioners must be aware of the limitations of LLMs, as their imprudent use could potentially impact patient care. Regulatory measures should be established to oversee the use of these evolving technologies.
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: See complete record - volume:25
Contained in: Journal of medical Internet research - 25(2023), 28 Dec., page e51580
Language: English
Contributors: Giannakopoulos, Kostis [author]
Notes: Date Completed 29.12.2023; Date Revised 14.01.2024; published: Electronic; Citation Status MEDLINE
DOI: 10.2196/51580
PPN (catalog ID): NLM365002070
LEADER 01000caa a22002652 4500
001 NLM365002070
003 DE-627
005 20240114235019.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7  |a 10.2196/51580 |2 doi
028 52 |a pubmed24n1259.xml
035    |a (DE-627)NLM365002070
035    |a (NLM)38009003
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Giannakopoulos, Kostis |e verfasserin |4 aut
245 10 |a Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry |b Comparative Mixed Methods Study
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 29.12.2023
500    |a Date Revised 14.01.2024
500    |a published: Electronic
500    |a Citation Status MEDLINE
650  4 |a Journal Article
650  4 |a Research Support, Non-U.S. Gov't
650  4 |a AI
650  4 |a ChatGPT
650  4 |a Google Bard
650  4 |a Microsoft Bing
650  4 |a artificial intelligence
650  4 |a clinical decision-making
650  4 |a clinical practice
650  4 |a clinical practice guidelines
650  4 |a dental practice
650  4 |a dental professional
650  4 |a evidence-based dentistry
650  4 |a generative pretrained transformers
650  4 |a large language models
700 1  |a Kavadella, Argyro |e verfasserin |4 aut
700 1  |a Aaqel Salim, Anas |e verfasserin |4 aut
700 1  |a Stamatopoulos, Vassilis |e verfasserin |4 aut
700 1  |a Kaklamanos, Eleftherios G |e verfasserin |4 aut
773 08 |i Enthalten in |t Journal of medical Internet research |d 1999 |g 25(2023) vom: 28. Dez., Seite e51580 |w (DE-627)NLM116127104 |x 1438-8871 |7 nnns
773 18 |g volume:25 |g year:2023 |g day:28 |g month:12 |g pages:e51580
856 40 |u http://dx.doi.org/10.2196/51580 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d 25 |j 2023 |b 28 |c 12 |h e51580