Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study

© Kostis Giannakopoulos, Argyro Kavadella, Anas Aaqel Salim, Vassilis Stamatopoulos, Eleftherios G Kaklamanos. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.12.2023.

BACKGROUND: The increasing application of generative artificial intelligence large language models (LLMs) in various fields, including dentistry, raises questions about their accuracy.

OBJECTIVE: This study aims to comparatively evaluate the answers provided by 4 LLMs, namely Bard (Google LLC), ChatGPT-3.5 and ChatGPT-4 (OpenAI), and Bing Chat (Microsoft Corp), to clinically relevant questions from the field of dentistry.

METHODS: The LLMs were queried with 20 open-ended, clinically oriented dentistry questions from different disciplines, developed by the respective faculty of the School of Dentistry, European University Cyprus. The LLMs' answers were graded by 2 experienced faculty members, using a rubric and treating them as examination answers from students, from 0 (minimum) to 10 (maximum) points against strong, traditionally collected scientific evidence such as guidelines and consensus statements. The scores were compared statistically using the Friedman and Wilcoxon tests to identify the best-performing model. In addition, the evaluators were asked to qualitatively assess the comprehensiveness, scientific accuracy, clarity, and relevance of the LLMs' answers.
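
As an illustrative sketch only (the score arrays below are placeholders, not the study's data), the statistical workflow described above, namely an omnibus Friedman test across the 4 related score samples followed by pairwise Wilcoxon signed-rank comparisons, could be run in Python with SciPy roughly as follows:

# Hypothetical sketch of the comparison described above; scores are illustrative placeholders.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# One averaged score (mean of the 2 evaluators) per question (n=20) for each LLM.
rng = np.random.default_rng(0)
scores = {
    "ChatGPT-4": rng.uniform(6, 10, 20),
    "ChatGPT-3.5": rng.uniform(4, 9, 20),
    "Bing Chat": rng.uniform(4, 9, 20),
    "Bard": rng.uniform(4, 9, 20),
}

# Omnibus Friedman test across the 4 related samples (same 20 questions).
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman: chi2={stat:.2f}, p={p:.3f}")

# Pairwise Wilcoxon signed-rank tests of ChatGPT-4 against each other model.
for name in ("ChatGPT-3.5", "Bing Chat", "Bard"):
    w, p = wilcoxon(scores["ChatGPT-4"], scores[name])
    print(f"ChatGPT-4 vs {name}: W={w:.1f}, p={p:.3f}")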

RESULTS: Overall, no statistically significant difference was detected between the scores given by the 2 evaluators; therefore, an average score was computed for every LLM. Although ChatGPT-4 statistically outperformed ChatGPT-3.5 (P=.008), Bing Chat (P=.049), and Bard (P=.045), all models occasionally exhibited inaccuracies, generality, outdated content, and a lack of source references. The evaluators noted instances where the LLMs delivered irrelevant information, vague answers, or information that was not fully accurate.

CONCLUSIONS: This study demonstrates that although LLMs hold promising potential as an aid in the implementation of evidence-based dentistry, their current limitations can lead to potentially harmful health care decisions if not used judiciously. Therefore, these tools should not replace the dentist's critical thinking and in-depth understanding of the subject matter. Further research, clinical validation, and model improvements are necessary for these tools to be fully integrated into dental practice. Dental practitioners must be aware of the limitations of LLMs, as their imprudent use could potentially impact patient care. Regulatory measures should be established to oversee the use of these evolving technologies.

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

Journal of Medical Internet Research - 25(2023), 28 Dec., page e51580

Language:

English

Contributors:

Giannakopoulos, Kostis [Author]
Kavadella, Argyro [Author]
Aaqel Salim, Anas [Author]
Stamatopoulos, Vassilis [Author]
Kaklamanos, Eleftherios G [Author]

Links:

Full text

Subjects:

AI
Artificial intelligence
ChatGPT
Clinical decision-making
Clinical practice
Clinical practice guidelines
Dental practice
Dental professional
Evidence-based dentistry
Generative pretrained transformers
Google Bard
Journal Article
Large language models
Microsoft Bing
Research Support, Non-U.S. Gov't

Notes:

Date Completed 29.12.2023

Date Revised 14.01.2024

published: Electronic

Citation Status MEDLINE

DOI:

10.2196/51580

Funding:

Funding institution / project title:

PPN (catalogue ID):

NLM365002070