Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy
© 2024, The American College of Clinical Pharmacology.
ChatGPT is a language model trained on a large dataset that includes medical literature. Several studies have described the performance of ChatGPT on medical exams. In this study, we examine its performance in answering factual knowledge questions regarding clinical pharmacy. Questions were obtained from a Dutch application that features multiple-choice questions to maintain a basic knowledge level for clinical pharmacists. In total, 264 clinical pharmacy-related questions were presented to ChatGPT, and responses were evaluated for accuracy, concordance, quality of the substantiation, and reproducibility. Accuracy was defined as the correctness of the answer, and results were compared with pharmacists' overall score in 2022. Responses were marked concordant if no contradictions were present. The quality of the substantiation was graded by two independent pharmacists using a 4-point scale. Reproducibility was established by presenting questions multiple times and on different days. ChatGPT yielded accurate responses for 79% of the questions, surpassing the pharmacists' accuracy of 66%. Concordance was 95%, and the quality of the substantiation was deemed good or excellent for 73% of the questions. Reproducibility was consistently high, both within a day and between days (>92%), as well as across different users. ChatGPT demonstrated higher accuracy and reproducibility for factual knowledge questions related to clinical pharmacy practice than pharmacists. Consequently, we posit that ChatGPT could serve as a valuable resource for pharmacists. We hope the technology will further improve, which may lead to enhanced future performance.
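The evaluation protocol in the abstract reduces to simple proportions: accuracy is the share of answers matching the key, and reproducibility is the share of questions answered identically across repeated presentations. A minimal Python sketch of these two metrics, with entirely illustrative toy data (the function names and records are assumptions, not taken from the study):

```python
def accuracy(responses):
    """Fraction of responses whose chosen option matches the answer key."""
    correct = sum(1 for r in responses if r["choice"] == r["key"])
    return correct / len(responses)

def reproducibility(runs):
    """Fraction of questions answered identically across repeated runs.

    `runs` maps a question id to the list of answers it received
    across repeated presentations (within a day or between days).
    """
    stable = sum(1 for answers in runs.values() if len(set(answers)) == 1)
    return stable / len(runs)

# Toy data: 4 multiple-choice questions, each presented twice.
responses = [
    {"choice": "A", "key": "A"},
    {"choice": "B", "key": "B"},
    {"choice": "C", "key": "A"},  # incorrect answer
    {"choice": "D", "key": "D"},
]
runs = {1: ["A", "A"], 2: ["B", "B"], 3: ["C", "A"], 4: ["D", "D"]}

print(accuracy(responses))       # 0.75
print(reproducibility(runs))     # 0.75
```

The study's within-day and between-day reproducibility figures (>92%) would come from applying the second function to runs collected on the same day versus on different days.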
| Field | Value |
|---|---|
| Media type | E-article |
| Year of publication | 2024 |
| Published | 2024 |
| Contained in | See complete record - year:2024 |
| Contained in | Journal of clinical pharmacology - (2024), 16 Apr. |
| Language | English |
| Contributors | van Nuland, Merel [author] |
| Links | |
| Subjects | Artificial intelligence |
| Notes | Date Revised 16.04.2024; published: Print-Electronic; Citation Status: Publisher |
| DOI | 10.1002/jcph.2443 |
| Funding | |
| Funding institution / project title | |
| PPN (catalog ID) | NLM371130360 |
LEADER 01000naa a22002652 4500
001    NLM371130360
003    DE-627
005    20240416233727.0
007    cr uuu---uuuuu
008    240416s2024 xx |||||o 00| ||eng c
024 7  |a 10.1002/jcph.2443 |2 doi
028 52 |a pubmed24n1377.xml
035    |a (DE-627)NLM371130360
035    |a (NLM)38623909
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a van Nuland, Merel |e verfasserin |4 aut
245 10 |a Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 16.04.2024
500    |a published: Print-Electronic
500    |a Citation Status Publisher
650  4 |a Journal Article
650  4 |a ChatGPT
650  4 |a artificial intelligence
650  4 |a clinical pharmacology
650  4 |a exam questions
650  4 |a language model
700 1  |a Erdogan, Abdullah |e verfasserin |4 aut
700 1  |a Açar, Cenkay |e verfasserin |4 aut
700 1  |a Contrucci, Ramon |e verfasserin |4 aut
700 1  |a Hilbrants, Sven |e verfasserin |4 aut
700 1  |a Maanach, Lamyae |e verfasserin |4 aut
700 1  |a Egberts, Toine |e verfasserin |4 aut
700 1  |a van der Linden, Paul D |e verfasserin |4 aut
773 08 |i Enthalten in |t Journal of clinical pharmacology |d 1973 |g (2024) vom: 16. Apr. |w (DE-627)NLM000005576 |x 1552-4604 |7 nnns
773 18 |g year:2024 |g day:16 |g month:04
856 40 |u http://dx.doi.org/10.1002/jcph.2443 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |j 2024 |b 16 |c 04