Nutrients, Vol. 17, Pages 3828: Efficacy of Large Language Models in Providing Evidence-Based Patient Education for Celiac Disease: A Comparative Analysis

Nutrients doi: 10.3390/nu17243828

Authors:
Luisa Bertin
Federica Branchi
Carolina Ciacci
Anne R. Lee
David S. Sanders
Nick Trott
Fabiana Zingone

Background/Objectives: Large language models (LLMs) show promise for patient education, yet their safety and efficacy for chronic diseases requiring lifelong management remain unclear. This study presents the first comprehensive comparative evaluation of three leading LLMs for celiac disease patient education. Methods: We conducted a cross-sectional evaluation comparing ChatGPT-4, Claude 3.7, and Gemini 2.0 using six blinded clinical specialists (four gastroenterologists and two dietitians). Twenty questions spanning four domains (general understanding, symptoms/diagnosis, diet/nutrition, lifestyle management) were evaluated for scientific accuracy, clarity (5-point Likert scales), misinformation presence, and readability using validated computational metrics (Flesch Reading Ease, Flesch-Kincaid Grade Level, SMOG index). Results: Gemini 2.0 demonstrated superior performance across multiple dimensions. It achieved the highest scientific accuracy ratings (median 4.5 [IQR: 4.5–5.0] vs. 4.0 [IQR: 4.0–4.5] for both competitors, p = 0.015) and clarity scores (median 5.0 [IQR: 4.5–5.0] vs. 4.0 [IQR: 4.0–4.5], p = 0.011). While Gemini 2.0 showed numerically lower misinformation rates (13.3% vs. 23.3% for ChatGPT-4 and 24.2% for Claude 3.7), the differences were not statistically significant (p = 0.778). Gemini 2.0 achieved significantly superior readability, requiring approximately 2–3 fewer years of education for comprehension (median Flesch-Kincaid Grade Level 9.8 [IQR: 8.8–10.3] vs. 12.5 for both competitors, p < 0.001). However, all models exceeded recommended 6th–8th grade health literacy targets. Conclusions: While Gemini 2.0 demonstrated statistically significant advantages in accuracy, clarity, and readability, misinformation rates of 13.3–24.2% across all models represent concerning risk levels for direct patient applications. AI offers valuable educational support but requires healthcare provider supervision until misinformation rates improve.

