Evaluation of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?
This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and by Google, using standardized scoring systems. A Google search was conducted for the term "complex regional pain syndrome," and the first 10 frequently asked questions (FAQs) and their answers were recorded. ChatGPT was then presented with these Google-generated FAQs, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was also asked to generate its own set of 10 FAQs and answers. ChatGPT's answers were significantly longer than Google's for both its independently generated questions (330.0 ± 51.3 words, P < 0.0001) and the Google-generated questions (289.7 ± 40.6 words, P < 0.0001). ChatGPT's answers to the Google-generated questions were also more difficult to read, as indicated by a lower Flesch-Kincaid Reading Ease score (13.6 ± 10.8, P = 0.017; lower scores indicate harder text). Our findings suggest that ChatGPT is a promising tool for patient education on complex regional pain syndrome, given its ability to address a variety of question topics with responses drawn from credible sources. That said, challenges such as readability and ethical considerations must be addressed before it is widely used as a source of health information.
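For context on the readability result, the Flesch Reading Ease score is a standard index; the formula below is the conventional definition, provided here for reference rather than taken from the study itself:

$$\mathrm{FRE} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)$$

Scores range from roughly 0 to 100, with higher values indicating easier text; a mean score of 13.6 corresponds to text readable mainly at a college-graduate level, well above the approximately sixth-grade reading level commonly recommended for patient education materials.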