Evaluation of Chat Generative Pre-trained Transformer’s responses to frequently asked questions about psoriatic arthritis: A study on quality and readability

Keywords:

ChatGPT, Artificial intelligence, Psoriatic arthritis, Quality information, Readability

Abstract

Aim: The growing use of artificial intelligence (AI) in healthcare, especially through technologies such as Chat Generative Pre-trained Transformer (ChatGPT), has raised concerns about the quality and readability of AI-generated health information. This study aimed to evaluate ChatGPT’s responses to frequently asked questions about psoriatic arthritis (PsA).

Materials and Methods: The quality of ChatGPT-generated responses was evaluated using the Ensuring Quality Information for Patients (EQIP) tool. Readability was assessed using the Flesch–Kincaid Reading Ease (FKRE) and Flesch–Kincaid Grade Level (FKGL) indices. The Kruskal–Wallis H test was used to compare subgroups, and Bonferroni correction was applied for multiple comparisons.
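For context, the two readability indices used here are standard formulas based on average sentence length and average syllables per word. The sketch below computes both; the tokenization and syllable heuristic are simplified assumptions for illustration, not the validated scoring software used in the study.

```python
import re

def _count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; drop one for a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid(text: str) -> tuple[float, float]:
    """Return (FKRE, FKGL) for a passage of English text."""
    # Naive splitting: sentences on ./!/?, words on letter runs.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fkre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fkre, fkgl
```

Higher FKRE means easier text (90–100 is roughly a 5th-grade level; 0–30 is graduate level), while FKGL maps directly to a US school grade, which is why a high FKGL flags material as challenging for patients with lower literacy.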

Results: Significant differences were observed in EQIP scores across question subgroups, with treatment-related questions scoring lower than symptom-related questions. The FKRE and FKGL scores indicated that the information provided by ChatGPT could be challenging for patients with lower literacy levels.

Conclusion: Although ChatGPT provided relatively accurate information on PsA, its readability and its ability to communicate complex medical information could be improved. These findings underscore the need for continual refinement of AI tools to address the diverse needs of patients.

Published

2025-02-26

Section

Original Articles

How to Cite

1. Evaluation of Chat Generative Pre-trained Transformer’s responses to frequently asked questions about psoriatic arthritis: A study on quality and readability. Ann Med Res [Internet]. 2025 Feb. 26 [cited 2025 Mar. 9];32(2):079-84. Available from: http://annalsmedres.org/index.php/aomr/article/view/4812