Using Artificial Intelligence (AI) ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study
Background: Chemical ocular injuries are a major public health issue, causing eye damage from harmful chemicals and potentially leading to severe vision loss or blindness if not treated promptly and effectively. Although medical knowledge has advanced, accessing reliable and understandable information on these injuries remains challenging due to unverified online content and complex terminology. Artificial intelligence (AI) tools such as ChatGPT offer a promising solution by simplifying medical information and making it more accessible to the general public.

Objective: This study aims to assess the use of ChatGPT in providing reliable, accurate, and accessible medical information on chemical ocular injuries. It evaluates the correctness, thematic accuracy, and coherence of ChatGPT's responses against established medical guidelines and explores its potential for patient education.

Methods: Nine questions were entered into ChatGPT covering various aspects of chemical ocular injuries: definition, prevalence, etiology, prevention, symptoms, diagnosis, treatment, follow-up, and complications. The responses were compared with the ICD-9 and ICD-10 guidelines for chemical (alkali and acid) injuries of the conjunctiva and cornea and were evaluated for correctness, thematic accuracy, and coherence. The inputs were categorized into three distinct groups, and statistical analyses, including Flesch–Kincaid readability tests, one-way ANOVA, and trend analysis, were conducted to assess their readability, complexity, and trends.

Results: ChatGPT provided accurate and coherent responses for most questions about chemical ocular injuries, demonstrating thematic relevance.
However, the responses sometimes overlooked critical clinical details or guideline-specific elements, such as emphasizing the urgency of care, using precise classification systems, and addressing detailed diagnostic or management protocols. While the answers were generally valid, they occasionally included less relevant or overly generalized information, reducing their consistency with established medical guidelines. The average Flesch Reading Ease Score (FRES) was 33.84 ± 0.28, indicating a fairly difficult reading level, while the average Flesch–Kincaid Grade Level (FKGL) was 14.21 ± 0.22, suitable for readers with college-level proficiency. Passive voice was used in 7.22% ± 0.66% of sentences, indicating moderate reliance. One-way ANOVA showed no significant differences in FRES (P=.385), FKGL (P=.555), or passive sentence usage (P=.601) across categories, and trend analysis showed that readability remained relatively constant across the three categories.

Conclusions: ChatGPT shows strong potential in providing accurate and relevant information about chemical ocular injuries. However, its language complexity may limit accessibility for individuals with lower health literacy, and its responses sometimes miss critical aspects. Future improvements should focus on enhancing readability, increasing context-specific accuracy, and tailoring responses to individual needs and literacy levels. While ChatGPT can be a helpful tool for patients and healthcare professionals, it should not replace professional medical advice, as some responses may not match clinical practice or address the needs of patients with different levels of education.
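The readability metrics reported above follow the standard Flesch formulas, and the between-category comparison is a one-way ANOVA. A minimal sketch of both computations is shown below; the word, sentence, and syllable counts in the usage example are hypothetical, and the F-statistic is hand-rolled here rather than taken from a statistics package, so this illustrates the formulas rather than the study's exact pipeline.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease Score (FRES); higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level (FKGL); estimates the U.S. school grade
    needed to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA across several groups of scores."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: group sizes weight squared mean offsets.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical counts: 100 words, 5 sentences, 150 syllables.
print(flesch_reading_ease(100, 5, 150))   # ~59.64
print(flesch_kincaid_grade(100, 5, 150))  # ~9.91
```

Per-question FRES or FKGL scores from the three input categories would then be passed as the `groups` argument to `one_way_anova_f` to test for differences across categories.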