Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery
American Journal of Rhinology & Allergy (IF 2.5), Pub Date: 2024-08-22, DOI: 10.1177/19458924241273055
Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan

Background: Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below a sixth-grade reading level, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.

Objective: To examine the current state of readability and quality of online patient education materials and to determine the utility of ChatGPT for improving existing articles and generating new patient education materials.

Methods: An article search was performed using 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT, and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) score and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.

Results: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from a university to a 10th-grade reading level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1); 56.7% of the online articles were of "poor" quality.

Conclusions: ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate patient education materials that are more reliable and of higher quality than most existing online articles, and these can be tailored to match the readability of the average online article.
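For readers unfamiliar with the metric, the Flesch Reading Ease score cited in the Methods is a fixed formula over sentence, word, and syllable counts; higher scores indicate easier text (roughly 0-30 corresponds to university/college-graduate reading, 50-60 to a 10th-12th grade level, which matches the improvement reported above). The sketch below is an illustrative Python implementation with a crude syllable heuristic; it is not the scoring software used by the authors, which the abstract does not specify.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = ("The surgeon removes the tumor through the nose with a thin camera. "
          "Most patients go home within a few days.")
print(round(flesch_reading_ease(sample), 1))  # higher score = easier to read
```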

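The Methods describe prompting ChatGPT to rewrite the ten least readable articles and then re-scoring them against the originals. The abstract does not disclose the exact prompts, model version, or interface used, so the sketch below is only a minimal illustration of such a rewrite-and-rescore loop using the OpenAI Python client; the prompt wording and model name are assumptions, and `flesch_reading_ease` refers to the helper defined above. The abstract also does not name the statistical test behind p < 0.001; a paired comparison over the ten before/after scores would be one common choice.

```python
from openai import OpenAI  # assumes the `openai` package and an API key are configured

client = OpenAI()

# Illustrative prompt only; the study's actual queries are not given in the abstract.
PROMPT = ("Rewrite the following patient education text about endoscopic skull base "
          "surgery at a sixth-grade reading level, keeping all medical facts unchanged:"
          "\n\n{text}")

def simplify(article_text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a lower-literacy rewrite of one article."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=article_text)}],
    )
    return response.choices[0].message.content

def rescore(original: str) -> tuple[float, float]:
    """Return (FRE before, FRE after) for one article, mirroring the paper's comparison."""
    rewritten = simplify(original)
    return flesch_reading_ease(original), flesch_reading_ease(rewritten)
```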
Updated: 2024-08-22