#HealthcareProfessions
News and Notes

#NurseSky
#MedSky
#HealthcareProfessions
cno.org/news/news-an...

Teaching Clinical Reasoning in Health Care Professions Learners Using AI-Generated Script Concordance Tests: Mixed Methods Formative Evaluation

Background: The integration of artificial intelligence (AI) in medical education is evolving, offering new tools to enhance teaching and assessment. Among these, script concordance tests (SCTs) are well suited to evaluating clinical reasoning under uncertainty. Traditionally, SCTs require expert panels for scoring and feedback, which can be resource intensive. Recent advances in generative AI, particularly large language models (LLMs), suggest the possibility of replacing human experts with simulated ones, though this potential remains underexplored.

Objective: This study aimed to evaluate whether LLMs can effectively simulate expert judgment in SCTs by using generative AI to author, score, and provide feedback for SCTs in cardiology and pneumology. A secondary goal was to assess students' perceptions of the test's difficulty and the pedagogical value of AI-generated feedback.

Methods: A cross-sectional, mixed methods study was conducted with 25 second-year medical students who completed a 32-item SCT authored by ChatGPT-4o. Six LLMs (three trained on course material and three untrained) served as simulated experts to generate scoring keys and feedback. Students answered SCT questions, rated perceived difficulty, and selected the most helpful feedback explanation for each item. Quantitative analysis included scoring, difficulty ratings, and correlation between student and AI responses. Qualitative comments were thematically analyzed.

Results: The average student score was 22.8 out of 32 (SD 1.6), with scores ranging from 19.75 to 26.75. Trained AI systems showed significantly higher concordance with student responses (ρ = 0.64) than untrained models (ρ = 0.41). AI-generated feedback was rated as most helpful in 62.5% of cases, especially when provided by trained models. The SCT demonstrated good internal consistency (Cronbach's α = 0.76), and students reported moderate perceived difficulty (mean 3.7/7). Qualitative feedback highlighted appreciation for SCTs as reflective tools, while recommending clearer guidance on Likert-scale use and more contextual detail in vignettes.

Conclusions: This is among the first studies to demonstrate that trained generative AI models can reliably simulate expert clinical reasoning in a script concordance framework. The findings suggest that AI can both streamline SCT design and offer educationally valuable feedback without compromising authenticity. Future studies should explore longitudinal effects on learning and assess how hybrid (human and AI) models can optimize reasoning instruction in medical education.
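The abstract does not give the paper's exact scoring formula, but SCTs conventionally use aggregate (panel-based) scoring: each response earns credit proportional to how many panel experts chose it, normalized so the modal expert answer is worth one point. A minimal sketch, assuming that convention and a six-member simulated panel as described in the Methods (all names and data here are illustrative, not from the study):

```python
# Sketch of conventional SCT aggregate scoring (an assumption: the study
# does not publish its exact formula in this abstract). Panel members are
# the six simulated LLM "experts"; answers are Likert-scale choices.
from collections import Counter

def sct_item_score(student_answer: int, expert_answers: list[int]) -> float:
    """Credit = (experts choosing this answer) / (experts choosing the modal answer)."""
    counts = Counter(expert_answers)
    modal_count = max(counts.values())
    return counts.get(student_answer, 0) / modal_count

def sct_total(student_answers: list[int], panel: list[list[int]]) -> float:
    """Sum of per-item credits over the whole test."""
    return sum(sct_item_score(s, experts)
               for s, experts in zip(student_answers, panel))

# Hypothetical 2-item test, 6 simulated experts per item
# (e.g. 3 course-trained + 3 untrained models):
panel = [[-1, 0, 0, 0, 1, 0],   # modal answer 0, chosen by 4 of 6
         [1, 1, 2, 1, 0, 1]]    # modal answer 1, chosen by 4 of 6
print(sct_item_score(0, panel[0]))   # matches modal answer -> 1.0
print(sct_item_score(2, panel[1]))   # 1 of 6 vs modal 4 -> 0.25
print(sct_total([0, 2], panel))      # 1.25
```

Under this scheme a 32-item test has a maximum score of 32, consistent with the reported mean of 22.8 out of 32; partial credit for minority-but-defensible answers is what distinguishes SCTs from single-key MCQs.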

JMIR Formative Res: Teaching Clinical Reasoning in Health Care Professions Learners Using AI-Generated Script Concordance Tests: Mixed Methods Formative Evaluation #ClinicalReasoning #ArtificialIntelligence #MedicalEducation #HealthCareProfessions #GenerativeAI

TestSet launches health division in Europe and UK | News | Research live First-party data creator TestSet, part of Ackwest Group, has introduced a new healthcare division to the UK and Europe.

#ResearchLive #marketresearch #TestSet #health #Europe #UK #datacreator #AckwestGroup #healthcare #TestSetHealth #RecknerHealthcarePanel #healthcareprofessions #qualitativeresearch #quantitativeresearch #VinceWills #Dynata #ResearchNow #SSI #M3Global #ColinTurnerKerr
zurl.co/MZnNJ


You can't "hustle" your
way out of anxiety.
You have to heal.

#doctors #healthcareprofessions #burnout #anxiety
