Using Virtual Patients to Explore the Clinical Reasoning Skills of Medical Students: A Mixed Methods Study (Preprint)

Author(s):  
Ruth Plackett ◽  
Angelos P Kassianos ◽  
Jessica Timmis ◽  
Jessica Sheringham ◽  
Patricia Schartau ◽  
...  
2020 ◽  

BACKGROUND
Improving clinical reasoning skills (the thought processes clinicians use during consultations to formulate appropriate questions and diagnoses) is essential for reducing missed diagnostic opportunities. The electronic Clinical Reasoning Educational Simulation Tool (eCREST) was developed to improve future doctors’ clinical reasoning skills. A feasibility study demonstrated acceptability and potential impacts, but the processes by which students developed their clinical reasoning are unknown.

OBJECTIVE
To identify and characterize final-year medical students’ clinical reasoning strategies while using eCREST, and to explore how students interacted with eCREST.

METHODS
A sequential mixed methods design was used. Quantitative data captured in a feasibility trial across three UK medical schools (n=148) were used to identify typologies of reasoning, based on the proportion of essential information students identified and the proportion of relevant questions they asked a virtual patient. Strategies were compared between the intervention and control groups. A qualitative think-aloud and semistructured interview study was then undertaken with 16 final-year medical students from one medical school to explore how students reasoned while using eCREST. Themes generated from the qualitative data were used to expand the typologies of strategies.

RESULTS
Three types of clinical reasoning strategy were identified: ‘Focused’ (elicited most essential information and asked few irrelevant questions; n=78/148, 53%), ‘Thorough’ (elicited most essential information but asked many irrelevant questions; n=33/148, 22%), and ‘Succinct’ (elicited little essential information but asked few irrelevant questions; n=27/148, 18%). A fourth group was ‘Non-strategic’ (did not elicit enough essential information and asked mostly irrelevant questions; n=10/148, 7%).
In the feasibility trial, the intervention group was significantly more likely than controls to adopt a ‘Thorough’ strategy (21/78, 27% vs 6/70, 9%) and less likely to adopt a ‘Succinct’ strategy (13/78, 17% vs 20/70, 29%); χ2(3)=9.87, P=.02. Use of the other strategies was similar across groups. Thematic analysis identified three dimensions underpinning reasoning: data-gathering processes, generating diagnostic hypotheses, and confidence and uncertainty. The mixed methods analysis indicated that those classified as ‘Thorough’ asked many questions to avoid missing key information and reported that eCREST helped them to manage uncertainty. The ‘Succinct’ group aimed to limit the number of questions asked, and eCREST helped them to focus on asking pertinent questions. The ‘Focused’ group had clear rationales for asking questions, but those who used a ‘Non-strategic’ approach did not and may have found eCREST less useful in developing their clinical reasoning.

CONCLUSIONS
Students apply a range of clinical reasoning strategies to online patient simulations like eCREST. eCREST led students to use more ‘Thorough’ strategies, and students reported that it helped them to manage uncertainty, which could help future doctors to identify missed diagnostic opportunities. eCREST could also be used by educators to support students in developing their clinical reasoning strategies.
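The group comparison above rests on a Pearson chi-square test over a 2 × 4 contingency table of strategy counts per trial arm. A minimal sketch in Python follows; the ‘Thorough’ and ‘Succinct’ cell counts are as reported in the abstract, while the ‘Focused’ and ‘Non-strategic’ splits are not reported and are therefore hypothetical, chosen only so each row sums to its group size (n=78 and n=70).

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic and degrees of freedom
    for an r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Strategy counts per group: [Focused, Thorough, Succinct, Non-strategic].
# Thorough and Succinct values are from the abstract; the other two cells
# are hypothetical placeholders that make the rows sum to n=78 and n=70.
intervention = [40, 21, 13, 4]
control = [38, 6, 20, 6]
chi2, df = pearson_chi_square([intervention, control])
# For these illustrative counts, chi2 comes out near the reported
# value of 9.87 with df = 3.
```

A 2 × 4 table gives (2−1) × (4−1) = 3 degrees of freedom, matching the χ2(3) reported above.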


2010 ◽  
Vol 1 (2) ◽  
pp. e89-e95
Author(s):  
Jean-Francois Lemay ◽  
Tyrone Donnon ◽  
Bernard Charlin

Background: The Script Concordance (SC) approach was used as an alternative test format to measure the knowledge organization reflected in one’s clinical reasoning skills (i.e., diagnostic, investigation, and treatment knowledge).

Methods: The present study investigated the reliability and validity of a 40-item paediatric version of the SC test with three groups: 53 medical students (novices), 42 paediatric residents (intermediates), and 11 paediatricians (experts).

Results: A comparison of scoring techniques based on experts’ ratings of the items showed internal reliability coefficients from .74 for the one-best-answer method up to .78 for alternative scoring techniques. An ANOVA showed an increase in test performance from medical students through to expert paediatricians (F(2,103) = 84.05, p < .001) but did not differentiate between postgraduate year 1 to 3 paediatric residents. A large effect size (Cohen’s d) of 1.06 was found between medical students’ and residents’ total SC test scores.

Conclusions: These results support other findings indicating that the SC test format can be used to differentiate between the clinical reasoning skills of novices, intermediates, and experts in paediatrics. An alternative scoring method that combines one best answer with partial marks was also supported for grading SC test items.
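The effect size reported above is Cohen’s d, the difference between two group means divided by their pooled standard deviation. A minimal sketch of that computation follows; the means and standard deviations used in the example are illustrative placeholders, not the study’s data (only the group sizes, n=42 and n=53, are taken from the abstract).

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Illustrative SC test score summaries (hypothetical, not the study's data):
# residents (n=42) with mean 70.0, SD 8.0, vs
# medical students (n=53) with mean 61.5, SD 8.0.
d = cohens_d(70.0, 8.0, 42, 61.5, 8.0, 53)
# By convention, d around 0.8 or above is considered a large effect,
# consistent with the d = 1.06 reported above.
```

With equal standard deviations the pooled SD equals that common SD, so here d is simply (70.0 − 61.5) / 8.0.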


MedEdPORTAL ◽  
2015 ◽  
Vol 11 (1) ◽  
Author(s):  
Roy Strowd ◽  
Anthony Kwan ◽  
Tiana Cruz ◽  
Charlene Gamaldo ◽  
Rachel Salas
