Colonoscopy competence assessment tools: A systematic review of validity evidence

Endoscopy ◽  
2021 ◽  
Author(s):  
Rishad Khan ◽  
Eric Zheng ◽  
Sachin Wani ◽  
Michael A Scaffidi ◽  
Thurarshen Jeyalingam ◽  
...  

Background: Assessment tools are essential for endoscopy training; they are required to support feedback provision, optimize learner capabilities, and document competence. We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods: We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results: From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1 to 15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5 to 11.5 (maximum score 14.5). Conclusions: The ACE, DOPS, and GiECAT have strong validity evidence compared with other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools in credentialing examinations.

2021 ◽  
Vol 4 (Supplement_1) ◽  
pp. 71-73
Author(s):  
R Khan ◽  
E Zheng ◽  
S B Wani ◽  
M A Scaffidi ◽  
T Jeyalingam ◽  
...  

Abstract Background An increasing focus on quality and safety in colonoscopy has led to broader implementation of competency-based educational systems that enable documentation of trainees’ achievement of the knowledge, skills, and attitudes needed for independent practice. The meaningful assessment of competence in colonoscopy is critical to this process. While there are many published tools that assess competence in performing colonoscopy, there is a wide range of underlying validity evidence. Tools with strong evidence of validity are required to support feedback provision, optimize learner capabilities, and document competence. Aims We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1 to 15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5 to 11.5 (maximum score 14.5). Conclusions The ACE, DOPS, and GiECAT have strong validity evidence compared with other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools for credentialing purposes. Funding Agencies None


2017 ◽  
Vol 9 (4) ◽  
pp. 473-478 ◽  
Author(s):  
Glenn Rosenbluth ◽  
Natalie J. Burman ◽  
Sumant R. Ranji ◽  
Christy K. Boscardin

ABSTRACT Background Improving the quality of health care and education has become a mandate at all levels within the medical profession. While several published quality improvement (QI) assessment tools exist, all have limitations in addressing the range of QI projects undertaken by learners in undergraduate medical education, graduate medical education, and continuing medical education. Objective We developed and validated a tool to assess QI projects with learner engagement across the educational continuum. Methods After reviewing existing tools, we interviewed local faculty who taught QI to understand how learners were engaged and what these faculty wanted in an ideal assessment tool. We then developed a list of competencies associated with QI, established items linked to these competencies, revised the items using an iterative process, and collected validity evidence for the tool. Results The resulting Multi-Domain Assessment of Quality Improvement Projects (MAQIP) rating tool contains 9 items, with criteria that may be completely fulfilled, partially fulfilled, or not fulfilled. Interrater reliability was 0.77. Untrained local faculty were able to use the tool with minimal guidance. Conclusions The MAQIP is a 9-item, user-friendly tool that can be used to assess QI projects at various stages and to provide formative and summative feedback to learners at all levels.


2020 ◽  
Vol 12 (4) ◽  
pp. 447-454
Author(s):  
Cristina E. Welch ◽  
Melissa M. Carbajal ◽  
Shelley Kumar ◽  
Satid Thammasitboon

ABSTRACT Background Recent studies have shown that psychological safety is important to resident perception of the work environment and that improved psychological safety improves resident satisfaction survey scores. However, there is no evidence in the medical education literature specifically addressing relationships between psychological safety and learning behaviors or its impact on learning outcomes. Objective We developed and gathered validity evidence for a group learning environment assessment tool using Edmondson's Teaming Theory and Webb's Depth of Knowledge model as a theoretical framework. Methods In 2018, investigators developed the preliminary tool. The authors administered the resulting survey to neonatology faculty and trainees at Baylor College of Medicine morning report sessions and collected validity evidence (content, response process, and internal structure) to describe the instrument's psychometric properties. Results Between December 2018 and July 2019, 450 surveys were administered, and 393 completed surveys were collected (87% response rate). Exploratory factor analysis and confirmatory factor analysis testing the 3-factor measurement model of the 15-item tool showed acceptable fit of the hypothesized model, with standardized root mean square residual = 0.034, root mean square error of approximation = 0.088, and comparative fit index = 0.987. Standardized path coefficients ranged from 0.66 to 0.97. Almost all absolute standardized residual correlations were less than 0.10. Cronbach's alpha scores showed internal consistency of the constructs. There was a high correlation among the constructs. Conclusions Validity evidence suggests the developed group learning assessment tool is a reliable instrument to assess psychological safety, learning behaviors, and learning outcomes during group learning sessions such as morning report.


2020 ◽  
Vol 17 ◽  
Author(s):  
Anthony Clement Smith ◽  
Ann Framp ◽  
Patrea Andersen

Introduction With the recent introduction of registration for paramedics, and an absence of assessment tools that align undergraduate paramedic student practice to competency standards, this pilot study set out to develop and evaluate a competency assessment tool designed to provide a standardised approach to student competency assessment. This paper reports the first part of a two-part enquiry evaluating the efficacy of the Australasian Paramedic Competency Assessment Tool (APCAT) for assessing the practice competency of undergraduate paramedic students. Methods With a focus on gathering professional opinion to evaluate the usability of the tool and inform its development, a mixed methods approach combining a survey and open-ended questions was used to gather data from paramedic educators and on-road assessors in Australia and New Zealand. Data were analysed using descriptive statistics and content analysis. Results The outcome of the evaluation was positive: 81% of respondents agreed or strongly agreed that the tool was user-friendly; 71% believed that expectations of student performance and the grading system were clear; 70% found that year-level descriptors reflected practice expectations; and 66% believed that the resource manual provided adequate guidance. Conclusion The APCAT is simple and aligns student practice expectations with competency standards. Results indicate support for a consistent approach to the assessment of undergraduate paramedic student competence. Further research will be undertaken to determine the efficacy of using this tool to assess students in the clinical setting.


2017 ◽  
Vol 8 (1) ◽  
pp. e106-122 ◽  
Author(s):  
Isabelle N Colmers-Gray ◽  
Kieran Walsh ◽  
Teresa M Chan

Background: Competency-based medical education is becoming the new standard for residency programs, including Emergency Medicine (EM). To inform programmatic restructuring, guide resources, and identify gaps in publication, we reviewed the published literature on types and frequency of resident assessment. Methods: We searched MEDLINE, EMBASE, PsycInfo and ERIC from January 2005 to June 2014. MeSH terms included “assessment,” “residency,” and “emergency medicine.” We included studies on EM residents reporting either of two primary outcomes: 1) assessment type and 2) assessment frequency per resident. Two reviewers screened abstracts, reviewed full-text studies, and abstracted data. Reporting of assessment-related costs was a secondary outcome. Results: The search returned 879 articles; 137 articles were full-text reviewed; 73 met inclusion criteria. Half of the studies (54.8%) were pilot projects and one-quarter (26.0%) described fully implemented assessment tools/programs. Assessment tools (n=111) comprised 12 categories, most commonly: simulation-based assessments (28.8%), written exams (28.8%), and direct observation (26.0%). Median assessment frequency (n=39 studies) was twice per month/rotation (range: daily to once in residency). No studies thoroughly reported costs. Conclusion: EM resident assessment commonly uses simulation or direct observation, done once per rotation. Implemented assessment systems and assessment-associated costs are poorly reported. Moving forward, routine publication will facilitate transitioning to competency-based medical education.


Endoscopy ◽  
2018 ◽  
Vol 50 (08) ◽  
pp. 770-778 ◽  
Author(s):  
Keith Siau ◽  
Paul Dunckley ◽  
Roland Valori ◽  
Mark Feeney ◽  
Neil Hawkes ◽  
...  

Abstract Background Direct Observation of Procedural Skills (DOPS) is an established competence assessment tool in endoscopy. In July 2016, the DOPS scoring format changed from a performance-based scale to a supervision-based scale. We aimed to evaluate the impact of changes to the DOPS scale format on the distribution of scores in novice trainees and on competence assessment. Methods We performed a prospective, multicenter (n = 276), observational study of formative DOPS assessments in endoscopy trainees with ≤ 100 lifetime procedures. DOPS were submitted in the 6 months before July 2016 (old scale) and after (new scale) for gastroscopy (n = 2998), sigmoidoscopy (n = 1310), colonoscopy (n = 3280), and polypectomy (n = 631). Scores for old and new DOPS were aligned to a 4-point scale and compared. Results 8219 DOPS (43 % new and 57 % old) submitted for 1300 trainees were analyzed. Compared with old DOPS, the use of the new DOPS was associated with greater utilization of the lowest score (2.4 % vs. 0.9 %; P < 0.001), a broader range of scores, and a reduction in competent scores (60.8 % vs. 86.9 %; P < 0.001). The reduction in competent scores was evident on subgroup analysis across all procedure types (P < 0.001) and for each quartile of endoscopy experience. The new DOPS was superior in characterizing the endoscopy learning curve by demonstrating progression of competent scores across quartiles of procedural experience. Conclusions Endoscopy assessors applied a greater range of scores using the new DOPS scale based on degree of supervision in two cohorts of trainees matched for experience. Our study provides construct validity evidence in support of the new scale format.


BMJ Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. e034468 ◽  
Author(s):  
Nicholas Holt ◽  
Kirsty Crowe ◽  
Daniel Lynagh ◽  
Zoe Hutcheson

Background Poor communication between healthcare professionals is recognised as accounting for a significant proportion of adverse patient outcomes. In the UK, the General Medical Council emphasises effective handover (handoff) as an essential outcome for medical graduates. Despite this, a significant proportion of medical schools do not teach the skill. Objectives This study had two aims: (1) demonstrate a need for formal handover training through assessing the pre-existing knowledge, skills and attitudes of medical students and (2) study the effectiveness of a pilot educational handover workshop on improving confidence and competence in structured handover skills. Design Students underwent an Objective Structured Clinical Examination style handover competency assessment before and after attending a handover workshop underpinned by educational theory. Participants also completed questionnaires before and after the workshop. The tool used to measure competency was developed through a modified Delphi process. Setting Medical education departments within National Health Service (NHS) Lanarkshire hospitals. Participants Forty-two undergraduate medical students rotating through their medical and surgical placements within NHS Lanarkshire enrolled in the study. Forty-one students completed all aspects. Main outcome measures Paired questionnaires, preworkshop and postworkshop, ascertained prior teaching and confidence in handover skills. The questionnaires also elicited the students’ views on the importance of handover and the potential effects on patient safety. The assessment tool measured competency over 12 domains. Results Eighty-three per cent of participants reported no previous handover teaching. There was a significant improvement (p<0.0001) in confidence in delivering handovers after attending the workshop. Student performance in the handover competency assessment showed a significant improvement (p<0.05) in 10 out of the 12 measured handover competency domains. Conclusions A simple, robust and reproducible intervention, underpinned by medical education theory, can significantly improve competence and confidence in medical handover. Further research is required to assess long-term outcomes as students transition from undergraduate to postgraduate training.


Author(s):  
Z. Bokhua ◽  
K. Chelidze ◽  
K. Ebralidze

Background. The constantly changing context of the healthcare system presents new challenges that require new methods of medical education and new assessment tools. Competency-based medical education (CBME), a framework that has been adopted as a new approach in medical education, needs appropriate assessment tools such as the portfolio. A portfolio is a learner-centered assessment instrument that evaluates a learner’s progression towards outcomes and enables both residents and teachers to engage in a process of learning through assessment. Objective. In this paper, we aim to share our experience of the effective use of the web-based 5-Dimensional Electronic Portfolio (5DeP) as an assessment tool in a pilot group. Methods. A pilot group of sixteen residents (six first-year Obstetrics/Gynecology residents and ten first-year Internal Medicine residents at the Tbilisi State Medical University Institute of Postgraduate Medical Education and Continuous Professional Development) and twelve mentors (four Obstetrics/Gynecology mentors and eight Internal Medicine mentors) provided feedback on the 5DeP as a new assessment tool. Results. Feedback from mentors and residents demonstrated the effectiveness of the program. The 5DeP enables assessment within a framework of transparent, declared criteria and learning objectives; provides a model for lifelong learning and continuing professional development; increases competence in a wider context, with benefits to both professional and personal roles; and improves organizational skills. Conclusions. The 5DeP has been recognized as a highly effective assessment tool.


Author(s):  
Hsing-Chen Yang

Beyond conveying professional knowledge, how can university medical education nurture and improve the gender competency of medical students and thereby create an LGBT+ friendly healthcare environment? This study explored the use of game-based teaching activities in competency-based teaching from the perspective of competency-based medical education (CBME) and employed a qualitative case-study methodology. We designed an LGBT+ Health and Medical Care course in a medical school. Feedback was collected from two teachers and 19 medical students using in-depth interviews, and thematic analysis was used to analyze the collected data. The findings of this study were as follows: (1) Games encouraged student participation and benefited gender knowledge transmission and transformation through competency learning, and (2) games embodied the idea of assessment as learning. The enjoyable feeling of pressure from playing games motivated students to learn. Using games as both a teaching activity and an assessment tool provided the assessment and instant feedback required in the CBME learning process. Game-based teaching successfully guided medical students to learn about gender and achieve the learning goals of integrating knowledge, attitudes, and skills. To fully implement CBME using games as teaching methods, teaching activities, learning tasks, and assessment tools, teachers must improve their teaching competency. This study revealed that leading discussions and designing curricula are key to the implementation of gender competency-based education; in particular, the ability to lead discussions is the core factor. Game-based gender competency education for medical students can be facilitated with discussions that reinforce learning outcomes to achieve the objectives of gender equality education and LGBT+ friendly healthcare. The results of this study indicated that game-based CBME with specific teaching strategies was an effective method of nurturing the gender competency of medical students. The consequent integration of gender competency into medical education could achieve the goal of LGBT+ friendly healthcare.


2020 ◽  
Vol 16 (1) ◽  
pp. 117-135 ◽  
Author(s):  
Aaron Redman ◽  
Arnim Wiek ◽  
Matthias Barth

Abstract While there is growing agreement on the competencies sustainability professionals should possess, as well as the pedagogies to develop them, the practice of assessing students’ sustainability competencies is still in its infancy. Despite growing interest among researchers, there has not yet been a systematic review of how students’ sustainability competencies are currently assessed. This review article responds to this need by examining what tools are currently used for assessing students’ sustainability competencies, in order to inform future practice. A systematic literature review was conducted for publications through the end of 2019, resulting in 75 relevant studies that detail the use of an assessment tool. We analyzed the described tools regarding their main features, strengths and weaknesses, and potential improvements. Based on this analysis, we first propose a typology of eight assessment tools, which fall into three meta-types: self-perceiving, observation, and test-based approaches, providing specific examples of practice for all tools. We then articulate strengths, weaknesses, and potential improvements for each tool (type). This study structures the field of sustainability competency assessment, provides a criteria-based overview of the currently used tools, and highlights promising future developments. For practice, it provides guidance to sustainability (science) instructors, researchers, and program directors who are interested in using competency assessment tools in more informed ways.

