The Ultrasound Competency Assessment Tool (UCAT): Development and Evaluation of a Novel Competency‐based Assessment Tool for Point‐of‐care Ultrasound

Author(s):  
Colin Bell ◽  
Andrew K. Hall ◽  
Natalie Wagner ◽  
Louise Rang ◽  
Joseph Newbigging ◽  
...  
2015 ◽  
Vol 7 (4) ◽  
pp. 567-573 ◽  
Author(s):  
Paru Patrawalla ◽  
Lewis Ari Eisen ◽  
Ariel Shiloh ◽  
Brijen J. Shah ◽  
Oleksandr Savenkov ◽  
...  

ABSTRACT Background Point-of-care ultrasound is an emerging technology in critical care medicine. Despite requirements for critical care medicine fellowship programs to demonstrate knowledge and competency in point-of-care ultrasound, tools to guide competency-based training are lacking. Objective We describe the development and validity arguments of a competency assessment tool for critical care ultrasound. Methods A modified Delphi method was used to develop behaviorally anchored checklists for 2 ultrasound applications: “Perform deep venous thrombosis study (DVT)” and “Qualify left ventricular function using parasternal long axis (PSLA) and parasternal short axis (PSSA) views (Echo).” One live rater and one video rater evaluated the performance of 28 fellows. A second video rater evaluated a subset of 10 fellows. Validity evidence for content, response process, and internal consistency was assessed. Results An expert panel finalized the checklists after 2 rounds of a modified Delphi method. The DVT checklist consisted of 13 items, including 1 global rating scale (GRS). The Echo checklist consisted of 14 items, including 1 GRS for each of 2 views. Interrater reliability, evaluated with a Cohen kappa between the live and video raters, was 1.00 for the DVT GRS, 0.44 for the PSLA GRS, and 0.58 for the PSSA GRS. Cronbach α was 0.85 for DVT and 0.92 for Echo. Conclusions The findings offer preliminary evidence for the validity of competency assessment tools for 2 applications of critical care ultrasound and data on live versus video raters.
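The reliability statistics reported above are Cohen's kappa (live versus video rater agreement) and Cronbach's α (internal consistency of the checklist items). The following is a minimal sketch of how these are typically computed, not the authors' analysis; all rater and checklist data below are hypothetical.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items (categorical)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                         # observed agreement
    # chance agreement from each rater's marginal category frequencies
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pass/fail GRS ratings for 28 fellows; identical ratings
# illustrate the kappa = 1.00 case reported for the DVT GRS.
live  = np.array([1] * 20 + [0] * 8)
video = np.array([1] * 20 + [0] * 8)
print(cohens_kappa(live, video))

# Hypothetical 13-item DVT checklist scores (0/1) for 28 fellows, generated so
# that items share a common "ability" signal and alpha is meaningfully positive.
rng = np.random.default_rng(0)
ability = rng.normal(size=(28, 1))
checklist = (ability + rng.normal(size=(28, 13)) > 0).astype(int)
print(cronbach_alpha(checklist))
```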


2020 ◽  
Vol 12 (2) ◽  
pp. 176-184 ◽  
Author(s):  
Irene W. Y. Ma ◽  
Janeve Desy ◽  
Michael Y. Woo ◽  
Andrew W. Kirkpatrick ◽  
Vicki E. Noble

ABSTRACT Background Point-of-care ultrasound (POCUS) is increasingly used in a number of medical specialties. To support competency-based POCUS education, workplace-based assessments are essential. Objective We developed a consensus-based assessment tool for POCUS skills and determined which items are critical for competence. We then performed standards setting to establish cut scores for the tool. Methods Using a modified Delphi technique, 25 experts voted on 32 items over 3 rounds between August and December 2016. Consensus was defined as agreement by at least 80% of the experts. Twelve experts then performed 3 rounds of a standards setting procedure in March 2017 to establish cut scores. Results Experts reached consensus on 31 items to include in the tool. Experts reached consensus that 16 of those items were critically important. A final cut score for the tool was established at 65.2% (SD 17.0%). Cut scores for critical items were significantly higher than those for noncritical items (mean ± SD: 76.5% ± 12.4% versus 53.1% ± 12.2%, P < .0001). Conclusions We reached consensus on a 31-item workplace-based assessment tool for identifying competence in POCUS. Of those items, 16 were considered critically important. Their importance is further supported by higher cut scores compared with noncritical items.
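The abstract does not detail the standards-setting method, but a common Angoff-style approach has each expert estimate, per item, the percentage of borderline trainees expected to perform it correctly; item-level means then roll up into a cut score. A rough sketch, with ratings simulated to echo the reported group means (not the study data):

```python
import numpy as np

# Hypothetical Angoff-style ratings: 12 experts x 31 items, each value the
# estimated % of borderline trainees expected to perform the item correctly.
rng = np.random.default_rng(1)
n_experts, n_items = 12, 31
critical = np.zeros(n_items, dtype=bool)
critical[:16] = True                      # 16 items flagged as critical

ratings = np.where(critical,
                   rng.normal(76.5, 12.4, size=(n_experts, n_items)),
                   rng.normal(53.1, 12.2, size=(n_experts, n_items)))
ratings = ratings.clip(0, 100)

item_cut = ratings.mean(axis=0)           # per-item cut score (mean over experts)
overall_cut = item_cut.mean()             # tool-level cut score
print(f"overall cut score: {overall_cut:.1f}%")
print(f"critical items:    {item_cut[critical].mean():.1f}%")
print(f"noncritical items: {item_cut[~critical].mean():.1f}%")
```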


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Colin Bell ◽  
Natalie Wagner ◽  
Andrew Hall ◽  
Joseph Newbigging ◽  
Louise Rang ◽  
...  

Abstract Background Point-of-care ultrasound (POCUS) has been recognized as an essential skill across medicine. However, a lack of reliable and streamlined POCUS assessment tools with demonstrated validity remains a significant barrier to widespread clinical integration. The ultrasound competency assessment tool (UCAT) was derived to be a simple, entrustment-based competency assessment tool applicable to multiple POCUS applications. When used to assess a FAST, the UCAT demonstrated high internal consistency and moderate-to-excellent inter-rater reliability. The objective of this study was to validate the UCAT for assessment of a four-view transthoracic cardiac POCUS. Results Twenty-two trainees performed a four-view transthoracic cardiac POCUS in a simulated environment while being assessed by two observers. When used to assess a four-view cardiac POCUS, the UCAT retained its high internal consistency (α = 0.90) and moderate-to-excellent inter-rater reliability (ICCs = 0.61–0.91; all p ≤ 0.01) across all domains. The regression analysis suggested that level of training, previous number of focused cardiac ultrasound scans, previous number of total scans, self-rated entrustment, and intent to pursue certification statistically significantly predicted UCAT entrustment scores [F(5,16) = 4.06, p = 0.01; R² = 0.56]. Conclusion This study confirms that the UCAT is a valid assessment tool for four-view transthoracic cardiac POCUS. The findings from this work and previous studies on the UCAT demonstrate its utility and flexibility across multiple POCUS applications and present a promising way forward for POCUS competency assessment.
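The reported F(5,16) and R² = 0.56 follow from an ordinary least-squares regression of 22 trainees' entrustment scores on 5 predictors (residual df = 22 − 5 − 1 = 16). A minimal sketch with simulated data, shown only to illustrate how those quantities are obtained:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 22 trainees, 5 predictors (level of training, prior cardiac
# scans, prior total scans, self-rated entrustment, intent to pursue certification).
rng = np.random.default_rng(2)
n, p = 22, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)   # stand-in UCAT entrustment score

# Ordinary least squares with an intercept
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta

ss_res = np.sum(resid ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Overall model F test on (p, n - p - 1) degrees of freedom -> F(5, 16) here
df_model, df_resid = p, n - p - 1
F = (r2 / df_model) / ((1 - r2) / df_resid)
p_value = stats.f.sf(F, df_model, df_resid)
print(f"R^2 = {r2:.2f}, F({df_model},{df_resid}) = {F:.2f}, p = {p_value:.3f}")
```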


CJEM ◽  
2019 ◽  
Vol 21 (S1) ◽  
pp. S38-S39
Author(s):  
C. McKaigney ◽  
C. Bell ◽  
A. Hall

Innovation Concept: Assessment of residents' Point of Care Ultrasound (PoCUS) competency currently relies on heterogeneous and unvalidated methods, such as the completion of a number of proctored studies. Although the number of performed studies may be associated with ability, it is not necessarily a surrogate for competence. Our goal was to create a single Ultrasound Competency Assessment Tool (UCAT) using domain-anchored entrustment scoring. Methods: The UCAT was developed as an anchored global assessment score, building on a previously validated simulation-based assessment tool. It was designed to measure performance across the domains of Preparation, Image Acquisition, Image Optimization, and Clinical Integration, in addition to providing a final entrustment score (i.e., O-SCORE). A modified Delphi method was used to establish national expert consensus on anchors for each domain. Three surveys were distributed to the CAEP Ultrasound Committee between July and November 2018. The first survey asked members to appraise and modify a list of anchor options created by the authors. Next, collated responses from the first survey were redistributed for re-appraisal. Finally, anchors obtaining >65% approval in the second survey were condensed and redistributed for final consensus. Curriculum, Tool or Material: Twenty-two, 26, and 22 members responded to the three surveys, respectively. Each anchor achieved >90% final agreement. The final anchors for the domains were: Preparation – positioning, initial settings, ensures clean transducer, probe selection, appropriate clinical indication; Image Acquisition – appropriate measurements, hand position, identifies landmarks, visualization of target, efficiency of probe motion, troubleshoots technical limitations; Image Optimization – centers area of interest, overall image quality, troubleshoots patient obstacles, optimizes settings; Clinical Integration – appropriate interpretation, understands limitations, utilizes information appropriately, performs multiple scans if needed, communicates findings, considers false positive and negative causes of findings. Conclusion: The UCAT is a novel assessment tool that has the potential to play a central role in the training and evaluation of residents. Our use of a modified Delphi method, involving key stakeholders in PoCUS education, ensures that the UCAT has a high degree of process and content validity. An important next step in determining its construct validity is to evaluate the use of the UCAT in a multi-centered examination setting.
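The between-round filtering step described above amounts to tallying approval per anchor and retaining only those above the 65% threshold. A tiny sketch with hypothetical vote counts (the anchor names and tallies below are illustrative, not the committee's actual data):

```python
# Hypothetical tallying of one Delphi round: keep anchors whose approval
# exceeds the 65% threshold used before the final survey round.
votes = {
    "ensures clean transducer": 24,
    "identifies landmarks": 25,
    "centers area of interest": 22,
    "communicates findings": 15,   # illustrative low-approval anchor
}
respondents = 26                    # second-survey respondent count from the abstract

retained = {anchor: n / respondents
            for anchor, n in votes.items()
            if n / respondents > 0.65}
for anchor, approval in retained.items():
    print(f"{anchor}: {approval:.0%} approval -> retained")
```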


Author(s):  
Mohammed Khalidi Idrissi ◽  
Meriem Hnida ◽  
Samir Bennani

Competency-based Assessment (CBA) is the measurement of a student's competency against a standard of performance. It is a process of collecting evidence to analyze a student's progress and achievement. In higher education, Competency-based Assessment puts the focus on learning outcomes to continuously improve academic programs and meet labor market demands. To date, competencies are typically described in natural language but rarely operationalized in e-learning systems, and the common-sense idea is that the way a competency is defined shapes the way it is conceptualized, implemented, and assessed. The main objective of this chapter is to introduce and discuss Competency-based Assessment from methodological and technical perspectives. More specifically, the objective is to highlight ongoing issues regarding competency assessment in higher education in the 21st century, to emphasize the benefits of its implementation, and finally to discuss some competency modeling and assessment techniques.


Author(s):  
Rachel Han ◽  
Julia Keith ◽  
Elzbieta Slodkowska ◽  
Sharon Nofech-Mozes ◽  
Bojana Djordjevic ◽  
...  

Context.— Competency-based medical education relies on frequent formative in-service assessments to ascertain trainee progression. Currently at our institution, trainees receive a summative end-of-rotation In-Training Evaluation Report based on feedback collected from staff pathologists. There is no method of simulating report sign-out. Objective.— To develop a formative in-service assessment tool that is able to simulate report sign-out and provide case-by-case feedback to trainees. Further, to compare time- versus competency-based assessment models. Design.— Twenty-one pathology trainees were assessed over 20 months. Hot Seat Diagnosis by trainees and trainee assessment by pathologists were recorded in the Laboratory Information System. In the first iteration, trainees were assessed using a time-based assessment scale on their ability to diagnose, report, use ancillary testing, comment on clinical implications, provide intraoperative consultation, and/or gross cases. The second iteration used a competency-based assessment scale. Trainees and pathologists completed surveys on the effectiveness of the In-Training Evaluation Report versus the Hot Seat Diagnosis tool. Results.— Scores from both iterations correlated significantly with other assessment tools, including the Resident In-Service Examination (r = 0.93, P = .04 and r = 0.87, P = .03). The competency-based model was better able to demonstrate improvement over time and stratify junior versus senior trainees than the time-based model. Trainees and pathologists rated Hot Seat Diagnosis as significantly more objective, detailed, and timely than the In-Training Evaluation Report, and effective at simulating report sign-out. Conclusions.— Hot Seat Diagnosis is an effective tool for the formative in-service assessment of pathology trainees and simulation of report sign-out, with the competency-based model outperforming the time-based model.
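The reported correlations (e.g., r = 0.93, P = .04) are Pearson correlations between paired assessment scores. A minimal sketch of that calculation with made-up paired scores, purely to show the computation:

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: Hot Seat Diagnosis assessments versus the
# Resident In-Service Examination for a handful of trainees.
hot_seat = np.array([62, 71, 78, 84, 90])
rise     = np.array([58, 70, 75, 88, 93])

r, p = stats.pearsonr(hot_seat, rise)   # Pearson r and two-sided p-value
print(f"r = {r:.2f}, P = {p:.3f}")
```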

