Assessing competence of undergraduate paramedic student practice: a preliminary evaluation of the Australasian Paramedic Competency Assessment Tool

2020 · Vol 17
Author(s): Anthony Clement Smith, Ann Framp, Patrea Andersen

Introduction With the recent introduction of registration for paramedics, and an absence of assessment tools that align undergraduate paramedic student practice to competency standards, this pilot study set out to develop and evaluate a competency assessment tool designed to provide a standardised approach to student competency assessment. This paper reports the first part of a two-part enquiry evaluating the efficacy of the Australasian Paramedic Competency Assessment Tool (APCAT) in assessing the practice competency of undergraduate paramedic students. Methods With a focus on gathering professional opinion to evaluate the usability of the tool and inform its development, a mixed methods design combining a survey and open-ended questions was used to gather data from paramedic educators and on-road assessors in Australia and New Zealand. Data were analysed using descriptive statistics and content analysis. Results The outcome of the evaluation was positive: 81% of respondents agreed or strongly agreed that the tool was user-friendly; 71% believed that expectations of student performance and the grading system were clear; 70% found that the year level descriptors reflected practice expectations; and 66% believed that the resource manual provided adequate guidance. Conclusion The APCAT is simple and aligns student practice expectations with competency standards. The results indicate support for a consistent approach to assessing undergraduate paramedic student competence. Further research will be undertaken to determine the efficacy of using this tool to assess students in the clinical setting.
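As a minimal illustration of the descriptive statistics reported above, the sketch below tabulates the share of agree/strongly agree responses per survey item. The data and item names are invented for illustration, not drawn from the study.

```python
# Minimal sketch (invented data): percent of respondents rating each APCAT
# survey item 4 (agree) or 5 (strongly agree) on a 5-point Likert scale.
import pandas as pd

responses = pd.DataFrame({
    "user_friendly":      [5, 4, 4, 3, 5, 4, 2, 5, 4, 4],
    "expectations_clear": [4, 4, 3, 3, 5, 4, 2, 4, 4, 3],
})

agreement = (responses >= 4).mean() * 100  # percent agree/strongly agree
print(agreement.round(1))
```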

2017 · Vol 9 (4) · pp. 473-478
Author(s): Glenn Rosenbluth, Natalie J. Burman, Sumant R. Ranji, Christy K. Boscardin

ABSTRACT Background  Improving the quality of health care and education has become a mandate at all levels within the medical profession. While several published quality improvement (QI) assessment tools exist, all have limitations in addressing the range of QI projects undertaken by learners in undergraduate medical education, graduate medical education, and continuing medical education. Objective  We developed and validated a tool to assess QI projects with learner engagement across the educational continuum. Methods  After reviewing existing tools, we interviewed local faculty who taught QI to understand how learners were engaged and what these faculty wanted in an ideal assessment tool. We then developed a list of competencies associated with QI, established items linked to these competencies, revised the items using an iterative process, and collected validity evidence for the tool. Results  The resulting Multi-Domain Assessment of Quality Improvement Projects (MAQIP) rating tool contains 9 items, with criteria that may be completely fulfilled, partially fulfilled, or not fulfilled. Interrater reliability was 0.77. Untrained local faculty were able to use the tool with minimal guidance. Conclusions  The MAQIP is a 9-item, user-friendly tool that can be used to assess QI projects at various stages and to provide formative and summative feedback to learners at all levels.
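The abstract reports interrater reliability of 0.77 without naming the statistic. A common choice for the MAQIP's three-level ordinal criteria (not/partially/completely fulfilled) would be a linearly weighted kappa; the sketch below shows that computation on hypothetical two-rater data, not the study's.

```python
# Hedged sketch: linearly weighted Cohen's kappa for two raters scoring the
# nine MAQIP items on an ordinal 0/1/2 scale (not/partially/completely
# fulfilled). Scores are hypothetical; the paper's exact statistic may differ.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 2, 1, 0, 2, 1, 2, 0, 1]
rater_b = [2, 1, 1, 0, 2, 2, 2, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```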


2021 · Vol 4 (Supplement_1) · pp. 71-73
Author(s): R Khan, E Zheng, S B Wani, M A Scaffidi, T Jeyalingam, ...

Abstract Background An increasing focus on quality and safety in colonoscopy has led to broader implementation of competency-based educational systems that enable documentation of trainees’ achievement of the knowledge, skills, and attitudes needed for independent practice. The meaningful assessment of competence in colonoscopy is critical to this process. While there are many published tools that assess competence in performing colonoscopy, they vary widely in their underlying validity evidence. Tools with strong evidence of validity are required to support feedback provision, optimize learner capabilities, and document competence. Aims We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1–15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5–11.5 (maximum score 14.5). Conclusions The ACE, DOPS, and GiECAT have strong validity evidence compared to other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools for credentialing purposes. Funding Agencies None
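The 15-point maximum implies that each of the five validity sources is graded on a 0–3 scale. A toy tally of that arithmetic, with hypothetical grades not taken from the review:

```python
# Toy illustration of the scoring arithmetic implied by the abstract: five
# validity sources graded 0-3 each, for a maximum total of 15. Grades below
# are hypothetical, not taken from the study.
evidence_grades = {
    "content": 3,
    "response_process": 2,
    "internal_structure": 3,
    "relations_to_other_variables": 3,
    "consequences": 2,
}
print(f"validity evidence score: {sum(evidence_grades.values())}/15")
```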


2012 · Vol 7 (4) · pp. 152-156
Author(s): Jatin P. Ambegaonkar, Shane Caswell, Amanda Caswell

Context: Approved Clinical Instructors (ACIs) are integral to athletic training students' professional development. ACIs evaluate student clinical performance using assessment tools provided by educational programs. How ACI ratings of a student's clinical performance relate to the student's clinical grade remains unclear. Objective: To examine relationships between ACI evaluations of student clinical performance using an athletic training-specific inventory (Athletic Training Clinical Performance Inventory; ATCPI) and the student's clinical grade (CG) over a clinical experience. Design: Correlational. Setting: Large metropolitan university. Participants: 48 ACIs (M=20; F=28; certified for 7.5±3.2 yrs; ACIs for 3.2±1.5 yrs) evaluating 62 undergraduate students (M=20; F=42). Interventions: ACIs completed the ATCPI twice (mid-semester and end-of-semester) during their student's clinical experience. The ATCPI is a 21-item instrument: items 1–20 assess the student's clinical performance on specific constructs (Specific) and item 21 is a rating of the student's overall clinical performance (Overall). ACIs also assigned students a clinical grade (CG). Pearson product-moment correlations examined relationships between Specific, Overall, and CG, with separate paired t-tests examining differences (p<.05). Main Outcome Measures: The ATCPI used a 4-point Likert-type scale anchored by 1 (Rarely) and 4 (Consistently), and CG was coded A=4, B=3, C=2, D=1, F=0. Results: Two hundred and sixty-six ATCPI instruments were completed over 4 academic years. The ATCPI demonstrated acceptable reliability (Cronbach's alpha=.88). All three measures were positively correlated (Specific and Overall, r(264)=.65, P<.001; Specific and CG, r(264)=.63, P<.001; Overall and CG, r(264)=.55, P<.001). No differences existed between Specific (3.5±0.4) and CG (3.5±0.7; t=.60, P=.55). However, Overall (3.6±0.7) was significantly higher than both Specific (t=−3.45, P<.001) and CG (t=2.05, P=.04). Conclusions: ACIs reliably assessed students' specific clinical performance and provided a relatively accurate grade. However, since overall scores were higher than specific item scores, ACIs overestimated students' overall clinical performance. Additional research is necessary to examine the ATCPI as an assessment tool across multiple institutions and to determine how other variables affect ACI assessments of student performance.
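The analyses above pair Pearson correlations with paired t-tests, both available directly in scipy. A minimal sketch on simulated data (not the study's):

```python
# Minimal sketch (simulated data): Pearson correlation and a paired t-test
# between mean Specific scores (items 1-20) and the Overall rating (item 21).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
specific = rng.normal(3.5, 0.4, 266)             # hypothetical mean item scores
overall = specific + rng.normal(0.1, 0.3, 266)   # hypothetical overall ratings

r, p_r = stats.pearsonr(specific, overall)
t, p_t = stats.ttest_rel(specific, overall)
print(f"r = {r:.2f} (P = {p_r:.3g}); paired t = {t:.2f} (P = {p_t:.3g})")
```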


BMJ Open · 2020 · Vol 10 (2) · pp. e034468
Author(s): Nicholas Holt, Kirsty Crowe, Daniel Lynagh, Zoe Hutcheson

Background Poor communication between healthcare professionals is recognised as accounting for a significant proportion of adverse patient outcomes. In the UK, the General Medical Council emphasises effective handover (handoff) as an essential outcome for medical graduates. Despite this, a significant proportion of medical schools do not teach the skill. Objectives This study had two aims: (1) demonstrate a need for formal handover training by assessing the pre-existing knowledge, skills and attitudes of medical students and (2) study the effectiveness of a pilot educational handover workshop in improving confidence and competence in structured handover skills. Design Students underwent an Objective Structured Clinical Examination-style handover competency assessment before and after attending a handover workshop underpinned by educational theory. Participants also completed questionnaires before and after the workshop. The tool used to measure competency was developed through a modified Delphi process. Setting Medical education departments within National Health Service (NHS) Lanarkshire hospitals. Participants Forty-two undergraduate medical students rotating through their medical and surgical placements within NHS Lanarkshire enrolled in the study. Forty-one students completed all aspects. Main outcome measures Paired questionnaires, preworkshop and postworkshop, ascertained prior teaching and confidence in handover skills. The questionnaires also elicited the students' views on the importance of handover and the potential effects on patient safety. The assessment tool measured competency over 12 domains. Results Eighty-three per cent of participants reported no previous handover teaching. There was a significant improvement (p<0.0001) in confidence in delivering handovers after attending the workshop. Student performance in the handover competency assessment showed a significant improvement (p<0.05) in 10 of the 12 measured handover competency domains. Conclusions A simple, robust and reproducible intervention, underpinned by medical education theory, can significantly improve competence and confidence in medical handover. Further research is required to assess long-term outcomes as students transition from undergraduate to postgraduate training.
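The abstract does not name the test behind the per-domain improvements; a paired nonparametric comparison such as the Wilcoxon signed-rank test would be one reasonable choice for ordinal domain scores. A sketch under that assumption, with invented scores:

```python
# Hedged sketch (assumed test, invented scores): Wilcoxon signed-rank
# comparison of pre- vs post-workshop scores in one competency domain.
from scipy.stats import wilcoxon

pre  = [1, 2, 1, 0, 2, 1, 1, 2, 0, 1]
post = [2, 3, 2, 1, 3, 2, 2, 4, 1, 2]

stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```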


2020 · Vol 20 (1)
Author(s): Abd Moain Abu Dabrh, Thomas A. Waller, Robert P. Bonacci, Anem J. Nawaz, Joshua J. Keith, ...

Abstract Background Interpersonal and Communication Skills (ICS) and Professionalism milestones are challenging to evaluate during medical training. The paucity of assessment tools for these milestones with clear direction and validity evidence warrants further research. We evaluated the reliability of the previously piloted Instrument for Communication skills and Professionalism Assessment (InCoPrA) in medical learners. Methods The validation approach was guided by Kane's framework. Faculty raters and standardized patients (SPs) used their respective InCoPrA sub-components to assess distinct domains pertinent to ICS and Professionalism across multiple expert-built simulated scenarios comparable to usual care. Evaluations included the inter-rater reliability of the faculty total score and the correlation between the total score given by the SPs and the average total score given by two faculty members. Participants were surveyed regarding the acceptability, realism, and applicability of the experience. Results Eighty trainees and 25 faculty raters from five medical residency training sites participated. The ICC of the total score between faculty raters was generally moderate (range 0.44–0.58). There was on average a moderate linear relationship between the SP and faculty total scores (Pearson correlations 0.23–0.44). The majority of participants reported receiving meaningful, immediate, and comprehensive patient and faculty feedback. Conclusions This work substantiated that InCoPrA is a reliable, standardized, evidence-based, and user-friendly assessment tool for the ICS and Professionalism milestones. Validation showed generally moderate agreement and high acceptability. Using InCoPrA also promoted engagement of all stakeholders in medical education and training (faculty, learners, and SPs), with simulation as a pathway for comprehensive feedback on milestone growth.
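One of the reported evaluations correlates each trainee's SP total score with the average of two faculty raters' totals. A minimal sketch of that computation on invented data:

```python
# Minimal sketch (invented data): Pearson correlation between SP total scores
# and the mean of two faculty raters' total scores, per trainee.
import numpy as np
from scipy.stats import pearsonr

sp_total     = np.array([18, 22, 25, 20, 27, 24, 19, 23])
faculty_1    = np.array([20, 21, 26, 19, 28, 25, 18, 24])
faculty_2    = np.array([19, 23, 24, 21, 26, 23, 20, 22])
faculty_mean = (faculty_1 + faculty_2) / 2

r, p = pearsonr(sp_total, faculty_mean)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```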


2020 · Vol 16 (1) · pp. 117-135
Author(s): Aaron Redman, Arnim Wiek, Matthias Barth

Abstract While there is growing agreement on the competencies sustainability professionals should possess, as well as the pedagogies to develop them, the practice of assessing students' sustainability competencies is still in its infancy. Despite growing interest among researchers, there has not yet been a systematic review of how students' sustainability competencies are currently assessed. This review article responds to this need by examining what tools are currently used for assessing students' sustainability competencies, in order to inform future practice. A systematic literature review was conducted for publications through the end of 2019, resulting in 75 relevant studies that detail the use of an assessment tool. We analyzed the described tools regarding their main features, strengths and weaknesses, as well as potential improvements. Based on this analysis, we first propose a typology of eight assessment tools, which fall into three meta-types: self-perceiving, observation, and test-based approaches, providing specific examples of practice for all tools. We then articulate strengths and weaknesses as well as potential improvements for each tool type. This study structures the field of sustainability competency assessment, provides a criteria-based overview of the currently used tools, and highlights promising future developments. For practice, it provides guidance to sustainability (science) instructors, researchers, and program directors who are interested in using competency assessment tools in more informed ways.


2021 · Vol 13 (1)
Author(s): Colin Bell, Natalie Wagner, Andrew Hall, Joseph Newbigging, Louise Rang, ...

Abstract Background Point-of-care ultrasound (POCUS) has been recognized as an essential skill across medicine. However, a lack of reliable and streamlined POCUS assessment tools with demonstrated validity remains a significant barrier to widespread clinical integration. The ultrasound competency assessment tool (UCAT) was derived to be a simple, entrustment-based competency assessment tool applicable to multiple POCUS applications. When used to assess a FAST, the UCAT demonstrated high internal consistency and moderate-to-excellent inter-rater reliability. The objective of this study was to validate the UCAT for assessment of a four-view transthoracic cardiac POCUS. Results Twenty-two trainees performed a four-view transthoracic cardiac POCUS in a simulated environment while being assessed by two observers. When used to assess a four-view cardiac POCUS, the UCAT retained its high internal consistency (α = 0.90) and moderate-to-excellent inter-rater reliability (ICCs = 0.61–0.91; all p ≤ 0.01) across all domains. The regression analysis suggested that level of training, previous number of focused cardiac ultrasound scans, previous number of total scans, self-rated entrustment, and intent to pursue certification statistically significantly predicted UCAT entrustment scores [F(5,16) = 4.06, p = 0.01; R² = 0.56]. Conclusion This study confirms that the UCAT is a valid assessment tool for four-view transthoracic cardiac POCUS. The findings from this work and previous studies on the UCAT demonstrate the tool's utility and flexibility across multiple POCUS applications and present a promising way forward for POCUS competency assessment.
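The reported F(5,16) is consistent with an ordinary least squares regression of UCAT entrustment scores on the five predictors for the 22 trainees. The sketch below reproduces that model shape on simulated data; the predictor names, codings, and values are assumptions based on the abstract, not the study's data.

```python
# Hedged sketch (simulated data): OLS regression of entrustment scores on the
# five predictors named in the abstract; n = 22 gives F(5, 16) as reported.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 22
X = np.column_stack([
    rng.integers(1, 6, n),    # level of training (assumed coding)
    rng.integers(0, 50, n),   # previous focused cardiac scans
    rng.integers(0, 300, n),  # previous total scans
    rng.integers(1, 5, n),    # self-rated entrustment
    rng.integers(0, 2, n),    # intent to pursue certification (0/1)
])
y = X @ np.array([0.3, 0.02, 0.005, 0.4, 0.5]) + rng.normal(0, 1, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"F({int(model.df_model)},{int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.3f}, R^2 = {model.rsquared:.2f}")
```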


Endoscopy · 2021
Author(s): Rishad Khan, Eric Zheng, Sachin Wani, Michael A Scaffidi, Thurarshen Jeyalingam, ...

Background: Assessment tools are essential for endoscopy training and are required to support feedback provision, optimize learner capabilities, and document competence. We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods: We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results: From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1–15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5–11.5 (maximum score 14.5). Conclusions: The ACE, DOPS, and GiECAT have strong validity evidence compared with other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools in credentialing examinations.


2018 · Vol 34 (4) · pp. 360-367
Author(s): Kate L. Mandeville, Maja Valentic, Damir Ivankovic, Ivan Pristas, Jae Long, ...

Objectives: The aim of this study was to identify guidelines and assessment tools used by health technology assessment (HTA) agencies for quality assurance of registries and to investigate the current use of registry data by HTA organizations worldwide. Methods: As part of a European Network for Health Technology Assessment Joint Action work package, we undertook a literature search and sent a questionnaire to all partner organizations on the work package and all organizations listed in the International Society for Pharmacoeconomics and Outcomes Research directory. Results: We identified thirteen relevant documents relating to quality assurance of registries. We received fifty-five responses from organizations representing twenty-one different countries, a response rate of 40.5 percent (43/110). Many agencies, particularly in Europe, are already drawing on a range of registries to provide data for their HTAs. Less than half, however, use criteria or standards to assess the quality of registry data. Nearly all criteria or standards in use have been defined internally by organizations rather than drawn from those produced by an external body. A comparison of internal and external standards identified consistency across several quality dimensions, which can be used as a starting point for the development of a standardized tool. Conclusion: The use of registry data is more prevalent than expected, strengthening the need for a standardized registry quality assessment tool. A user-friendly tool developed in conjunction with stakeholders will support the consistent application of approved quality standards and reassure critics who have traditionally considered registry data to be unreliable.


2018 · Vol 23 (suppl_1) · pp. e48-e49
Author(s): Julia DiLabio, Zia Bismilla, Emer Finan, Mohammed Ayoub, Hilal Almandhari, ...

Abstract BACKGROUND As paediatric training programs shift to a competency-based education model, there is a growing need for tools with strong evidence of validity to teach and assess procedural skills. To date, there are no competency-based assessment tools for bag mask ventilation or neonatal intubation that are widely accepted in the field of paediatrics. OBJECTIVES We aimed to develop a neonatal bag mask ventilation competency assessment tool (BMVCAT) and a neonatal intubation competency assessment tool (NICAT) to assess proficiency in these skills in both the clinical and simulation-based training environments. Delphi methodology was used to determine expert consensus regarding critical items to be included. DESIGN/METHODS Systematic literature reviews were performed to generate potential items for the assessment tools, each consisting of two parts: a checklist of specific actions required to complete the procedure competently and global ratings reflecting overall competence on general aspects of the skill. Checklist items were grouped into 3 domains: pre-procedure, intra-procedure, and post-procedure. A Delphi panel of North American neonatal experts was established to determine expert consensus regarding the critical items required to objectively assess the competence of individuals performing neonatal bag mask ventilation and intubation. Panelists completed iterative surveys rating the importance of checklist and global rating items on a 7-point Likert scale; after each round, items with a mean rating <5.5 were removed, and rounds continued until consensus was achieved. RESULTS Thirty-four experts from 26 centres in Canada (N=23) and the United States (N=11) participated in the Delphi process: 18 neonatologists, 9 neonatal nurses or nurse practitioners, 4 respiratory therapists, 2 paediatricians, and 1 paediatric anesthesiologist. Systematic literature reviews generated 48 checklist items and 23 global rating items for the BMVCAT and 67 checklist items and 24 global rating items for the NICAT. The first Delphi round reduced the BMVCAT to 43 checklist items and 20 global rating items and the NICAT to 63 checklist items and 23 global rating items. The second Delphi round reduced the BMVCAT to 27 checklist items and 16 global rating items and the NICAT to 50 checklist items and 22 global rating items. The Delphi process continued until expert consensus was achieved, generating the final BMVCAT and NICAT. CONCLUSION Delphi methodology allowed for the determination of consensus regarding essential items to be included in tools designed to measure competence in performing neonatal bag mask ventilation and intubation. Further studies are planned to prospectively validate the BMVCAT and NICAT in clinical and simulated settings.
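The item-reduction rule is straightforward to express in code. A toy sketch of one Delphi round under the stated threshold; the items and ratings are invented for illustration:

```python
# Toy sketch of the Delphi reduction rule described above: drop any item whose
# mean 7-point rating falls below 5.5, then re-survey the surviving items.
def delphi_round(items, ratings, threshold=5.5):
    """Return the items whose mean rating meets the threshold."""
    return [i for i in items if sum(ratings[i]) / len(ratings[i]) >= threshold]

items = ["positions head", "selects mask size", "checks seal", "documents procedure"]
ratings = {
    "positions head":      [7, 6, 7, 6],
    "selects mask size":   [6, 6, 7, 7],
    "checks seal":         [7, 7, 7, 6],
    "documents procedure": [5, 4, 6, 5],  # mean 5.0 -> removed this round
}
print(delphi_round(items, ratings))
```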

