Assessment of technical competence of candidates within a clinical pathology discipline

2017
Author(s): Melini Baruth

Background: Medical laboratories play a crucial role in patient care and require a competent, skilled workforce to deliver this essential service. The current Medical Technologist qualification is a summative assessment consisting of two written 3-hour papers that correlates theoretical knowledge acquired at tertiary level with the practical internship. There is currently no assessment of the technical competence of Intern Medical Technologists (candidates) by the HPCSA. Aim: This study aimed to determine how technical competence was assessed for Intern Medical Technologists eligible to write the National Board Examination in the Clinical Pathology discipline. Methods: A quantitative design was used. The technical competence of candidates eligible to write the National Board Examination was assessed by direct observation across ten Clinical Pathology test procedures, using an adapted SANAS witnessing tool, and a survey administered to laboratory managers and trainers determined how technical competence is assessed in HPCSA-registered training laboratories. The data were collected and analysed using the statistical software SPSS version 24.0. Results: Some candidates directly observed in each of the Clinical Pathology test procedures were deemed not yet competent on the direct observation checklist with respect to compliance with and adherence to SOPs, acceptability of results, internal quality control procedures, acceptability of the outcome, and availability of signed training and competency records. These technical competence assessment results were compared with the candidates' National Board Examination results, and no correlation was found between the two except for the Microbiology sub-discipline and the general section. The survey of competency assessment practices in 9 HPCSA-registered training laboratories revealed that 100% of respondents have a technical competence laboratory policy; 90% identified the laboratory manager as responsible for ensuring assessment of staff competency; 100% stated that competency testing took place on initial employment and once every two years thereafter; 90% had clear criteria defining competency assessment; and 100% indicated that the remedial process used in their laboratories was documented corrective action, including re-training and re-assessment. Conclusion: Assessment of the technical competency of Intern Medical Technologists in Clinical Pathology could augment the current assessment system for conferment of professional designation, and a policy review is recommended.

2016, Vol 30 (2), pp. 104-107
Author(s): Amilliah W. Kenya, John F. Hart, Charles K. Vuyiya

Objective: This study compared National Board of Chiropractic Examiners Part I test scores between students who did and did not serve as tutors on the subject matter. Methods: Students with a grade point average of 3.45 or above on a 4.0 scale just before taking Part I of the board exams were eligible to participate. A 2-sample t-test was used to ascertain the difference in mean Part I scores between the tutor group (n = 28) and the nontutor group (n = 29). Results: Scores were higher in all subjects for the tutor group than for the nontutor group, and the differences were statistically significant (p < .01), with large effect sizes. Conclusion: The tutors in this study performed better on Part I of the board examination than nontutors, suggesting that tutoring confers an academic benefit on the tutors themselves.
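The comparison above rests on a standard 2-sample t-test with an accompanying effect size. A minimal sketch of how such a comparison is computed, using illustrative scores rather than the study's actual data:

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic and its
    degrees of freedom, assuming equal variances."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

def cohens_d(a, b):
    """Effect size: difference of means over the pooled standard deviation.
    Values above ~0.8 are conventionally called 'large'."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2)

# Illustrative exam scores only -- not the study's data.
tutors = [82, 85, 88, 90, 84, 87, 91, 86]
nontutors = [75, 78, 80, 74, 77, 79, 76, 73]
t, df = two_sample_t(tutors, nontutors)
d = cohens_d(tutors, nontutors)
```

In practice one would compare `t` against the critical value for `df` degrees of freedom (or use a library such as SciPy to obtain the p-value directly), as the study presumably did.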


2005, Vol 129 (10), pp. 1262-1267
Author(s): Frederick A. Meier, Bruce A. Jones

Abstract Context.—In a survey performed 4 years ago, testing venues doing only point-of-care testing (POCT) made up 78% of sites for patient testing licensed under federal regulations. Objectives.—To identify sources of POCT error, to present a classification of such errors, to suggest strategies to prevent errors, and to describe monitors that assess and reduce the frequency of errors. Design.—To identify sources of POCT error, large studies of error among US Federal Certificate of Waiver laboratories (CoWs) and practitioner-performed microscopy certificate holders were reviewed. To facilitate investigation and management of POCT error, a taxonomy of such errors (modified from a classification previously published by Gerald Kost) was used to identify 4 steps with error potential in each of the 3 phases (ie, preanalytic, analytic, and postanalytic) of the POCT process. To prevent observed POCT errors, 4 strategies are suggested: direct observation of instrument/method functionality, structured observation of method performance, proficiency testing/use of relevant test scenarios, and autonomation. To assess frequency of errors, a quartet of indices is introduced as detection monitors: order documentation, patient identification, specimen adequacy, and result integrity. Results.—Three sources of POCT error were identified: operator incompetence, nonadherence to test procedures, and use of uncontrolled reagents and equipment. Three other characteristics of many point-of-care tests amplify their risk of error: incoherent regulation, rapid availability of results, and the results' immediate therapeutic implications. Two members of the quartet of detection monitors, order documentation and specimen adequacy, are relatively difficult to measure and are controversial, but the other 2, patient identification and result integrity, are easier to assess and are relatively widely accepted.
Conclusions.—Point-of-care testing errors are relatively common, their frequency is amplified by incoherent regulation, and their likelihood of affecting patient care is amplified by the rapid availability of POCT results and the results' immediate therapeutic implications. The modified Kost taxonomy offers a reasonable approach to the identification of POCT errors. Direct observation of test functionality, structured observation of test performance, and testing the competence of POCT operators, as well as autonomation of devices, are strategies to prevent such errors. In this context, we suggest monitoring POCT order documentation, patient identification, specimen integrity, and result reporting to detect errors in this sort of testing.


2002, Vol 66 (5), pp. 643-648
Author(s): Taline Dadian, Kathy Guerink, Cynthia Olney, John Littlefield

2012, Vol 4 (2), pp. 220-226
Author(s): Drew M. Keister, Daniel Larson, Julie Dostal, Jay Baglia

Abstract Background Despite the movement toward competency-based assessment by accrediting bodies in recent years, there is no consensus on how to best assess medical competence. Direct observation is a useful tool. At the same time, a comprehensive assessment system based on direct observation has been difficult to develop. Intervention We developed a system that translates data obtained from checklists of observed behaviors completed during educational activities, including direct observation of clinical care, into a graphic tool (the “radar graph”) usable for both formative and summative assessment. Using unique, observable behaviors to evaluate levels of competency on the Dreyfus scale, we assessed resident performance in 6 learning sites within our residency. Data are represented on a radar graph, which residents and faculty used to recognize both strengths and areas for growth to guide educational planning for the individual learner. Results Initial data show that the radar graphs have construct validity because the development process accurately reflects the desired construct, assessors were adequately trained, and the radar graphs demonstrated resident growth over time. A form completion rate of 90% for >1500 disseminated assessments suggests the feasibility of our process. Conclusions The radar graph is a promising tool for use in resident feedback and competency assessment. Further research is needed to determine the full utility of the radar graphs, including a better understanding of the tool's reliability and construct validity.
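A radar (spider) graph of this kind can be generated programmatically: each competency dimension gets one spoke, each score a radius, and the polygon is closed so the trace loops back to its start. A minimal sketch of that mapping, where the site names and Dreyfus-scale ratings are purely illustrative, not data from the study:

```python
import math

def radar_points(labels, values, scale_max=5):
    """Map scores (e.g. Dreyfus-scale levels 1..scale_max) to polar
    coordinates for a radar chart: one evenly spaced angle per label,
    radii normalised to [0, 1], with the polygon closed so the last
    point rejoins the first."""
    n = len(labels)
    angles = [2 * math.pi * i / n for i in range(n)]
    radii = [v / scale_max for v in values]
    # Repeat the first point at the end to close the loop.
    return angles + angles[:1], radii + radii[:1]

# Hypothetical ratings across six learning sites (illustrative names).
sites = ["Inpatient", "Outpatient", "Obstetrics",
         "Pediatrics", "Emergency", "Procedures"]
ratings = [3, 4, 2, 3, 5, 3]
angles, radii = radar_points(sites, ratings)
# With matplotlib, these could be drawn on a polar axis, e.g.:
#   ax = plt.subplot(polar=True); ax.plot(angles, radii)
```

Overlaying one such polygon per assessment period makes growth over time visible at a glance, which is presumably what lets residents and faculty spot strengths and gaps.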

