faculty ratings
Recently Published Documents

TOTAL DOCUMENTS: 47 (five years: 0)
H-INDEX: 9 (five years: 0)

2019 · Vol 21 (1) · pp. 145-148
Author(s): Matthew Hall, Jason Lewis, Joshua Joseph, Andrew Ketterer, Carlo Rosen, et al.

The Standardized Video Interview (SVI) was developed by the Association of American Medical Colleges to assess professionalism, communication, and interpersonal skills of residency applicants. How SVI scores compare with other measures of these competencies is unknown. The goal of this study was to determine whether there is a correlation between the SVI score and both faculty and patient ratings of these competencies in emergency medicine (EM) applicants. This was a retrospective analysis of a prospectively collected dataset of medical students. Students enrolled in the fourth-year EM clerkship at our institution and who applied to the EM residency Match were included. We collected faculty ratings of the students’ professionalism and patient care/communication abilities as well as patient ratings using the Communication Assessment Tool (CAT) from the clerkship evaluation forms. Following completion of the clerkship, students applying to EM were asked to voluntarily provide their SVI score to the study authors for research purposes. We compared SVI scores with the students’ faculty and patient scores using Spearman’s rank correlation. Of the 43 students from the EM clerkship who applied to EM during the 2017-2018 and 2018-2019 application cycles, 36 provided their SVI scores. All 36 had faculty evaluations and 32 had CAT scores available. We found that SVI scores did not correlate with faculty ratings of professionalism (rho = 0.09, p = 0.13), faculty assessment of patient care/communication (rho = 0.12, p = 0.04), or CAT scores (rho = 0.11, p = 0.06). Further studies are needed to validate the SVI and determine whether it is indeed a predictor of these competencies in residency.
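
As a rough illustration of the analysis described above, a Spearman rank correlation between SVI scores and clerkship ratings can be computed with scipy; every value and variable name below is invented for illustration and does not come from the study.

```python
from scipy.stats import spearmanr

# Invented example data: one SVI total and one mean faculty rating per applicant.
svi_scores      = [17, 19, 21, 16, 22, 18, 20, 23, 19, 21]
faculty_ratings = [4.2, 4.5, 4.1, 3.9, 4.8, 4.0, 4.4, 4.6, 4.3, 4.1]

# Spearman's rank correlation between the two sets of scores.
rho, p = spearmanr(svi_scores, faculty_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```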


2019 · Vol 51 (6) · pp. 483-499
Author(s): Susan Rosenthal, Stefani Russo, Katherine Berg, Joseph Majdan, Jennifer Wilson, et al.

Background and Objectives: New standards announced in 2017 could increase the failure rate for Step 2 Clinical Skills (CS). The purpose of this study was to identify student performance metrics associated with risk of failing. Methods: Data for 1,041 graduates of one medical school from 2014 through 2017 were analyzed, including 30 (2.9%) failures. Metrics included Medical College Admission Test, United States Medical Licensing Examination Step 1, and clerkship National Board of Medical Examiners (NBME) Subject Examination scores; faculty ratings in six clerkships; and scores on an objective structured clinical examination (OSCE). Bivariate statistics and regression were used to estimate risk of failing. Results: Those failing had lower Step 1 scores, NBME scores, faculty ratings, and OSCE scores (P<.02). Students with four or more low ratings were more likely to fail compared to those with fewer low ratings (relative risk [RR] 12.76, P<.0001). Logistic regression revealed other risks: low surgery NBME scores (RR 3.75, P=.02), low pediatrics NBME scores (RR 3.67, P=.02), low ratings in internal medicine (RR 3.42, P=.004), and low OSCE Communication/Interpersonal Skills scores (RR 2.55, P=.02). Conclusions: Certain medical student performance metrics are associated with risk of failing Step 2 CS. It is important to identify these risks and advise students accordingly.
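
For illustration, the relative-risk and regression estimates described above could be computed along the following lines; the data and variable names are invented and do not reflect the study's cohort.

```python
import numpy as np
import statsmodels.api as sm

# Invented example data:
# 1 = student had four or more low faculty ratings, 0 = fewer.
low_ratings = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0])
# 1 = student failed Step 2 CS, 0 = passed.
failed      = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0])

# Relative risk from the 2x2 table: P(fail | >=4 low ratings) / P(fail | fewer).
rr = failed[low_ratings == 1].mean() / failed[low_ratings == 0].mean()

# Logistic regression estimating the same association on the odds scale.
fit = sm.Logit(failed, sm.add_constant(low_ratings)).fit(disp=False)
odds_ratio = np.exp(fit.params[1])
print(f"relative risk = {rr:.2f}, odds ratio = {odds_ratio:.2f}")
```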


2017 · Vol 9 (5) · pp. 605-610
Author(s): L. Jane Easdown, Marsha L. Wakefield, Matthew S. Shotwell, Michael R. Sandison

Background: Faculty members need to assess resident performance using the Accreditation Council for Graduate Medical Education Milestones. Objective: In this randomized study we used an objective structured clinical examination (OSCE) around the disclosure of an adverse event to determine whether use of a checklist improved the quality of milestone assessments by faculty. Methods: In 2013, a total of 20 anesthesiology faculty members from 3 institutions were randomized to 2 groups to assess 5 videos of trainees demonstrating advancing levels of competency on the OSCE. One group used milestones alone, and the other used milestones plus a 13-item checklist with behavioral anchors based on ideal performance. We classified faculty ratings as either correct or incorrect with regard to the competency level demonstrated in each video, and then used logistic regression analysis to assess the effect of checklist use on the odds of correct classification. Results: Thirteen of 20 faculty members rated assessing performance using milestones alone as difficult or very difficult. Checklist use was associated with significantly greater odds of correct classification at entry level (odds ratio [OR] = 9.2, 95% confidence interval [CI] 4.0–21.2) and at junior level (OR = 2.7, 95% CI 1.3–5.7) performance. For performance at other competency levels, checklist use did not affect the odds of correct classification. Conclusions: A majority of anesthesiology faculty members reported difficulty with assessing a videotaped OSCE of error disclosure using milestones as primary assessment tools. Use of the checklist assisted in correct assessments at the entry and junior levels.
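
A hedged sketch of the logistic-regression step described above, estimating an odds ratio (with 95% CI) for correct classification with versus without the checklist, might look like the following; the ratings are invented, and the sketch ignores the clustering of repeated ratings per faculty member.

```python
import numpy as np
import statsmodels.api as sm

# Invented example data:
# 1 = rater used milestones plus the checklist, 0 = milestones alone.
checklist = np.array([1] * 10 + [0] * 10)
# 1 = rater classified the video at the intended competency level.
correct   = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1,
                      1, 0, 0, 1, 0, 1, 0, 0, 1, 0])

# Logistic regression of correct classification on checklist use.
fit = sm.Logit(correct, sm.add_constant(checklist)).fit(disp=False)
odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")
```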


2010 · Vol 34 (4) · pp. 213-216
Author(s): John A. McNulty, Gregory Gruener, Arcot Chandrasekhar, Baltazar Espiritu, Amy Hoyt, et al.

Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure web-based system was developed to collect student evaluations that included numerical rankings (1–5) of faculty performance and a section for comments. The grades that students received in the course were added to the data, which were sorted according to the time of submission of the evaluations and analyzed by Pearson's correlation and Student's t-test. Only 26% of students elected to submit evaluations before completion of the course, and the average faculty ratings from these evaluations were highly correlated [r(14) = 0.91] with the evaluations submitted after completion of the course. Faculty evaluations were also significantly correlated with those from the previous year [r(14) = 0.88]. Concurrent evaluators provided more comments, which were significantly longer and subjectively scored as more “substantive.” Students who submitted their evaluations during the course and who included comments had significantly higher final grades in the course. In conclusion, the numeric ratings that faculty received were not influenced by the timing of student evaluations. However, students who submitted early evaluations tended to be more engaged, as evidenced by their more substantive comments and their better performance on exams. The consistency of faculty evaluations from year to year, and between concurrent and end-of-course submissions, suggests that faculty tend not to make significant adjustments in response to student evaluations.
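
As an illustrative sketch of the analyses described above (Pearson's correlation between concurrent and end-of-course ratings, and a t-test on comment length), with invented values rather than the study's data:

```python
from scipy.stats import pearsonr, ttest_ind

# Mean rating each faculty member received from evaluations submitted during
# the course vs. after it ended (invented; r(14) implies 16 faculty, df = n - 2).
during = [4.1, 3.8, 4.5, 4.0, 3.9, 4.7, 4.2, 3.6, 4.4, 4.3, 3.7, 4.6, 4.0, 4.1, 3.9, 4.5]
after  = [4.0, 3.9, 4.4, 4.1, 3.8, 4.6, 4.3, 3.7, 4.5, 4.2, 3.8, 4.5, 4.1, 4.0, 4.0, 4.4]
r, p = pearsonr(during, after)

# Word counts of free-text comments from concurrent vs. end-of-course evaluators.
concurrent_words = [42, 55, 38, 61, 47, 50, 44, 39]
end_words        = [20, 31, 25, 18, 27, 22, 24, 19]
t, p_t = ttest_ind(concurrent_words, end_words)
print(f"r(14) = {r:.2f}, p = {p:.3f}; comment length: t = {t:.2f}, p = {p_t:.3f}")
```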


2010 · Vol 85 · pp. S25-S28
Author(s): Jennifer R. Kogan, Brian J. Hess, Lisa N. Conforti, Eric S. Holmboe

2009 · Vol 1 (2) · pp. 273-277
Author(s): Satish Krishnamurthy, Usha Satish, Tina Foster, Siegfried Streufert, Mantosh Dewan, et al.

Rationale: Accurate assessment of resident competency is a fundamental requisite to assure that the training of physicians is adequate. In surgical disciplines, structured tests as well as ongoing evaluation by faculty are used to evaluate resident competency. Although structured tests evaluate content knowledge, faculty ratings are a better measure of how that knowledge is applied to real-world problems. In this study, we sought to explore the performance of surgical residents in a simulation exercise (strategic management simulations [SMS]) as an objective surrogate of real-world performance. Methods: Forty surgical residents participated in the SMS simulation, which entailed decision making in a real-world-oriented task situation. The task requirements enable the assessment of decision making along several parameters of thinking under both crisis and noncrisis situations. Performance attributes range from simpler measures of competency (activity level), through intermediate categories (information management and emergency responses), to complex measures (breadth of approach and strategy). Scores obtained in the SMS were compared with the scores obtained on the American Board of Surgery In-Training Examination (ABSITE). Results: The data were intercorrelated and subjected to a multiple regression analysis with ABSITE as the dependent variable and simulation scores as independent variables. Using a 1-tailed test, only 3 simulation variables correlated with performance on the ABSITE at the .01 level (ie, basic activity, focused activity, task orientation). The other simulation variables showed no meaningful relationship to ABSITE scores. Conclusions: The more complex, real-world-oriented decision-making parameters of the SMS did not correlate with ABSITE scores. We believe that techniques such as the SMS, which focus on critical thinking, complement the assessment of medical knowledge using the ABSITE. The SMS technique provides an accurate measure of real-world performance and offers objective validation of faculty ratings.
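
A rough sketch of the regression described above, with ABSITE score as the dependent variable and SMS parameters as predictors; all values are simulated for illustration and bear no relation to the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # the study enrolled 40 residents; everything below is simulated

# Simulated, standardized SMS performance parameters, from simpler to more complex.
basic_activity   = rng.normal(size=n)
focused_activity = rng.normal(size=n)
task_orientation = rng.normal(size=n)
breadth_strategy = rng.normal(size=n)

# Simulated ABSITE scores (arbitrary scale) related only to the simpler parameters.
absite = 500 + 40 * basic_activity + 30 * focused_activity + rng.normal(scale=60, size=n)

# Multiple regression: ABSITE on the four SMS parameters.
X = sm.add_constant(np.column_stack(
    [basic_activity, focused_activity, task_orientation, breadth_strategy]))
fit = sm.OLS(absite, X).fit()
print(fit.summary())
```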

