Correlations between the scores of computerized adaptive testing, paper and pencil tests, and the Korean Medical Licensing Examination

Author(s):  
Mee Young Kim ◽  
Yoon Hwan Lee ◽  
Sun Huh

To evaluate the usefulness of computerized adaptive testing (CAT) in medical school, the General Examination for senior medical students was administered both as a paper and pencil test (P&P) and as a CAT. The General Examination is a graduation examination that also serves as a preliminary examination for the Korean Medical Licensing Examination (KMLE). The correlations among the CAT, P&P, and KMLE results were analyzed. The correlation between the CAT and P&P was 0.8013 (p=0.000); that between the P&P and KMLE was 0.7861 (p=0.000); and that between the CAT and KMLE was 0.6436 (p=0.000). Six of the 12 students with an ability estimate below -0.52 failed the KMLE; accordingly, when the theta criterion was set at -0.52 (a value chosen arbitrarily for predicting pass or failure), the ability of the CAT to predict KMLE failure was 0.5. These results suggest that CAT could replace P&P testing in medical school.
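The core of the analysis above is a Pearson correlation between two score sets plus a pass/fail prediction at a theta cutoff. A minimal sketch in Python, using invented scores (not the study's data):

```python
import numpy as np

# Hypothetical CAT ability estimates (theta) and P&P scores for 8 examinees;
# illustrative only, not the study's actual data.
cat_theta = np.array([-1.2, -0.8, -0.3, 0.1, 0.4, 0.9, 1.3, 1.8])
pp_score = np.array([45.0, 52.0, 58.0, 61.0, 66.0, 72.0, 78.0, 85.0])

# Pearson correlation between the two test administrations
r = np.corrcoef(cat_theta, pp_score)[0, 1]

# Flag examinees below an arbitrary theta cutoff as predicted failures
cutoff = -0.52
predicted_fail = cat_theta < cutoff
print(round(r, 4), int(predicted_fail.sum()))
```

With real data, the predicted-failure flags would then be cross-tabulated against actual KMLE outcomes to estimate predictive accuracy.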

Author(s):  
Kun Hwang

The purpose of this study was to examine the opinions of medical students and physician writers regarding the medical humanities as a subject and its inclusion in the medical school curriculum. Furthermore, we addressed whether an assessment of the medical humanities should be added to the National Medical Licensing Examination of Korea (KMLE). A total of 192 medical students at Inha University and 39 physician writers registered with the Korean Association of Physician Essayists and the Korean Association of Physician Poets participated in this study by completing a questionnaire. Most medical students (59%) and all physician writers (100%) answered that the medical humanities should be included in the medical school curriculum to train good physicians. Both groups thought that the KMLE did not currently include an assessment of the medical humanities (medical students 69%, physician writers 69%). Most physician writers (87%; Likert scale, 4.38 ± 0.78) felt that an assessment of the medical humanities should be included in the KMLE, whereas half of the medical students (51%; Likert scale, 2.51 ± 1.17) were against including it in an examination they would have to pass after several years of study. Medical ethics was the most commonly endorsed field of assessment (medical students 59%, physician writers 39%), and an interview was the most frequently preferred evaluation method (medical students 45%, physician writers 33%). If the medical humanities are to be assessed as part of the KMLE, an interview-based evaluation should be developed.


Author(s):  
Dong Gi Seo ◽  
Jeongwook Choi

Purpose: Computerized adaptive testing (CAT) has been adopted in licensing examinations because, as many studies have shown, it improves testing efficiency and accuracy. This simulation study investigated CAT scoring and item selection methods for the Korean Medical Licensing Examination (KMLE). Methods: This study used a post-hoc (real data) simulation design. The item bank comprised all items from the January 2017 KMLE. All CAT algorithms were implemented using the ‘catR’ package in R. Results: In terms of accuracy, the Rasch and 2-parameter logistic (2PL) models performed better than the 3PL model. The modal a posteriori and expected a posteriori methods provided more accurate estimates than maximum likelihood estimation or weighted likelihood estimation. Furthermore, maximum posterior weighted information and minimum expected posterior variance performed better than other item selection methods. In terms of efficiency, the Rasch model is recommended for reducing test length. Conclusion: Before implementing live CAT, a simulation study should be performed under varied test conditions, and specific scoring and item selection methods should be predetermined based on its results.
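A post-hoc CAT simulation of the kind described combines three ingredients: an item response model, adaptive item selection, and an ability estimator. A minimal Python sketch (the study used the ‘catR’ R package; this uses a simulated Rasch item bank, maximum-information selection, and an expected a posteriori estimate, all with invented data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Rasch item bank of 200 difficulty parameters; not KMLE items.
difficulties = rng.normal(0.0, 1.0, size=200)

def prob_correct(theta, b):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap_estimate(responses, admin_b):
    """Expected a posteriori ability estimate on a grid, N(0, 1) prior."""
    grid = np.linspace(-4.0, 4.0, 81)
    post = np.exp(-0.5 * grid**2)  # standard normal prior (unnormalized)
    for x, b in zip(responses, admin_b):
        p = prob_correct(grid, b)
        post = post * (p if x else 1.0 - p)
    return float(np.sum(grid * post) / np.sum(post))

def run_cat(true_theta, n_items=20):
    """Administer n_items adaptively; return the final ability estimate."""
    available = list(range(len(difficulties)))
    responses, admin_b = [], []
    theta = 0.0
    for _ in range(n_items):
        # Under the Rasch model, Fisher information is maximal when the
        # item difficulty is closest to the current ability estimate.
        j = min(available, key=lambda i: abs(difficulties[i] - theta))
        available.remove(j)
        b = difficulties[j]
        x = bool(rng.random() < prob_correct(true_theta, b))
        responses.append(x)
        admin_b.append(b)
        theta = eap_estimate(responses, admin_b)
    return theta

est = run_cat(true_theta=1.0)
```

Repeating `run_cat` over many simulated examinees and comparing estimates with the true abilities is how accuracy and test-length efficiency would be compared across models and selection rules.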


Author(s):  
Rachel B. Levine ◽  
Andrew P. Levy ◽  
Robert Lubin ◽  
Sarah Halevi ◽  
Rebeca Rios ◽  
...  

Purpose: United States (US) and Canadian citizens attending medical school abroad often wish to return to the US for residency, and therefore must pass US licensing exams. We describe a 2-day United States Medical Licensing Examination (USMLE) step 2 clinical skills (CS) preparation course for students in the Technion American Medical School program (Haifa, Israel) between 2012 and 2016. Methods: Students completed pre- and post-course questionnaires. The paired t-test was used to measure students’ perceptions of knowledge, preparation, confidence, and competence in CS pre- and post-course. Analysis of variance was used to test for differences by gender or country of birth. We compared USMLE step 2 CS pass rates between the 5 years prior to the course and the 5 years during which the course was offered. Results: Ninety students took the course between 2012 and 2016. Course evaluations began in 2013. Seventy-three students agreed to participate in the evaluation, and 64 completed both the pre- and post-course surveys. Of the 64 students, 58% were US-born and 53% were male. Students reported statistically significant improvements in confidence and competence in all areas. No differences were found by gender or country of origin. The average pass rate was 82% for the 5 years prior to the course and 89% for the 5 years of the course. Conclusion: A CS course delivered at an international medical school may help to close the gap between the pass rates of US and international medical graduates on a high-stakes licensing exam. More experience is needed to determine whether this model is replicable.
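The paired t-test used above compares each student's pre- and post-course self-ratings. A minimal SciPy sketch with invented Likert ratings (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical pre-/post-course confidence ratings on a 1-5 Likert scale
# for 10 students; illustrative only, not the study's survey data.
pre = np.array([2, 3, 2, 3, 4, 2, 3, 3, 2, 4])
post = np.array([4, 4, 3, 4, 5, 3, 4, 4, 3, 5])

# Paired t-test: each student serves as their own control
t_stat, p_value = stats.ttest_rel(post, pre)
print(round(t_stat, 2), p_value)
```

The pairing matters: because the same students answer both surveys, the test is on the within-student differences rather than on two independent groups.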


Author(s):  
Mi Kyoung Yim

Purpose: This study aimed to identify the effect of five examinee variables on scores of the Korean Medical Licensing Examination (KMLE) over three consecutive years, from 2011 to 2013. Methods: The number of examinees was 3,364 in 2011, 3,177 in 2012, and 3,287 in 2013. Five characteristics of examinees were set as variables: gender, age, graduation status, written test result (pass or fail), and city of medical school. A regression model was established with the written test score as the dependent variable and the examinees’ characteristics as independent variables. Results: The regression coefficients of all variables except city of medical school were statistically significant. Across the three examinations, the variables’ effects appeared in the following order: written test result, graduation status, age, gender, and city of medical school. Conclusion: Written test scores of the KMLE revealed that female, younger, and first-time examinees performed better.
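The regression set-up above, a test score regressed on examinee characteristics, can be sketched as an ordinary least-squares fit in NumPy. All data, coefficients, and variable codings below are simulated for illustration, not the KMLE's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical examinee characteristics (codings are assumptions):
gender = rng.integers(0, 2, n)        # 1 = female
age = rng.normal(26.0, 2.0, n)        # years
repeat = rng.integers(0, 2, n)        # 1 = repeat examinee

# Simulated written scores with known coefficients plus noise
score = 70 + 2.0 * gender - 0.8 * age - 4.0 * repeat + rng.normal(0, 3, n)

# Design matrix with an intercept column; least-squares fit
X = np.column_stack([np.ones(n), gender, age, repeat])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(np.round(beta, 2))  # [intercept, gender, age, repeat] estimates
```

On real data one would also compute standard errors and p-values for each coefficient (e.g., with statsmodels) to judge statistical significance, as the study did.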


Author(s):  
Eun Young Lim ◽  
Mi Kyoung Yim ◽  
Sun Huh

The aim of this study was to investigate respondents’ satisfaction with smart device-based testing (SBT), as well as its convenience and advantages, in order to improve its implementation. The survey was conducted among 108 junior medical students at Kyungpook National University School of Medicine, Korea, who took a practice licensing examination using SBT in September 2015. The survey contained 28 items scored using a 5-point Likert scale. The items were divided into the following three categories: satisfaction with SBT administration, convenience of SBT features, and advantages of SBT compared to paper-and-pencil testing or computer-based testing. The reliability of the survey was 0.95. Of the three categories, the convenience of the SBT features received the highest mean (M) score (M= 3.75, standard deviation [SD]= 0.69), while the category of satisfaction with SBT received the lowest (M= 3.13, SD= 1.07). No statistically significant differences across these categories with respect to sex, age, or experience were observed. These results indicate that SBT was practical and effective to take and to administer.
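The abstract reports a survey reliability of 0.95; assuming this refers to internal consistency such as Cronbach's alpha (an assumption, since the abstract does not name the coefficient), it can be computed as follows on invented Likert responses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical responses: 20 respondents x 5 Likert items (1-5), built as a
# shared base rating plus item noise; illustrative only, not the survey data.
base = rng.integers(1, 6, size=(20, 1))
items = np.clip(base + rng.integers(-1, 2, size=(20, 5)), 1, 5)

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Items driven by a common underlying rating, as here, covary strongly and yield a high alpha; unrelated items would push alpha toward zero.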


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ling Wang ◽  
Heather S. Laird-Fick ◽  
Carol J. Parker ◽  
David Solomon

Abstract Background Medical students must meet curricular expectations and pass national licensing examinations to become physicians. However, no previous studies have explicitly modeled the stages by which medical students acquire basic science knowledge. In this study, we employed an innovative statistical model to characterize students’ growth using progress testing results over time and to predict licensing examination performance. Methods All students who matriculated in 2016 or 2017 in our medical school and had USMLE Step 1 test scores were included in this retrospective cohort study (N = 358). The Markov chain method was employed to: 1) identify latent states of acquiring scientific knowledge based on progress tests and 2) estimate students’ transition probabilities between states. The primary outcome of this study, United States Medical Licensing Examination (USMLE) Step 1 performance, was predicted based on students’ estimated probabilities of being in each latent state identified by the Markov chain model. Results Four latent states were identified based on students’ progress test results: Novice, Advanced Beginner I, Advanced Beginner II, and Competent. At the end of the first year, students predicted to remain in the Novice state had lower mean Step 1 scores than those in the Competent state (209, SD = 14.8 versus 255, SD = 10.8, respectively) and more first-attempt failures (11.5% versus 0%). Regression analysis showed that, at the end of the first year, a 10% higher probability of remaining in the Novice state predicted a Step 1 score 2.0 points lower (95% CI: 0.85–2.81; P < .01), whereas a 10% higher probability of being in the Competent state predicted a score 4.3 points higher (95% CI: 2.92–5.19; P < .01). Similar findings were observed at the end of the second year of medical school.
Conclusions Using the Markov chain model to analyze longitudinal progress test performance offers a flexible and effective way to identify students’ transitions across latent stages of acquiring scientific knowledge. The results can help identify students who are at risk of licensing examination failure and may benefit from targeted academic support.
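The transition-probability step of a Markov chain analysis reduces to counting observed state-to-state moves and normalizing each row. A minimal sketch using the four state labels above but invented sequences (in the study the states were themselves latent and estimated, so this shows only the counting idea):

```python
import numpy as np

# Hypothetical per-student state sequences across five progress tests
# (0 = Novice, 1 = Advanced Beginner I, 2 = Advanced Beginner II,
#  3 = Competent); illustrative only, not the study's fitted states.
sequences = [
    [0, 0, 1, 2, 3],
    [0, 1, 1, 2, 2],
    [0, 1, 2, 3, 3],
    [0, 0, 0, 1, 1],
]

# Count observed transitions, then normalize each row to probabilities
n_states = 4
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition, 2))
```

Each row of `transition` sums to 1 and gives the estimated probability of moving from that state to each other state between consecutive tests; a student's state probabilities at year's end then feed the score-prediction regression.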


2021 ◽  
Vol 108 (Supplement_6) ◽  
Author(s):  
S J K Chong ◽  
L Mortimer ◽  
C Quick ◽  
L West ◽  
G Khera

Abstract Aim A UK teaching hospital expanded its established education fellow programme to the General Surgery department to assist with departmental teaching of third-year medical students from the affiliated medical school on clinical placement. Teaching on ward rounds, bedside teaching, and clinical tutorials were three areas identified as requiring improvement based on previous student feedback. Observation of upper and lower gastrointestinal (GI) malignancy multi-disciplinary meetings (MDMs) via Microsoft Teams was also introduced as a new teaching initiative. Method Four post-foundation-training education fellows were allocated on alternating weeks to supervise third-year medical students on upper and lower GI ward rounds and during GI MDM observation, to conduct bedside teaching, and to facilitate blended-learning clinical tutorials in accordance with the 2020 GMC Medical Licensing Assessment curriculum. A mixed-methods survey was sent to students after their surgical placement and the results compared with student feedback from previous years. Results Of the 52 students on placement, 31 (60%) responded. All respondents (100%) rated the fellow-led clinical tutorials as “excellent”, and 87% rated the upper and lower GI ward rounds as either “excellent” (52%) or “good” (35%). All respondents rated the implementation of education fellows as either “very helpful” (94%) or “helpful” (6%) for their learning. Most students rated the MDM observation initiative as “good” (36%) or “average” (36%). Conclusions Implementation of education fellows on third-year medical student placements in General Surgery supports self-reported student learning and is associated with a markedly improved student learning experience. More work is required to develop GI MDM-based teaching to improve the student learning experience of MDMs.

