ANALYSIS OF PROGRESS TEST RESULTS IN MEDICAL FACULTY STUDENTS

Author(s):  
Ade Pryta Romanauli Simaremare

Background: Assessment of learning outcomes is important evaluation material for showing how the teaching and learning process has been carried out. It can be obtained through formative and summative assessment, after which students are given feedback on the results. One method of formative evaluation is the progress test. Since its implementation at the HKBP Nommensen University Faculty of Medicine, the results of the Progress Test had never been analyzed. This study was conducted to analyze the results of the Progress Test held in the even semester of the 2018/2019 academic year. Methods: This study used a descriptive observational design with a cross-sectional method. The sample comprised all 215 students of the Faculty of Medicine who were actively studying in the even semester of the 2018/2019 academic year. Item analysis was done on the questions in the basic and clinical medicine categories, examining the level of difficulty and the discrimination index by students' study period. Results: The passing rate of students who took the progress test was very low. However, the scores achieved by the students increased with the length of their study period. Item analysis showed that most items were of medium difficulty and that most had poor discrimination indices, in both the basic and the clinical medicine science categories. Conclusion: Progress testing can be used as a tool to help curriculum designers track the development of students' knowledge, both individually and at the population level.
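As a hedged illustration of the kind of cohort summary this abstract describes, the sketch below groups hypothetical progress-test scores by study period and reports mean score and pass rate per cohort. The column names, data, and pass mark are assumptions, not values from the paper.

```python
import pandas as pd

# Hypothetical long-format results: one row per student.
df = pd.DataFrame({
    "study_period_years": [1, 1, 2, 2, 3, 3, 4, 4],
    "score":              [32, 40, 45, 51, 55, 60, 62, 70],
})

PASS_MARK = 60  # assumed cut-off; the abstract does not state one

# Mean score and headcount per study-period cohort.
summary = df.groupby("study_period_years")["score"].agg(["mean", "count"])
# Pass rate per cohort, as a percentage.
summary["pass_rate_%"] = (
    df.assign(passed=df["score"] >= PASS_MARK)
      .groupby("study_period_years")["passed"].mean() * 100
)
print(summary)  # mean score rises with study period, as the paper reports
```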

2021
Vol 13 (2)
pp. 1425-1431
Author(s):  
Andi Rahman

The current Covid-19 pandemic has affected many aspects of human life globally, including the delivery of education. This study aimed to determine the impact of the Covid-19 pandemic on learning outcomes in higher education. The research method was a cross-sectional study. The data were taken from test results at the end of the lectures, observations, and interviews. The research was conducted at the University of Muhammadiyah Lampung, the IPDN Jatinangor Campus, and the Ahmad Dahlan Institute of Technology and Business, with 120 students participating. The data were analyzed using percentages and cross-tabulation. The study concluded that student learning outcomes declined in the 2020-2021 academic year compared to the 2019-2020 academic year. The decline in learning outcomes covered knowledge, skills, and psychological aspects. This finding has implications for how education personnel understand online teaching and learning design during the Covid-19 pandemic.
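The percentage and cross-tabulation technique mentioned above can be sketched as follows; the data values and labels are illustrative assumptions, not the study's data.

```python
import pandas as pd

# Hypothetical per-student records across the two academic years compared.
df = pd.DataFrame({
    "academic_year": ["2019-2020"] * 4 + ["2020-2021"] * 4,
    "outcome":       ["high", "high", "medium", "low",
                      "medium", "low", "low", "low"],
})

# Row-normalized cross-tabulation: outcome distribution within each year.
table = pd.crosstab(df["academic_year"], df["outcome"], normalize="index") * 100
print(table.round(1))  # a shift toward "low" would indicate declining outcomes
```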


Author(s):  
Ajeet Kumar Khilnani
Rekha Thaddanee
Gurudas Khilnani

Background: Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and to analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology can be developed. Methods: 158 students of the 7th semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index and distractor efficiency. Results: The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I >60%). There was no item with Diff-I <30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%). Conclusions: Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank can be developed.
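The indices reported here follow standard item-analysis arithmetic. The sketch below is a minimal illustration of those formulas, assuming the common conventions Diff-I = (H + L)/N × 100, DI = (H − L)/n, and an NFD threshold of <5% selection; the example data are invented, not taken from this test.

```python
def difficulty_index(h_correct, l_correct, n_group):
    """Percent correct across the high and low groups (n_group students each)."""
    return (h_correct + l_correct) / (2 * n_group) * 100

def discrimination_index(h_correct, l_correct, n_group):
    """How much better the high group does than the low group (range -1..1)."""
    return (h_correct - l_correct) / n_group

def distractor_efficiency(choice_counts, key, n_total, threshold=0.05):
    """Share of wrong options chosen by at least `threshold` of examinees."""
    distractors = [c for opt, c in choice_counts.items() if opt != key]
    functional = sum(1 for c in distractors if c / n_total >= threshold)
    return functional / len(distractors) * 100

# Example: 52 high and 52 low achievers (top/bottom 33% of 158 students).
print(difficulty_index(40, 24, 52))       # 61.5% -> "easy" by the >60% rule
print(discrimination_index(40, 24, 52))   # 0.31  -> very good (>0.25)
print(distractor_efficiency({"A": 93, "B": 40, "C": 20, "D": 5},
                            key="A", n_total=158))  # 66.7%: one NFD ("D")
```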


Author(s):  
Amit P. Date
Archana S. Borkar
Rupesh T. Badwaik
Riaz A. Siddiqui
Tanaji R. Shende
...

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted among 148 second-year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response was awarded zero. Each item was analyzed in a Microsoft Excel sheet for three parameters: DIF I, DI, and DE. Results: In the present study, the mean±SD difficulty index (%), discrimination index, and distractor efficiency (%) were 64.54±19.63, 0.26±0.16 and 66.54±34.59, respectively. Of the 40 items, a large proportion had an acceptable difficulty level (70%) and discriminated well between higher- and lower-ability students (77.5%). Eighty percent of items had zero or one non-functional distractor (NFD). Conclusions: The study showed that item analysis is a valid tool to identify quality items; incorporated regularly, it can help to develop a very useful, valid and reliable question bank.
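Identifying "good MCQs" from the computed indices amounts to checking each item against conventional cut-offs. The sketch below assumes common bands (acceptable DIF I 30-70%, DI ≥ 0.25), consistent with the ranges this abstract uses, but the thresholds are assumptions, not quoted from the paper.

```python
def classify_item(dif_percent, di):
    """Flag an item for the question bank, revision, or discarding."""
    ok_difficulty = 30 <= dif_percent <= 70   # assumed acceptable difficulty band
    ok_discrimination = di >= 0.25            # assumed minimum discrimination
    if ok_difficulty and ok_discrimination:
        return "keep for question bank"
    if di < 0:
        return "discard (negative discrimination)"
    return "revise"

# Illustrative (DIF I %, DI) pairs, not the study's items.
for item in [(64.5, 0.26), (85.0, 0.10), (50.0, -0.05)]:
    print(item, "->", classify_item(*item))
```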


2021
Vol 4 (2)
pp. 178-186
Author(s):  
Budi Mulyati

The purpose of this study was to analyze the essay items given as the final exam in Introductory Accounting 1. The exam was given to nineteen first-semester students in the 2020-2021 academic year. This study used a descriptive method with a quantitative approach. For the analysis, item analysis techniques were used, consisting of an analysis of the difficulty level of the items and an analysis of their discriminating power. The analysis showed that, by difficulty index, 50% of the questions were easy and 50% were of average difficulty. Based on the discriminating power index, 33.3% of the questions needed revision and 67.7% were not good.
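For essay items, difficulty and discriminating power are usually computed from point scores rather than right/wrong answers. The sketch below assumes the common conventions (difficulty = mean score / maximum score; discriminating power = difference of upper- and lower-group means, scaled by the maximum); the scores are invented, not the study's data.

```python
import statistics

def essay_difficulty(scores, max_score):
    """Mean score as a fraction of the maximum; higher = easier item."""
    return statistics.mean(scores) / max_score

def essay_discrimination(upper_scores, lower_scores, max_score):
    """Difference between upper- and lower-group means, scaled to 0..1."""
    return (statistics.mean(upper_scores) - statistics.mean(lower_scores)) / max_score

# Hypothetical scores (max 10) for one essay item, 19 students split by total mark.
upper = [9, 8, 9, 7, 8, 9, 8, 7, 9]
lower = [6, 7, 5, 6, 7, 6, 5, 6, 7, 6]
print(essay_difficulty(upper + lower, 10))     # ~0.71 -> classified "easy"
print(essay_discrimination(upper, lower, 10))  # ~0.21 -> needs revision
```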


2020
Vol 19 (1)
Author(s):  
Surajit Kundu
Jaideo M Ughade
Anil R Sherke
Yogita Kanwar
Samta Tiwari
...

Background: Multiple-choice questions (MCQs) are the most widely accepted tool for the evaluation of comprehension, knowledge, and application among medical students. Single-best-response MCQs (items) can assess higher orders of cognition. It is essential to develop valid and reliable MCQs, as flawed items will interfere with unbiased assessment. The present paper attempts to discuss the art of framing well-structured items, drawing on the provided references, and puts forth a practice for committed medical educators to improve the skill of writing quality MCQs through enhanced Faculty Development Programs (FDPs). Objectives: The objective of the study is also to test the quality of MCQs by item analysis. Methods: In this study, 100 MCQs from set I or set II were distributed to 200 MBBS students of Late Shri Lakhiram Agrawal Memorial Govt. Medical College Raigarh (CG) for item analysis. Set I and set II consisted of MCQs written by 60 medical faculty before and after the FDP, respectively. All MCQs had a single stem with three incorrect options and one correct answer. The data were entered in Microsoft Excel 2016 for analysis. The difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE) were the item analysis parameters used to evaluate the impact of adhering to the guidelines for framing MCQs. Results: The mean difficulty index, discrimination index, and distractor efficiency were 56.54%, 0.26, and 89.93%, respectively. Among the 100 items, 14 were of higher difficulty (DIF I < 30%), 70 were of moderate difficulty, and 16 were easy (DIF I > 60%). A total of 10 items had very good DI (≥0.40), 32 had recommended values (0.30-0.39), and 25 were acceptable with changes (0.20-0.29). Of the 100 MCQs, 27 had a DE of 66.66% and 11 had a DE of 33.33%. Conclusions: In this study, higher cognitive-domain MCQs increased after training, recurrent-type MCQs decreased, and MCQs with item-writing flaws were reduced, making the results statistically significant. Nine MCQs satisfied all the criteria of item analysis.
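The DE values of 66.66% and 33.33% reported here follow directly from counting non-functional distractors (NFDs) on items with three distractors each. The sketch below assumes the usual linear mapping DE = (3 − NFD)/3 × 100, a common convention rather than a rule quoted from this paper.

```python
def distractor_efficiency_from_nfd(nfd, n_distractors=3):
    """DE as the share of distractors that remain functional."""
    return (n_distractors - nfd) / n_distractors * 100

for nfd in range(4):
    print(nfd, "NFDs ->", round(distractor_efficiency_from_nfd(nfd), 2), "% DE")
# 0 -> 100.0, 1 -> 66.67, 2 -> 33.33, 3 -> 0.0
```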


2017
Author(s):  
Abdulaziz Alamri
Omer Abdelgadir Elfaki
Karimeldin A Salih
Suliman Al Humayed
Fatmah Mohammed Ahmad Althebat
...

BACKGROUND: Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency and test reliability. OBJECTIVE: The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS: A cross-sectional study design was used. Item analysis data from 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency and reliability coefficient were entered in MS Excel 2010, and descriptive statistics were computed. RESULTS: Twenty-one tests were analyzed. Overall, 7% of the items across all tests were difficult, 35% were easy and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index ranged from 39% to 98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS: Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests for specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
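The KR-20 reliability coefficient cited above is conventionally computed as KR-20 = k/(k−1) × (1 − Σ p_i q_i / σ²_total) for k dichotomously scored items. A minimal sketch with invented response data, not this study's:

```python
import statistics

def kr20(item_matrix):
    """item_matrix: one list of 0/1 item scores per student."""
    k = len(item_matrix[0])   # number of items
    n = len(item_matrix)      # number of students
    # Sum of p_i * q_i, where p_i is the proportion correct on item i.
    pq_sum = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_matrix) / n
        pq_sum += p * (1 - p)
    # Population variance of the total scores.
    totals = [sum(row) for row in item_matrix]
    var_total = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
]
print(round(kr20(scores), 2))  # interpreted against ranges like 0.47-0.97 above
```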


Author(s):  
Novi Maulina
Rima Novirianthy

Background: Assessment and evaluation of students is an essential component of the teaching and learning process. Item analysis is the technique of collecting, summarizing, and using students' response data to assess the quality of a Multiple Choice Question (MCQ) test by measuring the indices of difficulty and discrimination, as well as distractor efficiency. Peer-review practices improve the validity of assessments used to evaluate student performance. Method: We analyzed 150 students' responses to 100 MCQs in a block examination for difficulty index (p), discrimination index (D) and distractor efficiency (DE) using Microsoft Excel formulas. The correlation of p and D was analyzed using the Spearman correlation test in SPSS 23.0. The results were used to evaluate the peer-review strategy. Results: The median difficulty index (p) was 54%, within the excellent range (p 40-60%), and the mean discrimination index (D) was 0.24, which is reasonably good. There were 7 items with excellent p (40-60%) and excellent D (≥0.4). Nineteen items had an excellent discrimination index (D≥0.4). However, there were 9 items with a negative discrimination index and 30 items with a poor discrimination index, which should be fully revised. Forty-two items had 4 non-functioning distractors (DE 0%), which suggests that teachers should be more precise and careful in creating distractors. Conclusion: Based on item analysis, some items must be fully revised. For better test quality, feedback and suggestions for the item writer should also be provided as part of the peer-review process, on the basis of item analysis.
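The Spearman correlation between p and D that the study ran in SPSS 23.0 can be reproduced with any statistics library. A minimal sketch using scipy, with illustrative index values rather than the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical per-item indices: difficulty (%) and discrimination.
p_values = [54, 40, 62, 75, 30, 48, 58, 66]
d_values = [0.24, 0.41, 0.18, 0.05, 0.35, 0.40, 0.22, 0.12]

rho, p_sig = spearmanr(p_values, d_values)
print(f"rho = {rho:.2f}, p = {p_sig:.3f}")  # sign shows how difficulty relates to D
```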


Author(s):  
Laís Büttner Sartor
Luana Lanzarini da Rosa
Kristian Madeira
Maria Laura Rodrigues Uggioni
Olavo Franco Ferreira Filho
...

Abstract: Introduction: The Progress Test was created to address the need to measure the consolidation of knowledge over the years of medical school. The test is administered periodically to all students in a curriculum, assessing students' cognitive growth throughout their undergraduate journey. In addition to assessing students individually, the test evaluates the institution, showing in which areas its curriculum should be improved. The aim was to assess the perception of the Progress Test among students of the Universidade do Extremo Sul Catarinense. Methods: A cross-sectional study was performed. Data were collected through questionnaires created by the researchers and applied, from October 15th to November 30th, 2018, to medical students who had taken the Progress Test at least once. The statistical analysis was performed with a 95% confidence interval. Results: A response rate of 70.41% was obtained, with a total of 424 questionnaires included in the research. Demographic data showed a predominance of female gender (60.4%) and white ethnicity (96.2%), with a mean age of 23 years. In all semesters (early, intermediate and final), the participants knew the goal of the Progress Test, and most students considered it important. The majority of students attributed their worst performance in the test to clinical surgery and collective health. In clinical medicine, pediatrics, and gynecology-obstetrics, students in the intermediate and final semesters were satisfied with their level of knowledge. "To evaluate the student's progress/performance" was highlighted as the most positive point; among the negative ones, "decrease the number of questions so the test is not as extensive" was emphasized. Conclusion: The students in the sample consider the Progress Test important and know its purpose. Students in the final third of medical school felt most prepared for the test. The main fields to which the students attributed their worst performance were clinical surgery and collective health; regarding clinical medicine, pediatrics, gynecology and obstetrics, the students were satisfied with their knowledge.
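The abstract states only that the analysis used a 95% confidence interval. As a hedged sketch, a normal-approximation interval for one of the reported proportions (60.4% female among 424 responses) could be computed as below; the interval method is an assumption, since the paper does not name one.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% CI for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

low, high = proportion_ci(0.604, 424)
print(f"95% CI: {low:.3f} to {high:.3f}")  # roughly 0.557 to 0.651
```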


Author(s):  
Mitayani Purwoko
Trisnawati Mundijo

Background: Students' cognitive ability can be assessed using MCQs. The aim of this study was to evaluate the quality of MCQs as an assessment method in the Medical Faculty of Muhammadiyah University Palembang. Method: This study was designed as a cross-sectional, descriptive observational study. The sample consisted of the MCQ assessments in the Genetics and Molecular Biology Module from academic years 2013/2014 to 2015/2016, totaling 299 questions. Item analysis was done manually. Results: The item analysis showed that 61.2% of questions were recall-type. This indicates that question construction was poor and tested only the lower cognitive levels. There were 45.2% ideal questions with a 30-70% difficulty index, and 23.1% of questions had a distractor efficiency of 100%. More than half of the questions (56.2%) should be revised. These revision-needed questions were distributed equally across the easy, ideal, and hard difficulty levels, and they had a lower mean distractor efficiency than the good questions. Conclusion: MCQs as an assessment method have not yet reached the intended target, because many questions need revision. The faculty should strengthen lecturers' development in writing good MCQs.

