ITEM ANALYSIS AND PEER-REVIEW EVALUATION OF SPECIFIC HEALTH PROBLEMS AND APPLIED RESEARCH BLOCK EXAMINATION

Author(s):  
Novi Maulina ◽  
Rima Novirianthy

Background: Assessment and evaluation of students is an essential component of the teaching and learning process. Item analysis is the technique of collecting, summarizing, and using students' response data to assess the quality of a Multiple Choice Question (MCQ) test by measuring indices of difficulty and discrimination, as well as distractor efficiency. Peer-review practices improve the validity of assessments used to evaluate student performance. Method: We analyzed 150 students' responses to 100 MCQs in the Block Examination for difficulty index (p), discrimination index (D), and distractor efficiency (DE) using Microsoft Excel formulas. The correlation of p and D was analyzed using the Spearman correlation test in SPSS 23.0. The results were then used to evaluate the peer-review strategy. Results: The median difficulty index (p) was 54%, within the excellent range (p 40-60%), and the mean discrimination index (D) was 0.24, which is reasonably good. There were 7 items with excellent p (40-60%) and excellent D (≥0.4). Nineteen items had an excellent discrimination index (D ≥ 0.4). However, there were 9 items with a negative discrimination index and 30 items with a poor discrimination index, which should be fully revised. Forty-two items had 4 functioning distractors (DE 0%), which suggests that item writers should be more precise and careful in creating distractors. Conclusion: Based on item analysis, there were items that should be fully revised. For better test quality, feedback and suggestions for the item writer should also be provided as part of a peer-review process grounded in item analysis.
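The indices reported above can be reproduced from a scored response matrix. Below is a minimal Python sketch, assuming a binary (1 = correct) student-by-item matrix; the 27% upper/lower group split and all variable names are illustrative assumptions, not the authors' Excel formulas, and the data are random stand-ins shaped like the study (150 students, 100 items).

```python
import numpy as np
from scipy.stats import spearmanr

def item_indices(scores: np.ndarray, group_frac: float = 0.27):
    """scores: (n_students, n_items) 0/1 matrix; returns per-item (p, D)."""
    totals = scores.sum(axis=1)                  # total score per student
    order = np.argsort(totals)                   # students sorted low -> high
    k = max(1, int(len(totals) * group_frac))    # size of upper/lower groups
    lower, upper = scores[order[:k]], scores[order[-k:]]
    p = scores.mean(axis=0) * 100                # difficulty index, percent correct
    D = upper.mean(axis=0) - lower.mean(axis=0)  # discrimination index
    return p, D

# Stand-in data, not the study's real responses.
rng = np.random.default_rng(0)
scores = (rng.random((150, 100)) < 0.55).astype(int)
p, D = item_indices(scores)
rho, pval = spearmanr(p, D)  # correlation of p and D, as in the abstract
print(f"median p = {np.median(p):.1f}%, mean D = {D.mean():.2f}, Spearman rho = {rho:.2f}")
```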

Author(s):  
Abhijeet S. Ingale ◽  
Purushottam A. Giri ◽  
Mohan K. Doibale

Background: Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. However, it is said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address the learning difficulties of students, and to identify the low achievers in the test. Methods: One hundred MBBS students from a Government medical college were examined. A test comprising thirty MCQs was administered. All items were analysed for difficulty index, discrimination index, and distractor efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority (80%) of items was within the acceptable range. 63% of items showed an excellent discrimination index. Distractor efficiency was satisfactory overall. Conclusions: Multiple choice questions of average difficulty that also have high discriminating power and good distractor efficiency should be incorporated into students' examinations.
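For readers wanting to reproduce this kind of binning, the sketch below assigns each item to a difficulty and a discrimination category. The cut-offs (30-70% acceptable difficulty, D ≥ 0.35 excellent) are common conventions assumed here; the paper does not state its exact thresholds.

```python
def classify_item(p: float, D: float) -> tuple[str, str]:
    """p: difficulty index in percent; D: discrimination index. Assumed cut-offs."""
    if p < 30:
        difficulty = "too difficult"
    elif p <= 70:
        difficulty = "acceptable"
    else:
        difficulty = "too easy"
    if D >= 0.35:
        discrimination = "excellent"
    elif D >= 0.25:
        discrimination = "good"
    elif D >= 0.15:
        discrimination = "marginal"
    else:
        discrimination = "poor"
    return difficulty, discrimination

print(classify_item(55.0, 0.42))  # -> ('acceptable', 'excellent')
```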


Author(s):  
Manju K. Nair ◽  
Dawnji S. R.

Background: Carefully constructed, high-quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index, and number of non-functional distractors in single best response type questions. Methods: 40 single best response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged with the highest score on top and the lowest score at the bottom; only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010. The difficulty index, discrimination index, and number of non-functional distractors per item were calculated. Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of items were good, with a difficulty index between 30% and 70%. 25% of items were difficult and 2.5% were easy. 27.5% of items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors. Conclusions: This study emphasises the need for improving the quality of multiple choice questions. Repeated evaluation by item analysis and modification of non-functional distractors may be performed to enhance the standard of teaching in Pharmacology.
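The upper-third/lower-third procedure described in the Methods can be expressed compactly. The sketch below is an assumed reconstruction, not the authors' spreadsheet: it scores raw option choices against a key, splits examinees into thirds by total score, and counts non-functional distractors, taking the common <5% selection convention as the non-functionality threshold.

```python
import numpy as np

def thirds_analysis(choices: np.ndarray, key: np.ndarray):
    """choices: (n_students, n_items) option indices 0-3; key: (n_items,) correct options."""
    correct = (choices == key).astype(int)
    totals = correct.sum(axis=1)
    order = np.argsort(totals)
    k = len(order) // 3                         # size of upper and lower thirds
    upper, lower = correct[order[-k:]], correct[order[:k]]
    p = (upper.mean(axis=0) + lower.mean(axis=0)) / 2 * 100  # difficulty index (%)
    D = upper.mean(axis=0) - lower.mean(axis=0)              # discrimination index
    nfd = []                                    # non-functional distractors per item
    for item in range(choices.shape[1]):
        picks = np.bincount(choices[:, item], minlength=4) / choices.shape[0]
        distractors = [o for o in range(4) if o != key[item]]
        nfd.append(sum(picks[o] < 0.05 for o in distractors))  # assumed <5% rule
    return p, D, nfd
```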


2000 ◽  
Vol 8 ◽  
pp. 48 ◽  
Author(s):  
Orlan Lee

The Research Assessment Exercises (RAEs) in hugely expanded universities in Britain and Hong Kong attempt mammoth scale ratings of "quality of research." If peer review on that scale is feasible for "quality of research," is it less so for "quality of teaching"? The lessons of the Hong Kong Teaching and Learning Quality Process Reviews (TLQPRs), of recent studies on the influence of grade expectation and workload on student ratings, of attempts to employ agency theory both to improve teaching quality and raise student ratings, and of institutional attempts to refine the peer review process, all suggest that we can "put teaching on the same footing as research" and include professional regard for teaching content and objectives, as well as student ratings of effectiveness and personality appeal, in the process.


2017 ◽  
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency, and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS A cross-sectional study design was used. Item analysis data of 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency, and reliability coefficient were entered in MS Excel 2010, and descriptive statistics were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items among all the tests were difficult, 35% were easy, and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index among all tests ranged from 39% to 98%. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
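The KR-20 reliability coefficient reported above is straightforward to compute from a scored matrix. A minimal sketch, assuming a binary student-by-item matrix and sample variance (ddof=1) for the total scores; both choices are conventions assumed here, not details taken from the paper:

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20 for a (n_students, n_items) 0/1 matrix."""
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1 - p                                    # proportion incorrect per item
    var_total = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)
```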


2019 ◽  
Vol 7 (1) ◽  
pp. 499-511
Author(s):  
April Marqueses Obon ◽  
Kristel Anne M. Rey

Introduction: Multiple Choice Questions (MCQs) are used extensively as a test format in nursing education. However, constructing MCQs remains a challenge for educators, and to avoid issues with quality they should undergo item analysis. Thus, the study evaluated item and test quality using the difficulty index (DIF) and discrimination index (DI), with distractor efficiency (DE); determined reliability using the Kuder-Richardson 20 coefficient (KR20); and identified which valid measure was developed. Methodology: The study was conducted among 41 level two nursing students in the College of Nursing. The qualifying examination comprised 194 MCQs. Data were entered in Microsoft Excel 2010 and SPSS 22 and analyzed. Results: According to the DIF, out of 194 items, 115 (59.53%) had the right difficulty and 79 (40.7%) were difficult. Regarding the DI, 17 (8.8%) MCQs were considered very good items for discriminating between low- and high-performing students, while 21 (10.8%), 32 (16.5%), 24 (12.4%), and 100 (51.5%) were good, fair, potentially poor, and potentially very poor items, respectively. As for distractor effectiveness, 57 items (29.4%) had 100% DE, while 65 (33.5%), 49 (25.3%), and 23 (11.9%) had 66.6%, 33.3%, and 0%, respectively. The reliability of the test using KR20 was 0.9, suggesting that the test is highly reliable with good internal consistency. After careful analysis of each item, 55 (28.35%) items were retained without revision. Further, the stems of 24 (12.37%) items, the distractors of 66 (34.02%) items, and both the stems and distractors of 46 (23.71%) items were modified, and 3 (1.55%) items were removed. Discussion: The researcher recommends an analysis between upper and lower scorers and its relationship to DE. For future study, it would be beneficial to explore other factors, such as students' ability, quality of instruction, and number of students, in relation to MCQ quality.
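The four DE levels reported (100%, 66.6%, 33.3%, 0%) correspond to the number of non-functional distractors among an item's three distractors. A hedged sketch of that mapping, where the <5% selection threshold for non-functionality is an assumed convention:

```python
import numpy as np

def distractor_efficiency(choices: np.ndarray, key: int) -> float:
    """choices: option indices 0-3 for one item; key: the correct option."""
    picks = np.bincount(choices, minlength=4) / len(choices)  # selection rates
    nfd = sum(picks[o] < 0.05 for o in range(4) if o != key)  # non-functional count
    return {0: 100.0, 1: 66.6, 2: 33.3, 3: 0.0}[nfd]
```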


Kilat ◽  
2018 ◽  
Vol 7 (1) ◽  
pp. 15-23
Author(s):  
Redaksi Tim Jurnal

This paper discusses the design of an application for item analysis using a quantitative method with the classical approach, as an evaluation aid for teachers in determining the quality of questions in an exam or test. The classical approach is the process of examining each item in the test based on the students' answers, calculating the Difficulty Index (Dif I) and the Discrimination Index (DI) of each question. The Difficulty Index is the ratio between the number of students who answered an item (or question) correctly and the total number of students who participated in the test. The Discrimination Index is a measure of how well an item (or question) is able to distinguish between stronger and weaker students. The data used in this research were obtained from class VII of SMPN 10 (Junior High School) Makassar, specifically for the mathematics subject. The data include the maths questions that were tested, the answers given by the students of class VII, and data about the students. The results of the item analysis indicate whether a question should be accepted, corrected, or rejected. The application produced by this research is expected to assist teachers in performing item analysis, the results of which can help them compile better-quality questions in the future.
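The accept/correct/reject verdict the application produces can be sketched as a simple rule over the two statistics. The specific cut-offs below are assumptions drawn from common classical-test-theory practice, not values taken from the article.

```python
def item_verdict(dif_i: float, di: float) -> str:
    """dif_i: proportion answering correctly (0-1); di: discrimination index."""
    if di < 0.20 or dif_i < 0.10 or dif_i > 0.90:
        return "rejected"                        # too easy/hard or non-discriminating
    if di < 0.30 or dif_i < 0.30 or dif_i > 0.70:
        return "corrected"                       # usable after revision
    return "accepted"

print(item_verdict(0.55, 0.45))  # -> 'accepted'
```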


Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple Choice Questions (MCQs) are one of the most preferred assessment tools in medical education, used in both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. Thus, the aim of this study was to assess the quality of MCQs by item analysis and to identify valid test items to be included in the question bank for further use. Materials and methods: A formative assessment of first-year MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), discrimination index (d), and number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of items was well within the acceptable range, 7.5% (3) of items were too difficult, and 27.5% (11) were too easy. The discrimination index (d) of 70% (28) of items fell in the recommended category, whereas 10% (4) had an acceptable and 20% (8) a poor discrimination index. Out of 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index, and distractor effectiveness, 42.5% (17) of items were found ideal for inclusion in the question bank. Conclusion: Item analysis remains an essential tool that should be practised regularly to improve the quality of assessment methods, as well as a means of obtaining feedback for instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors
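Pooling the three indices into a question-bank decision, as the Results describe, might look like the sketch below. The thresholds (P 30-70%, d ≥ 0.25, no non-functional distractors) are illustrative assumptions rather than the authors' exact criteria.

```python
def ideal_for_bank(P: float, d: float, nfd_count: int) -> bool:
    """P: difficulty index (%); d: discrimination index; nfd_count: non-functional distractors."""
    return 30 <= P <= 70 and d >= 0.25 and nfd_count == 0

print(ideal_for_bank(P=52.0, d=0.38, nfd_count=0))  # -> True
```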


Author(s):  
Richa Garg ◽  
Vikas Kumar ◽  
Jyoti Maria

Background: Assessment is a dominant motivator that directs and drives student learning. Different methods of assessment are used to assess medical knowledge in undergraduate medical education. Multiple choice questions (MCQs) are being used increasingly because of their higher reliability, validity, and ease of scoring. Item analysis enables good MCQs to be identified based on the difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Methods: Second-year MBBS students took a formative assessment test comprising 50 "one best response type" MCQs worth 50 marks, without negative marking. Each MCQ had a single stem with four options: one correct answer and three incorrect alternatives (distractors). Three question paper sets were prepared by shuffling the sequence of questions, and one of the three sets was given to each student to avoid copying from neighbouring students. In total, 50 MCQs and 150 distractors were analyzed, and the DIF I, DI, and DE indices were calculated. Results: The total scores of 87 students ranged from 17 to 48 (out of 50). The mean difficulty index (DIF I) (%) was 71.6±19.4; 28% of MCQs were average and "recommended" (DIF I 30-70%). The mean discrimination index (DI) was 0.3±0.17; 16% of MCQs met the "good" and 50% the "excellent" criteria, while the rest fell in the "discard/poor" category according to the DI criteria. The mean distractor efficiency (DE) (%) was 63.4±33.3, and 90% of the items had a DE between 33% and 100%. MCQs with a lower difficulty index (<70) had higher distractor efficiency (93.8% vs. 6.2%, p=0.004). Conclusions: Item analysis provided the data needed to improve question formulation and helped in revising and improving the quality of the items and the test. Questions with a lower difficulty index (<70) were significantly associated with a higher discrimination index (>0.15) and higher distractor efficiency.
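The reported association between difficulty index and distractor efficiency (p=0.004) is the kind of result a chi-square test on a 2x2 contingency table yields. The sketch below shows only the mechanics; the cell counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: DIF I < 70 vs >= 70; columns: higher DE (>= 66.6%) vs lower DE.
# Hypothetical counts for illustration only.
table = np.array([[15, 1],
                  [27, 7]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```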


2021 ◽  
Vol 8 (8) ◽  
Author(s):  
Ernestine Wirngo Tani

Evaluation is an essential facet of education and plays a significant role in giving feedback to stakeholders; pedagogy is not complete without learner assessment. Objective tests are used extensively as a test format in primary school. However, conceiving and constructing them remains a challenge for most teachers, which casts doubt on their quality. To mitigate quality issues, every test format should undergo item or task analysis. This study set out to evaluate the item and test quality of a national achievement test of English language using the difficulty index (DIF) and discrimination index (DI), and to identify which tasks were appropriate for the respective levels. The study made use of data collected by the Ministry of Basic Education aimed at measuring the true scores of learners in order to plan new pedagogic tools for improving the quality of reading and mathematics among primary school pupils. Classical Test Theory (CTT), which uses two main statistics, the item difficulty index and the discrimination index, was employed. An ex-post-facto analysis showed that the national achievement test was easy, depicting good performance by pupils, whereas in reality the reverse is true. About 90% of the pupils answered the items correctly, making those items virtually useless for discriminating among pupils. Tasks such as measurement and size for class three, and addition and subtraction and familiar-word identification for class five, should be completely discarded, as their DIF stood at 1.00. Given that quality control is important for test development, teachers are recommended to perform item analysis and to synchronize classroom instruction with test items to achieve instructional validity.


2017 ◽  
Vol 2 (2) ◽  
pp. 97-104
Author(s):  
Desrin Lebagi ◽  
S. Sumardi ◽  
S. Sudjoko

One of the essential phases in language learning is measurement, and a test, as a measurement tool, must therefore be well constructed. The quality of a test can be determined through test item analysis. However, teachers sometimes neglect test item analysis because of time limitations and other responsibilities. In response to this problem, this research aimed to describe the quality of test items, including the difficulty index, the discrimination index, the distractor index, and the reliability of the test, as well as the washback of a teacher-made test on students' motivation in learning English. It was conducted at Gamaliel Elementary School in the academic year 2016-2017. This case study utilized purposive sampling. The researcher collected data through interviews, observation, and document analysis. The informants were an English teacher and students of Gamaliel Elementary School; the documents were students' answer sheets. In analyzing the test items, the researchers used the ITEMAN program. The results show that the teacher-made test can be classified as a good test. The test brings both positive and negative washback on students' motivation in learning. It is therefore recommended that the teacher conduct test analysis as a way of evaluating and improving both the teaching and learning process and the test itself, and to encourage the students to study even when they are not confronted with a test.

