THE QUALITY OF TEACHER-MADE TEST IN EFL CLASSROOM AT THE ELEMENTARY SCHOOL AND ITS WASHBACK IN THE LEARNING

2017 ◽  
Vol 2 (2) ◽  
pp. 97-104
Author(s):  
Desrin Lebagi ◽  
S. Sumardi ◽  
S. Sudjoko

One of the essential phases in language learning is measurement, so a test, as a measurement tool, must be well constructed. The quality of a test can be determined through test item analysis. However, teachers often skip item analysis because of time limitations and other responsibilities. Addressing this problem, this research aimed to describe the quality of test items, including the difficulty index, the discrimination index, the distractor index, and the reliability of the test, as well as the washback of a teacher-made test on students’ motivation in learning English. It was conducted at Gamaliel Elementary School in the 2016-2017 academic year. This case study used purposive sampling, with interviews, observation, and document analysis as the data-collection techniques. The informants were an English teacher and students of Gamaliel Elementary School; the documents were students’ answer sheets. The test items were analyzed with the ITEMAN program. The results show that the teacher-made test can be classified as a good test, and that it brings both positive and negative washback on students’ motivation in learning. It is therefore recommended that the teacher conduct item analysis as a way of evaluating and improving both the teaching-learning process and the test itself, and that students be encouraged to study even when they are not confronted with a test.
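The difficulty index these studies report is, in its standard form, simply the proportion of examinees who answer an item correctly. A minimal sketch of that calculation (the data are hypothetical, not taken from the study):

```python
def difficulty_index(responses):
    """Proportion of examinees answering the item correctly (0..1).
    Higher values mean an easier item."""
    return sum(responses) / len(responses)

# Hypothetical item scores for 10 students: 1 = correct, 0 = incorrect
item = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
p = difficulty_index(item)  # 0.7 -> a fairly easy item
```

Tools such as ITEMAN compute this value (often labelled "Prop. Correct") for every item in the answer-sheet file.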

2021 ◽  
Vol 6 (2) ◽  
pp. 256
Author(s):  
Sayit Abdul Karim ◽  
Suryo Sudiro ◽  
Syarifah Sakinah

Apart from teaching, English language teachers need to assess their students by giving tests to gauge their achievement. In general, teachers rarely conduct item analysis on their tests; as a result, they have no idea about the quality of the tests they distribute to students. The present study attempts to determine the level of difficulty (LD) and the discriminating power (DP) of the multiple-choice (MC) test items constructed by an English teacher for a reading comprehension test, using test item analysis. This study employs a qualitative approach. For this purpose, a 50-item MC reading comprehension test was obtained from the students’ test results. Thirty-five grade-eight students, 15 male and 20 female, of junior high school 2 Kempo in West Nusa Tenggara Province took part in the MC test try-out. The findings revealed that 16 of the 50 test items were rejected due to poor or very poor difficulty and discrimination indices. Meanwhile, 12 items need to be reviewed because of their mediocre quality, and 11 items are claimed to be of good quality. In addition, 11 of the 50 items were considered excellent, as their DP scores ranged from 0.44 to 0.78. The implications of the present study shed light on the quality of teacher-made test items, especially MC tests.
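The discriminating power reported here is conventionally computed as the upper-lower index: the difference between the proportion of high scorers and low scorers (often the top and bottom 27% by total score) who got the item right. A sketch of that convention, with invented data:

```python
def discrimination_index(item_correct, total_scores, frac=0.27):
    """Upper-lower discrimination index D = p_upper - p_lower, using
    the top and bottom `frac` of examinees ranked by total score.
    D near 1 means the item separates strong from weak students."""
    n = len(total_scores)
    k = max(1, round(n * frac))
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    p_low = sum(item_correct[i] for i in lower) / k
    p_up = sum(item_correct[i] for i in upper) / k
    return p_up - p_low

# Hypothetical: only the strongest students answered this item correctly
item = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
totals = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
d = discrimination_index(item, totals)  # 1.0 -> perfect discrimination
```

A DP of 0.44–0.78, as in the excellent items above, means the top group outscored the bottom group on the item by 44 to 78 percentage points.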


Author(s):  
Ismail Burud ◽  
Kavitha Nagandla ◽  
Puneet Agarwal

Background: Item analysis is a quality-assurance process of examining the performance of individual test items that measures the validity and reliability of exams. This study was performed to evaluate the quality of test items with respect to their difficulty index (DFI), discrimination index (DI), and functional and non-functional distractors (FD and NFD). Methods: This study was performed on a summative examination taken by 113 students. The analysis included 120 one-best-answer items (OBAs) and 360 distractors. Results: Of the 360 distractors, 85 were chosen by fewer than 5% of students, giving a distractor efficiency of 23.6%. About 47 (13%) items had no NFDs, while 51 (14%), 30 (8.3%), and 4 (1.1%) items contained 1, 2, and 3 NFDs respectively. The majority of items showed an excellent difficulty index (50.4%, n=42) and fair discrimination (37%, n=33). Questions with excellent difficulty and discrimination indices showed statistical significance with 1 NFD and 2 NFDs (p=0.03). Conclusions: Post-hoc evaluation of item performance in an exam is one of the quality-assurance methods for identifying the best-performing items for a quality question bank. Distractor efficiency gives information on the overall quality of an item.
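The <5% criterion used above is the standard cut-off for a non-functional distractor: a wrong option that fewer than 5% of examinees select is doing no work. A sketch of that classification (hypothetical response counts, not the study's data):

```python
def distractor_analysis(choice_counts, key, threshold=0.05):
    """Split an item's wrong options into functional distractors (chosen
    by at least `threshold` of examinees) and non-functional ones."""
    total = sum(choice_counts.values())
    functional, nonfunctional = [], []
    for option, count in choice_counts.items():
        if option == key:
            continue  # the keyed answer is not a distractor
        if count / total >= threshold:
            functional.append(option)
        else:
            nonfunctional.append(option)
    return functional, nonfunctional

# Hypothetical item answered by 100 students; the key is 'B'
counts = {"A": 20, "B": 55, "C": 22, "D": 3}
fd, nfd = distractor_analysis(counts, key="B")  # fd=['A','C'], nfd=['D']
```

Counting FDs across all items gives the distractor-efficiency figures reported in studies like this one (here, 2 of 3 distractors function, so option D would be flagged for revision).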


2019 ◽  
Vol 20 (2) ◽  
pp. 72-87
Author(s):  
Ujang Suparman
The objectives of this research are to analyze critically the quality of test items used in SMP and SMA (mid-semester, final-semester, and National Examination practice tests) in terms of overall reliability, level of difficulty, discriminating power, and the quality of answer keys and distractors. The test items were analyzed with the item-analysis program ITEMAN, using two types of descriptive statistics: one for analyzing the test items and another for analyzing the options. The findings are far from what is commonly believed: the quality of the majority of the test items, as well as of the answer keys and distractors, is unsatisfactory. Based on the results of the analysis, conclusions are drawn and recommendations are put forward.


Author(s):  
Abhijeet S. Ingale ◽  
Purushottam A. Giri ◽  
Mohan K. Doibale

Background: Item analysis is the process of collecting, summarizing, and using information from students’ responses to assess the quality of test items. However, it is often said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address the learning difficulties of students, and to identify the low achievers in the test. Methods: One hundred MBBS students from a Government medical college were examined. A test comprising thirty MCQs was administered. All items were analysed for difficulty index, discrimination index, and distractor efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority (80%) of items was within the acceptable range; 63% of items showed an excellent discrimination index, and distractor efficiency was overall satisfactory. Conclusions: Multiple choice questions with average difficulty, high discriminating power, and good distractor efficiency should be incorporated into students’ examinations.
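Alongside the upper-lower discrimination index used in these studies, a common alternative (not stated in this abstract, so treat it as a supplementary technique) is the point-biserial correlation between the 0/1 item score and the total test score. A self-contained sketch:

```python
import math

def point_biserial(item, totals):
    """Pearson correlation between a dichotomous (0/1) item score and
    the total test score. Values near 0 or negative flag an item that
    fails to discriminate between strong and weak examinees."""
    n = len(item)
    mi = sum(item) / n
    mt = sum(totals) / n
    cov = sum((i - mi) * (t - mt) for i, t in zip(item, totals)) / n
    si = math.sqrt(sum((i - mi) ** 2 for i in item) / n)
    st = math.sqrt(sum((t - mt) ** 2 for t in totals) / n)
    return cov / (si * st)

# Hypothetical: the two highest scorers got the item right
r = point_biserial([1, 1, 0, 0], [4, 3, 2, 1])  # about 0.89
```

A rule of thumb is to review items with a point-biserial below about 0.20, which roughly parallels the "poor discrimination" band in the studies above.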


2020 ◽  
Vol 5 (2) ◽  
pp. 491
Author(s):  
Amalia Vidya Maharani ◽  
Nur Hidayanto Pancoro Setyo Putro

Numerous studies have been conducted on item analysis of English tests. However, investigation of the characteristics of a good English final-semester test is still rare in several districts in East Java. This research sought to examine the quality of the English final semester test of the 2018/2019 academic year in Ponorogo. A total of 151 samples, in the form of students’ answers to the test, were analysed for item difficulty, item discrimination, and distractor effectiveness using the Quest program. This descriptive quantitative research revealed that the test does not have a good proportion of easy, medium, and difficult items. For item discrimination, the test had 39 excellent items (97.5%), which means the test could discriminate between high and low achievers. Moreover, the distractors could distract students, since 32 items (80%) had effective distractors. The findings of this research provide the insight that item analysis is an important process in constructing a test, since the quality of a test directly affects the accuracy of students’ scores.


2021 ◽  
Vol 3 (1) ◽  
pp. 11-20
Author(s):  
Ulfah Zahiroh ◽  
Pangoloan Soleman Ritonga

This research aimed at knowing the quality of test items in terms of validity, reliability, difficulty level, discriminating power, and distractor effectiveness. A quantitative descriptive method was used. Interviews and documentation were the data-collection techniques. The data sources were the even-semester exam questions in multiple-choice form, the student answer sheets, and the answer key. The Anates 4.0.9 program was used to analyze the quality of the test items. The analysis of the multiple-choice items on the final semester exam in Chemistry for the eleventh grade of State Islamic Senior High School 2 Kepulauan Meranti showed that: in the validity analysis there were 6 valid items (17%) and 29 non-valid items (83%); the reliability analysis yielded a reliability score of 0.955; in the difficulty-level analysis there were 12 easy items (34%), 17 medium items (49%), and 6 hard items (17%); in the discriminating-power analysis there were 4 very good items (11.5%), 1 good item (3%), 19 items (54%) that should be revised, and 11 items (31.5%) that should be eliminated; and in the distractor-effectiveness analysis there were 26 very good options (19%), 10 good options (7%), 25 poor options (18%), 55 bad options (39%), and 24 very bad options (17%). It could therefore be concluded that the quality of the test items was poor.
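The abstract does not say which reliability formula Anates applies, but for dichotomously scored multiple-choice data a standard choice is Kuder-Richardson 20 (KR-20). A sketch under that assumption, with a toy response matrix:

```python
def kr20(matrix):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item data.
    matrix: one list per student, one 0/1 entry per item."""
    n_items = len(matrix[0])
    n = len(matrix)
    totals = [sum(row) for row in matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in matrix) / n  # item difficulty
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var)

# Toy data: two students answer both items right, two answer both wrong,
# so the items agree perfectly and reliability is 1.0
matrix = [[1, 1], [1, 1], [0, 0], [0, 0]]
r = kr20(matrix)  # 1.0
```

A coefficient of 0.955, as reported above, indicates highly consistent total scores even though most individual items were judged weak on other criteria.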


Author(s):  
Novi Maulina ◽  
Rima Novirianthy

Background: Assessment and evaluation of students is an essential component of the teaching and learning process. Item analysis is the technique of collecting, summarizing, and using students’ response data to assess the quality of a Multiple Choice Question (MCQ) test by measuring the difficulty and discrimination indices, as well as distractor efficiency. Peer-review practices improve the validity of assessments in evaluating student performance. Methods: We analyzed 150 students’ responses to 100 MCQs in a block examination for the difficulty index (p), discrimination index (D), and distractor efficiency (DE) using Microsoft Excel formulas. The correlation of p and D was analyzed with the Spearman correlation test in SPSS 23.0. The results were used to evaluate the peer-review strategy. Results: The median difficulty index (p) was 54%, within the excellent range (p 40-60%), and the mean discrimination index (D) was 0.24, which is reasonably good. There were 7 items with both an excellent p (40-60%) and an excellent D (≥0.4). Nineteen items had an excellent discrimination index (D≥0.4). However, there were 9 items with a negative discrimination index and 30 items with a poor discrimination index, which should be fully revised. Forty-two items had 4 functioning distractors (DE 0%), which suggests the item writers should be precise and careful in creating distractors. Conclusion: Based on item analysis, some items need to be fully revised. For better test quality, feedback and suggestions for the item writers should also be provided as part of the peer-review process, on the basis of item analysis.


Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple Choice Questions (MCQs) are one of the most preferred assessment tools in medical education, as part of both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. The aim of this study was therefore to assess the quality of MCQs by item analysis and to identify valid test items to be included in the question bank for further use. Materials and methods: A formative assessment of first-year MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), discrimination index (d), and number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of items was well within the acceptable range; 7.5% (3) of items were too difficult, whereas 27.5% (11) were too easy. The discrimination index (d) of 70% (28) of items fell in the recommended category, whereas 10% (4) of items had an acceptable and 20% (8) a poor discrimination index. Of 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index, and distractor effectiveness, 42.5% (17) of items were found suitable for inclusion in the question bank. Conclusion: Item analysis remains an essential tool to be practiced regularly to improve the quality of assessment methods, as well as a tool for obtaining feedback for instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors
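The "acceptable" and "recommended" bands used across these studies are conventions, and the exact cut-offs vary from paper to paper. One widely cited set of bands (a sketch of the convention, not the specific thresholds this study used) can be coded directly:

```python
def classify_difficulty(p):
    """One common convention for the difficulty index p (proportion
    correct): below 0.30 too difficult, 0.30-0.70 acceptable,
    above 0.70 too easy. Cut-offs differ between studies."""
    if p < 0.30:
        return "too difficult"
    if p <= 0.70:
        return "acceptable"
    return "too easy"

def classify_discrimination(d):
    """Rough bands often attributed to Ebel: >=0.40 excellent,
    0.30-0.39 good, 0.20-0.29 marginal (revise), <0.20 poor (discard)."""
    if d >= 0.40:
        return "excellent"
    if d >= 0.30:
        return "good"
    if d >= 0.20:
        return "marginal"
    return "poor"
```

Applying such bands to every item yields the percentage breakdowns (too easy / acceptable / too difficult, excellent / good / poor) reported in the abstracts above.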


Author(s):  
Leni Amelia Suek

While almost half of teachers’ activities involve assessing their students, many teachers receive no assessment-literacy training. Hence, they are unable to produce good tests to measure students’ levels of knowledge and skill. This study analyzed the item difficulty and item discrimination of a test made by an English teacher at a junior high school in Kupang. It was descriptive qualitative research, and the research instruments were the test items, answer keys, and students’ answer sheets. The difficulty-index analysis revealed that more than half of the test items were easy, while only 2% were difficult. In terms of the discrimination index, only 10% of the test items were excellent and most (46%) were poor. These findings indicate that the English test had a poor item difficulty index and a low item discrimination index; hence, it did not fulfill the criteria of a good test and could not measure students’ true ability. It is highly recommended that teachers improve the test items and that the government provide assessment training for teachers so that they can produce good tests.

