استخدام نظرية الاستجابة للفقرة في بناء اختبار محكي المرجع في اللغة الإنجليزية (1) لطلبة جامعة القدس المفتوحة وفق نموذج (راش) = Using Item Response Theory for Constructing a Criterion-Referenced Test in English Language (1) for Al-Quds Open University Students According to Rasch Model

Author(s):
عبد الهادي وجيه صباح

2016, Vol. 44 (2), pp. 226-236
Author(s):
Joseph R. Miles
Brent Mallinckrodt
Daniela A. Recabarren

2017, Vol. 35 (2), pp. 297-317

Author(s):
Tanya Longabach
Vicki Peyton

K–12 English language proficiency tests that assess multiple content domains (e.g., listening, speaking, reading, writing) often have subsections based on these content domains; scores assigned to these subsections are commonly known as subscores. Testing programs face increasing customer demands for the reporting of subscores in addition to total test scores in today’s accountability-oriented educational environment. Although reporting subscores can provide much-needed information for teachers, administrators, and students about proficiency in the test domains, one of the major drawbacks of subscore reporting is their lower reliability as compared to the test as a whole. In addition, viewing language domains as if they were not interrelated, and reporting subscores without considering the relationship between domains, may contradict theories of language acquisition. This study explored several methods of assigning subscores to the four domains of a state English language proficiency test, including classical test theory (CTT)-based number correct, unidimensional item response theory (UIRT), augmented item response theory (A-IRT), and multidimensional item response theory (MIRT), and compared the reliability and precision of these methods across language domains and grade bands. The first two methods assessed proficiency in the domains separately, without considering the relationship between domains; the last two took that relationship into account. The reliability and precision of the CTT and UIRT methods were similar to each other and lower than those of A-IRT and MIRT for most domains and grade bands; MIRT was found to be the most reliable method. Policy implications and limitations of this study, as well as directions for further research, are discussed.
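The reliability gap the abstract describes is easy to see with the CTT-based method alone: a 10-item subscale is simply a shorter test than the 40-item total, so its internal-consistency estimate is lower. The sketch below (not from the study; all item counts, domain names, and simulation parameters are invented for illustration) computes Cronbach's alpha for each simulated domain and for the combined test:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_examinees, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
n, items_per_domain = 500, 10
domains = ["listening", "speaking", "reading", "writing"]

# Simulate a common ability plus per-item noise, then dichotomize
# into right/wrong responses at random item thresholds.
ability = rng.normal(size=(n, 1))
data = {}
for d in domains:
    latent = 0.8 * ability + 0.6 * rng.normal(size=(n, items_per_domain))
    data[d] = (latent > rng.normal(size=items_per_domain)).astype(int)

# Subscore reliability per domain vs. reliability of the whole test.
sub_alphas = {d: cronbach_alpha(data[d]) for d in domains}
total_alpha = cronbach_alpha(np.hstack([data[d] for d in domains]))
```

Under these assumptions the 40-item total is consistently more reliable than any 10-item subscale, which is the motivation for the augmented and multidimensional methods that borrow strength across correlated domains.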


Author(s):  
Stella Eteng-Uket

The study investigated the detection of differential item functioning (DIF), using item response theory, in the West African Senior School Certificate English language test in south-south Nigeria. Two research questions were formulated to guide the study. A descriptive survey design was used; the study population was 117,845 Senior Secondary 3 students in Edo, Delta, Rivers, and Bayelsa states, from which a sample of 1,309 (604 males, 705 females) was drawn through a multi-stage sampling technique. Two validated instruments, the Socio-Economic Status Questionnaire (SSQ) and the WASSCE/SSCE English Language Objective Test (ELOT), were used to collect data. The reliability indices of the instruments were estimated using the Cronbach's alpha and Kuder–Richardson 20 (KR-20) methods of internal consistency, with coefficient values of .84 for the English language objective test and .71 for the socio-economic status questionnaire, respectively. Data were analyzed using the chi-square and Lord's Wald test statistics implemented in the Item Response Theory for Patient-Reported Outcomes (IRTPRO) software, which provided answers to the research questions at the .05 level of significance. The analysis revealed that 13 items functioned differentially to a statistically significant degree between the male and female groups, and 23 items between the high and low socio-economic status groups. This corresponds to 18% DIF based on gender and 32% based on socio-economic status, indicating large DIF and items that are potentially biased. Based on the findings, recommendations were made; among them, that item response theory should be used as a DIF detection method by large-scale public examination bodies and test developers.
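The study's IRT-based Wald tests are run inside IRTPRO, but the underlying idea (flag items whose behavior differs between groups matched on ability) can be illustrated with the classical Mantel-Haenszel DIF statistic instead. This is a deliberately different, simpler method than the one the study used; every sample size, effect size, and variable name below is an invented assumption for the sketch:

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Mantel-Haenszel DIF chi-square for one dichotomous item.

    item  : 0/1 responses to the studied item
    total : total test score, used as the matching criterion
    group : 0 = reference group, 1 = focal group
    """
    item, total, group = map(np.asarray, (item, total, group))
    num = 0.0   # sum over score strata of (observed - expected) correct
    var = 0.0   # summed hypergeometric variance
    for s in np.unique(total):
        m = total == s
        a = np.sum(m & (group == 0) & (item == 1))  # reference correct
        b = np.sum(m & (group == 0) & (item == 0))  # reference incorrect
        c = np.sum(m & (group == 1) & (item == 1))  # focal correct
        d = np.sum(m & (group == 1) & (item == 0))  # focal incorrect
        n = a + b + c + d
        if n < 2:
            continue
        num += a - (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    if var == 0:
        return 0.0
    # Continuity-corrected chi-square; values above 3.84 flag DIF at .05
    return (abs(num) - 0.5) ** 2 / var

# Simulated data: 20 DIF-free items plus one item that is one logit
# harder for the focal group at the same ability level.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
theta = rng.normal(size=n)
probs = 1 / (1 + np.exp(-(theta[:, None] - rng.normal(0, 1, 20))))
resp = (rng.random((n, 20)) < probs).astype(int)
dif_item = (rng.random(n) < 1 / (1 + np.exp(-(theta - group)))).astype(int)

total = resp.sum(axis=1)
chi_fair = mantel_haenszel_dif(resp[:, 0], total, group)
chi_dif = mantel_haenszel_dif(dif_item, total, group)
```

The biased item produces a chi-square far above the 3.84 critical value, while the DIF-free item typically does not; IRT-based methods like the Lord's Wald test used in the study make the same comparison on estimated item parameters rather than matched score strata.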

