correction for guessing
Recently Published Documents


TOTAL DOCUMENTS: 32 (FIVE YEARS: 0)

H-INDEX: 10 (FIVE YEARS: 0)

2020 ◽  
Vol 20 (3) ◽  
Author(s):  
Maria Paz Espinosa ◽  
Javier Gardeazabal

Abstract: This paper analyzes gender differences in student performance on Multiple-Choice Tests (MCT). We report evidence from a field experiment suggesting that, when MCT use a correction-for-guessing formula to obtain test scores, women on average tend to omit more items, give fewer correct answers, and obtain lower grades than men. We find that the gender difference in average test scores is concentrated at the upper tail of the distribution of scores. In addition, gender differences depend strongly on the framing of the scoring rule.
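The correction-for-guessing formula referred to in this abstract is conventionally the formula score S = R − W/(k−1), where R is the number of correct answers, W the number of wrong answers, and k the number of options per item; omitted items score zero, which is why omission behavior matters under this scoring rule. A minimal sketch (the function name is illustrative, not taken from the paper):

```python
def corrected_score(num_correct: int, num_wrong: int, num_options: int) -> float:
    """Classical correction-for-guessing (formula scoring).

    Each wrong answer deducts 1/(k-1) points, where k is the number
    of options per item; omitted items contribute zero. Under random
    guessing on k options, the expected penalty cancels the expected
    gain, so a pure guesser's expected score is zero.
    """
    return num_correct - num_wrong / (num_options - 1)

# Example: on a 4-option test, 30 correct, 10 wrong, 10 omitted.
# Each wrong answer costs 1/3 of a point.
print(corrected_score(30, 10, 4))  # 30 - 10/3 ≈ 26.67
```

Because omissions score zero while wrong answers are penalized, a test taker's willingness to guess directly affects the final grade, which is the mechanism behind the gender difference the paper studies.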


2018 ◽  
Vol 8 (3) ◽  
pp. 567-598
Author(s):  
Didem Özdoğan ◽  
Nuri Doğan

This study examines the effect of self-assessment-based correction for chance success on the psychometric characteristics of a test. First, the data were cleared of chance success by means of the correction-for-guessing formula and by self-assessment, and then statistical analyses were conducted. Item discriminations increased when the correction-for-guessing formula was used, and showed variability when self-assessment was used. Test validity increased when the correction formula was used; with self-assessment, a slight decrease was observed. The study also examined the effect of correction for chance success on the IRT guessing parameter. The data that were not corrected for chance scores had higher guessing parameters than those corrected by self-assessment. In addition, the difference between the guessing parameters of the uncorrected data and of the data cleared of chance scores by means of self-assessment was significant. It was concluded that self-assessment-based correction for chance success has an advantage over the classical correction-for-guessing formula with respect to the psychometric characteristics of the test.


Author(s):  
John J. Barnard

This article briefly discusses how different measurement theories can be used to score responses to multiple-choice questions (MCQs). How missing data are treated may have a profound effect on a person's score and is handled most elegantly in modern theories. The issue of guessing a correct answer has been a topic of discussion for many years. It is asserted that test takers almost never have no knowledge whatsoever of the content of an appropriate test and therefore tend to make educated guesses rather than random guesses. Problems related to the classical correction for guessing are highlighted, and the Rasch approach of using fit statistics to identify possible guessing is briefly discussed. The three-parameter 'logistic' item response theory (IRT) model includes a 'guessing' item parameter to indicate the chance that a test taker guessed the correct answer to an item. However, it is pointed out that it is a person who guesses, not an item, and therefore a guessing parameter should be a person parameter. Option probability theory (OPT) purports to overcome this problem by requiring the test taker to indicate the degree of certainty that a particular option is the correct one. Realistic allocations of these probabilities indicate the degree of guessing and hence yield more precise measures of ability.
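The three-parameter logistic (3PL) model mentioned above is standard in IRT: the probability of a correct response is P(θ) = c + (1 − c) / (1 + exp(−a(θ − b))), where θ is the person's ability and a, b, c are the item's discrimination, difficulty, and guessing parameters. The sketch below illustrates the point the article makes, that c is attached to the item, not the person (parameter values are illustrative):

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic (3PL) IRT model.

    theta: person ability; a: item discrimination; b: item difficulty;
    c: the item-level 'guessing' parameter, a lower asymptote on the
    probability of a correct response. Note that c belongs to the
    item, which is the modeling choice the article critiques.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At very low ability the probability floors at c rather than at 0,
# reflecting the chance of answering correctly by guessing alone.
low = p_correct_3pl(theta=-6.0, a=1.2, b=0.0, c=0.25)
high = p_correct_3pl(theta=6.0, a=1.2, b=0.0, c=0.25)
print(f"low ability: {low:.3f}, high ability: {high:.3f}")
```

Because c is the same for every person taking the item, the model cannot distinguish a habitual guesser from a cautious test taker, which motivates the article's argument for a person-level guessing parameter or for option-probability responses.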


2008 ◽  
Vol 72 (10) ◽  
pp. 1149-1159 ◽  
Author(s):  
Thomas J. Prihoda ◽  
R. Neal Pinckard ◽  
C. Alex McMahan ◽  
John H. Littlefield ◽  
Anne Cale Jones
