An experimental study of fixation of response by college students in a multiple-choice situation.

1939 · Vol 25 (5) · pp. 431-444
Author(s): R. M. Gottsdanker

2012 · Vol 1 · pp. 11.03.IT.1.4
Author(s): Jasna Vuk, David T. Morse

2020 · Vol 9 (2) · pp. 57-69
Author(s): Onna Brewer, Orhan Erdem

Present bias, also called delay discounting, is the difficulty of resisting a smaller immediate reward in favor of a larger future one; it has been associated with a range of suboptimal behaviors and health outcomes. Several methods have been proposed to reduce this bias and promote self-control. In this randomized experimental study of 137 undergraduate college students, the authors examined the effect of a 10-minute values clarification writing exercise on present bias in a monetary decision-making task, compared with a neutral writing activity. Although participants in the values clarification condition showed less present-biased behavior, the difference was not statistically significant at the .05 level. The authors therefore emphasize implications for future research and practice aimed at reducing present bias and building better communities.


2019 · Vol 16 (1) · pp. 59-73
Author(s): Peter McKenna

Purpose: This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer, and whether constructed response questions (CRQs) offer more reliable assessment.

Design/methodology/approach: The paper presents a critical review of existing research on MCQs, then reports on an experimental study in which two objective tests (using MCQs and CRQs) were set for an introductory undergraduate course. To maximise completion, the tests were kept short; consequently, differences between individuals' scores across the two tests are examined rather than overall averages and pass rates.

Findings: Most students who excelled in the MCQ test did not do so in the CRQ test. Students could do well without necessarily understanding the principles being tested.

Research limitations/implications: Conclusions are limited by the small number of questions in each test and by delivery of the tests at different times. This meant that statistical average data would be too coarse to use, and that some students took one test but not the other. Conclusions concerning CRQs are limited to disciplines where numerical answers or short, constrained text answers are appropriate.

Practical implications: MCQs, while useful in formative assessment, are best avoided for summative assessments. Where appropriate, CRQs should be used instead.

Social implications: MCQs are commonplace as summative assessments in education and training. Increasing the use of CRQs in place of MCQs should increase the reliability of tests, including those administered in safety-critical areas.

Originality/value: While others have recommended that MCQs not be used because they are vulnerable to guessing (Hinchliffe, 2014; Srivastava et al., 2004), this paper presents an experimental study designed to test whether this hypothesis is correct.

