Do Constructed-Response and Multiple-Choice Questions Measure the Same Thing?

Author(s):  
Stephen Hickson ◽  
W. Robert Reed
10.28945/4479 ◽  
2019 ◽  
Vol 18 ◽  
pp. 153-170
Author(s):  
Yolanda Belo ◽  
Sérgio Moro ◽  
António Martins ◽  
Pedro Ramos ◽  
Joana Martinho Costa ◽  
...  

Aim/Purpose: This paper presents a data mining approach for analyzing responses to advanced declarative programming questions. The goal of this research is to find a model that can explain the results students obtain on exams with constructed-response (CR) questions and on exams with equivalent multiple-choice questions (MCQs).
Background: The assessment of acquired knowledge plays a fundamental role in the teaching-learning process. It helps to identify factors that can guide the teacher in developing pedagogical methods and evaluation tools, and it also supports students' self-regulation of learning. However, the better question format for assessing declarative programming knowledge is still a subject of ongoing debate. While some research advocates the use of constructed responses, other work emphasizes the potential of multiple-choice questions.
Methodology: A sensitivity analysis was applied to extract useful knowledge about the relevance of the characteristics (i.e., the input variables) used in the data mining process to compute the score.
Contribution: Such knowledge helps teachers decide which format to choose with respect to their objectives and the expected student results.
Findings: The results show a set of factors that influence the discrepancy between answers in the two formats.
Recommendations for Practitioners: Teachers can make an informed decision about whether to choose multiple-choice or constructed-response questions by taking the results of this study into account.
Recommendation for Researchers: In this study, a block of exams with CR questions is found to complement the learning area, yielding better performance in the evaluation of students and improving the teaching-learning process.
Impact on Society: The results of this research confirm the findings of several other researchers that the use of ICT and the application of MCQs add value to the evaluation process. In most cases the student is more likely to succeed with MCQs; however, if the teacher prefers to evaluate with CR questions, other research approaches are needed.
Future Research: Future research should include other question formats.
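The abstract does not reproduce the authors' exact variable set or model, but the kind of sensitivity analysis it describes, ranking the relevance of the input variables used to predict the score, can be sketched roughly as below. The feature names, the simulated data, and the choice of a permutation-importance analysis over a random-forest model are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of a sensitivity analysis over the input variables of a
# score-prediction model. Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per student/exam, with the CR-vs-MCQ
# score discrepancy as the target to be explained.
rng = np.random.default_rng(0)
n = 300
data = pd.DataFrame({
    "prior_grade": rng.normal(14, 3, n),       # assumed input variable
    "question_topic": rng.integers(0, 5, n),   # assumed input variable
    "time_spent_min": rng.normal(60, 15, n),   # assumed input variable
})
data["score_discrepancy"] = (
    0.5 * data["prior_grade"] - 0.1 * data["time_spent_min"]
    + rng.normal(0, 2, n)
)

X = data.drop(columns="score_discrepancy")
y = data["score_discrepancy"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each input variable
# degrade the model's predictions? Larger drops indicate more relevant
# variables for explaining the CR-vs-MCQ discrepancy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>16}: {imp:.3f}")
```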


2012 ◽  
pp. 1645-1664
Author(s):  
Dimos Triantis ◽  
Errikos Ventouras

The present chapter deals with the variants of grading schemes that are applied in current multiple-choice question (MCQ) tests. MCQs are ideally suited for electronic examinations; as assessment items, they are typically developed in the framework of Learning Content Management Systems (LCMSs) and handled, in the cycle of educational and training activities, by Learning Management Systems (LMSs). Special focus is placed on novel grading methodologies that make it possible to surpass the limitations and drawbacks of the most commonly used grading schemes for MCQs in electronic examinations. The paired MCQs grading method, in which a set of pairs of MCQs is composed, is presented. The two MCQs in each pair address the same topic, but this similarity is not evident to an examinee who does not possess adequate knowledge of that topic. The adoption of the paired MCQs grading method might expand the use of electronic examinations, provided that the new method proves its equivalence to traditional methods that may be considered standard, such as constructed-response (CR) tests. Research efforts in that direction are presented.
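The chapter's exact scoring rule is not given in this abstract, but the core idea of pair-consistent grading can be sketched as follows: a pair earns credit only when both of its topically equivalent questions are answered correctly, which penalizes lucky guesses on a single item. The all-or-nothing rule, the function name, and the data layout below are assumptions made for illustration.

```python
# Illustrative sketch of pair-based MCQ grading: a pair earns credit only
# when both of its (topically equivalent) questions are answered correctly.
# The all-or-nothing rule is an assumption, not the chapter's exact scheme.
from typing import Mapping, Sequence, Tuple

def grade_paired_mcq(
    pairs: Sequence[Tuple[str, str]],   # (question_id_a, question_id_b)
    answer_key: Mapping[str, str],      # question_id -> correct option
    responses: Mapping[str, str],       # question_id -> chosen option
    points_per_pair: float = 1.0,
) -> float:
    """Return the total score over all question pairs."""
    score = 0.0
    for qa, qb in pairs:
        a_correct = responses.get(qa) == answer_key[qa]
        b_correct = responses.get(qb) == answer_key[qb]
        if a_correct and b_correct:     # consistent knowledge of the topic
            score += points_per_pair
    return score

# Example: two pairs; the examinee is consistent only on the first topic.
pairs = [("q1a", "q1b"), ("q2a", "q2b")]
key = {"q1a": "C", "q1b": "A", "q2a": "B", "q2b": "D"}
answers = {"q1a": "C", "q1b": "A", "q2a": "B", "q2b": "B"}
print(grade_paired_mcq(pairs, key, answers))   # -> 1.0
```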


Author(s):  
Yesim Ozer Ozkan ◽  
Nesrin Özaslan

The aim of this study is to determine the level of achievement of students who participated in the Programme for International Student Assessment (PISA) 2003 and PISA 2012 tests in Turkey, according to the question types in the mathematical literacy test. The study is a descriptive survey. Within its scope, the mathematical literacy test items were classified as multiple-choice, complex multiple-choice, and constructed-response items. The ratio of correct, partially correct, and incorrect responses given to each question type was determined. Findings show that students' achievement differs across question types. While the question type with the highest average success in the PISA 2003 test was multiple-choice, students obtained the highest scores on complex multiple-choice questions in the PISA 2012 test. The question type with the lowest average success in the PISA 2003 test was complex multiple-choice, whereas students obtained the lowest scores on constructed-response items in the PISA 2012 test. Under the constructivist education approach implemented in the 2005-2006 academic year, a rise in performance on constructed-response items would be expected; however, the findings reveal that success on constructed-response questions decreased between the two administrations.
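The tabulation the study describes, shares of correct, partially correct, and incorrect responses per question type, could look roughly like the sketch below. The column names and the toy data are hypothetical, not the PISA item-level dataset.

```python
# Sketch of the tabulation described above: share of correct, partially
# correct, and incorrect responses per question type. Data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "question_type": ["multiple_choice", "multiple_choice",
                      "complex_multiple_choice", "constructed_response",
                      "constructed_response", "constructed_response"],
    "outcome": ["correct", "incorrect", "correct",
                "partially_correct", "correct", "incorrect"],
})

# Cross-tabulate and normalize within each question type to get ratios.
ratios = pd.crosstab(responses["question_type"], responses["outcome"],
                     normalize="index")
print(ratios.round(2))
```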


2011 ◽  
Vol 1 ◽  
pp. 119 ◽  
Author(s):  
David DiBattista

Multiple-choice questions are widely used in higher education and have some important advantages over constructed-response test questions. It seems, however, that many teachers underestimate the value of multiple-choice questions, believing them to be useful only for assessing how well students can memorize information, but not for assessing higher-order cognitive skills. Several strategies are presented for generating multiple-choice questions that can effectively assess students’ ability to understand, apply, analyze, and evaluate information.


2021 ◽  
Vol 8 (4) ◽  
pp. 349-360
Author(s):  
Leiv Opstad

The discussion of whether multiple-choice questions can replace the traditional exam with essays and constructed-response questions in introductory courses has only just started in Norway. There is no easy answer: the findings depend on the pattern of the questions, so one must be careful in drawing conclusions. This research explores a selected business course in which 30 percent of the test consists of multiple-choice items. There are obviously some similarities between the two test methods; students who perform well on essays also tend to achieve good results on multiple-choice questions. The results reveal a gender gap, with the multiple-choice-based exam appearing to favor male students. There are challenges in measuring the different dimensions of knowledge, and this study confirms them. Hence, it is too early to conclude that a multiple-choice score is a good predictor of the outcome of an essay exam. This paper provides a useful contribution to the debate in Norway, but it needs to be followed up with more research.
Keywords: multiple choice test, constructed response questions, business school, gender, regression model.
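The keywords mention a regression model; one plausible form of such an analysis is an OLS regression of the essay/constructed-response score on the multiple-choice score and a gender indicator, sketched below. The variable names and simulated data are assumptions, not the study's actual dataset or specification.

```python
# Hedged sketch of the kind of regression analysis suggested by the
# abstract: essay/CR score regressed on MCQ score and a gender indicator.
# Data and variable names are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "mcq_score": rng.normal(60, 10, n),
    "male": rng.integers(0, 2, n),
})
df["essay_score"] = (
    10 + 0.7 * df["mcq_score"] - 2.0 * df["male"] + rng.normal(0, 8, n)
)

# OLS: does the MCQ score predict the essay score, and is there a gender gap?
model = smf.ols("essay_score ~ mcq_score + male", data=df).fit()
print(model.summary())
```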

