Use of Adjustment by Minimum Discriminant Information in Linking Constructed‐Response Test Scores in the Absence of Common Items

2019 ◽  
Vol 56 (2) ◽  
pp. 452-472 ◽  
Author(s):  
Yi‐Hsuan Lee ◽  
Shelby J. Haberman ◽  
Neil J. Dorans

1994 ◽  
Vol 2 (2) ◽  
pp. 133-147 ◽  
Author(s):  
Steven J. Osterlind ◽  
William R. Merz

2019 ◽  
Vol 18 (3) ◽  
pp. 158-176 ◽  
Author(s):  
Nicole Ackermann ◽  
Christin Siegfried

Studies indicate that male students outperform female students in economic literacy and that a specific item format (selected-response, constructed-response) favours either males or females. This study analyses the relationship between item format and gender in economic-civic competence using the WBK-T2 test ("revidierter Test zur wirtschaftsbürgerlichen Kompetenz"). The WBK-T2 encompasses 32 items, of which 53% have a selected-response format and 47% a constructed-response format. To answer the research questions, we used a sample of 375 Swiss high school students and ran t-tests and multiple regression analyses. Male students significantly outperformed female students in the overall test score, in the selected-response test score, and in the constructed-response test score, but effect sizes were rather small. Interest in socio-economic issues predicted the test scores but did not moderate them; prior knowledge in economics, however, did. Our results indicate that the balanced composition of the WBK-T2 with respect to selected-response and constructed-response items overcomes the gender gap in overall and format-related test scores for students with prior knowledge in economics. However, this does not apply to students without prior knowledge in economics. Thus, there must be other test-external variables, such as prior knowledge in economics, that cause the gender gap in economic-civic competence.
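The analysis described above can be sketched as follows. This is an illustrative example only: the scores, group sizes, and prior-knowledge indicator are invented, not the WBK-T2 data, and it shows a generic Welch t-test, a Cohen's d effect size, and an ordinary-least-squares regression of the kind the abstract mentions.

```python
# Hypothetical sketch of a gender-gap analysis: Welch t-test, Cohen's d,
# and a multiple regression of score on gender and prior knowledge.
# All data are synthetic; group sizes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male = rng.normal(21.0, 4.0, 190)     # hypothetical overall test scores
female = rng.normal(19.8, 4.0, 185)

# Welch t-test for the gender difference in overall scores
t, p = stats.ttest_ind(male, female, equal_var=False)

# Cohen's d (pooled standard deviation) as an effect-size estimate
pooled_sd = np.sqrt((male.var(ddof=1) + female.var(ddof=1)) / 2)
d = (male.mean() - female.mean()) / pooled_sd

# Multiple regression: score ~ intercept + gender + prior knowledge (0/1)
scores = np.concatenate([male, female])
gender = np.concatenate([np.ones(male.size), np.zeros(female.size)])
prior = rng.integers(0, 2, scores.size).astype(float)  # invented covariate
X = np.column_stack([np.ones(scores.size), gender, prior])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
print(f"coefficients (intercept, gender, prior): {np.round(beta, 2)}")
```

A "rather small" effect size in the abstract's sense would correspond to a Cohen's d of roughly 0.2; testing a moderation hypothesis would additionally require a gender-by-prior-knowledge interaction term in the design matrix.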


2014 ◽  
Vol 22 (3) ◽  
pp. 286-296 ◽  
Author(s):  
T. Dary Erwin

Purpose – The purpose of this paper is to refine and measure esthetic development.
Design/methodology/approach – Three phases of data collection were conducted using four separate student samples (n = 120, 154, 241, and 343). In Phase I, an initial esthetic development stage model was tested with a constructed-response test format using generalizability measurement theory. In Phase II, this conceptual model of esthetic development was refined with a modified constructed-response format. In Phase III, a selected-response test format was designed with five esthetic development stage scores, which were correlated with grades in several artistic discipline-based and interdisciplinary courses.
Findings – Higher esthetic development stages correlated with verbal ability and grades in interdisciplinary general education arts courses. Lower esthetic development stages were associated with lower verbal ability and grades in traditionally taught discipline-based arts courses.
Research limitations/implications – This study did not examine whether attendance at arts events and activities supports or leads to higher esthetic development.
Social implications – People at Stages Four and Five of this esthetic development model are able to compare artistic experiences – whether visual or performing art – within a historical and cultural context or perspective. Individuals at these highest stages are able to communicate the social significance and societal themes of the artistic experience to wider audiences.
Originality/value – No accepted model or assessment method for the arts in higher education is available. Although the arts are commonly accepted as important in higher education, there is a paucity of research about esthetic development in the curriculum. This paper attempts to address this gap, in part, and to advance further study of the quality of arts programs and activities in higher education.


2011 ◽  
Vol 1 ◽  
pp. 119 ◽  
Author(s):  
David DiBattista

Multiple-choice questions are widely used in higher education and have some important advantages over constructed-response test questions. It seems, however, that many teachers underestimate the value of multiple-choice questions, believing them to be useful only for assessing how well students can memorize information, but not for assessing higher-order cognitive skills. Several strategies are presented for generating multiple-choice questions that can effectively assess students’ ability to understand, apply, analyze, and evaluate information.
