On the use of a three‐words‐per‐item format in tests for the hearing of speech

1980 ◽  
Vol 67 (1) ◽  
pp. 345-347 ◽  
Author(s):  
J. Donald Harris
2011 ◽  
Vol 39 (1) ◽  
pp. 119-128 ◽  
Author(s):  
John F. Rauthmann

In this article I argue that, in addition to item content, item formats (i.e., phrasing and response formats) are also important. Most trait items can be mapped onto 4 dimensions: point of reference (first person, possessive, others, indicator), general item format (staticity, frequency, valency, frequency + valency), construct indicator (attributal, behavioral, mental, contextual), and conditionality (unconditional, conditional). An item taxonomy tree for the first-person perspective is provided for an Openness to Experiences item, and NEO-PI-R Extraversion items are analyzed according to the 4 item-format dimensions. Future lines of research on item phrasing are outlined.
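The four dimensions named in the abstract can be sketched as a small data structure. The category labels below are taken from the abstract itself; the dictionary encoding, the `classify` helper, and the example item are hypothetical illustrations, not part of Rauthmann's taxonomy.

```python
# Sketch of the 4 item-format dimensions, using the category labels
# from the abstract. The encoding itself is illustrative.
ITEM_FORMAT_DIMENSIONS = {
    "point_of_reference": ["first person", "possessive", "others", "indicator"],
    "general_item_format": ["staticity", "frequency", "valency",
                            "frequency + valency"],
    "construct_indicator": ["attributal", "behavioral", "mental", "contextual"],
    "conditionality": ["unconditional", "conditional"],
}

def classify(item_text, **choices):
    """Record a (hypothetical) classification of one trait item,
    checking each choice against its dimension's categories."""
    for dim, value in choices.items():
        if value not in ITEM_FORMAT_DIMENSIONS[dim]:
            raise ValueError(f"{value!r} is not a category of {dim}")
    return {"item": item_text, **choices}

# Example: a first-person, frequency-phrased, behavioral, unconditional item.
profile = classify(
    "I often try new foods.",
    point_of_reference="first person",
    general_item_format="frequency",
    construct_indicator="behavioral",
    conditionality="unconditional",
)
```

One design point: keeping the dimensions in a single mapping makes it easy to enumerate every cell of the taxonomy (a Cartesian product over the four lists) when generating or auditing item variants.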


2013 ◽  
Vol 13 (4) ◽  
pp. 295-313 ◽  
Author(s):  
Brian J. Hess ◽  
Mary M. Johnston ◽  
Rebecca S. Lipner

1993 ◽  
Vol 32 (6) ◽  
pp. 6-11
Author(s):  
Robert E. Llaneras ◽  
Thadeus L. Arrington ◽  
Robert W. Swezey ◽  
Dennis L. Faust

2000 ◽  
Vol 13 (1) ◽  
pp. 55-77 ◽  
Author(s):  
Christine E. DeMars

1980 ◽  
Vol 7 (3) ◽  
pp. 150-153
Author(s):  
Georganne White-Blackburn ◽  
Timothy C. Blackburn ◽  
John R. Lutzker

A counterbalanced reversal-design experiment concluded that quiz item format made no difference in test performance.


2018 ◽  
Vol 47 (5) ◽  
pp. 284-294 ◽  
Author(s):  
Sean F. Reardon ◽  
Demetra Kalogrides ◽  
Erin M. Fahle ◽  
Anne Podolsky ◽  
Rosalía C. Zárate

Prior research suggests that males outperform females, on average, on multiple-choice items compared to their relative performance on constructed-response items. This paper characterizes the extent to which gender achievement gaps on state accountability tests across the United States are associated with those tests’ item formats. Using roughly 8 million fourth- and eighth-grade students’ scores on state assessments, we estimate state- and district-level math and reading male-female achievement gaps. We find that the estimated gaps are strongly associated with the proportions of the test scores based on multiple-choice and constructed-response questions on state accountability tests, even when controlling for gender achievement gaps as measured by the National Assessment of Educational Progress (NAEP) or Northwest Evaluation Association (NWEA) Measures of Academic Progress (MAP) assessments, which have the same item format across states. We find that test item format explains approximately 25% of the variation in gender achievement gaps among states.
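The "explains approximately 25% of the variation" claim corresponds to an R² from regressing state-level gender gaps on test item-format composition. A minimal sketch of that computation, using fabricated data (the numbers, seed, and variable names are illustrative and do not reproduce the paper's estimates):

```python
# Illustrative only: simulate state-level gaps, then compute the R^2 of a
# simple linear regression of gap on multiple-choice share. For a single
# predictor, R^2 equals the squared Pearson correlation.
import random

random.seed(0)
n = 50  # one observation per state (fabricated)
mc_share = [random.uniform(0.3, 1.0) for _ in range(n)]       # MC proportion
gap = [0.10 * s + random.gauss(0, 0.03) for s in mc_share]    # simulated gap

mean_x = sum(mc_share) / n
mean_y = sum(gap) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(mc_share, gap))
sxx = sum((x - mean_x) ** 2 for x in mc_share)
syy = sum((y - mean_y) ** 2 for y in gap)
r_squared = sxy ** 2 / (sxx * syy)  # share of gap variance explained
```

The paper's actual analysis additionally controls for NAEP and NWEA MAP gaps; this sketch shows only the variance-explained mechanics.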


Author(s):  
Matthew Debell ◽  
Catherine Wilson ◽  
Simon Jackman ◽  
Lucila Figueroa

Abstract This article reports the results of an experiment comparing branch, grid, and single-item question formats in an internet survey with a nationally representative probability sample. We compare the three formats in terms of administration time, item nonresponse, survey breakoff rates, response distribution, and criterion validity. On average, the grid format obtained the fastest answers, the single-item format was intermediate, and the branch format took the longest. Item nonresponse rates were lowest for the single-item format, intermediate for the grid, and highest for branching, but these results were not statistically significant when modeling the full experimental design. Survey breakoff rates among the formats are not statistically distinguishable. Criterion validity was weakest in the branching format, and there was no significant difference between the grid and single-item formats. This evidence indicates that the branching format is not well suited to internet data collection and that both single-item and short, well-constructed grids are better question formats.

