Analyzing Cognitive Demands of a Scientific Reasoning Test Using the Linear Logistic Test Model (LLTM)

2021 · Vol 11 (9) · pp. 472
Author(s): Moritz Krell, Samia Khan, Jan van Driel

The development and evaluation of valid assessments of scientific reasoning are an integral part of research in science education. In the present study, we used the linear logistic test model (LLTM) to analyze how item features related to text complexity and the presence of visual representations influence the overall item difficulty of an established multiple-choice instrument assessing scientific reasoning competencies. This study used data from n = 243 pre-service science teachers from Australia, Canada, and the UK. The findings revealed that text complexity and the presence of visual representations increased item difficulty and, in total, contributed to 32% of the variance in item difficulty. These findings suggest that the multiple-choice items contain the following cognitive demands: encoding, processing, and combining of textually presented information from different parts of the items, and encoding, processing, and combining of information that is presented in both the text and images. The present study adds to our knowledge of which cognitive demands are imposed by multiple-choice assessment instruments and whether these demands are relevant for the construct under investigation, in this case scientific reasoning competencies. The findings are discussed and related to the relevant science education literature.
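For readers unfamiliar with the model, the LLTM constrains the Rasch item difficulty to be a weighted sum of feature-specific basic parameters; a minimal statement of the model in standard notation (c is a normalization constant):

```latex
% Rasch model: probability that person v with ability \theta_v solves item i
P(X_{vi} = 1) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}

% LLTM restriction: the item difficulty \beta_i is a weighted sum of basic
% parameters \eta_j, where q_{ij} codes whether (or how strongly) feature j
% (e.g., high text complexity, presence of a visual) applies to item i
\beta_i = \sum_{j=1}^{m} q_{ij}\,\eta_j + c
```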

Author(s): Bettina Hagenmüller

Abstract. The multiple-choice item format is widely used in test construction and large-scale assessment. So far, there has been little research on the impact of the position of the solution among the response options, and the few existing results are inconsistent. Since altering the order of the response options would be an easy way to create parallel items for group settings, the influence of the solution's position on item difficulty should be examined. The Linear Logistic Test Model (Fischer, 1972) was used to analyze the data of 829 students aged 8–20 years, who worked on general knowledge items. It was found that the position of the solution among the response options influences item difficulty: items are easiest when the solution is in the first position and more difficult when the solution is placed in a middle position or at the end of the set of response options.
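To make such a design concrete, here is a minimal sketch of how the solution position could be coded into an LLTM Q-matrix; the item list and the weights are illustrative assumptions, not estimates from the study:

```python
# Minimal sketch of an LLTM design (Q) matrix for the position effect.
# "first" serves as the reference category; the two columns code a middle
# and a last placement of the solution, respectively.
import numpy as np

items = ["first", "middle", "last", "middle", "first", "last"]  # solution slot per item
Q = np.array([[pos == "middle", pos == "last"] for pos in items], dtype=float)

# LLTM: item difficulty beta_i = (Q @ eta)_i (+ a normalization constant).
eta = np.array([0.4, 0.25])  # hypothetical difficulty contributions of middle/last
beta = Q @ eta
print(list(zip(items, beta.round(2))))
```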


2017 · Vol 18 (4) · pp. 559-571
Author(s): George Papageorgiou, Vasilios Amariotakis, Vasiliki Spiliotopoulou

The main objective of this work is to analyse the visual representations (VRs) of the microcosm depicted in nine Greek secondary chemistry school textbooks of the last three decades, in order to construct a systemic network for their main conceptual framework and to evaluate the contribution of each of the resulting categories to the network. The sample comprises 221 VRs of the microcosm: 66 from the 8th grade, 92 from the 9th grade and 63 from the 10th grade. The VRs were analysed qualitatively using the phenomenographic method, followed by a basic quantitative analysis. The results provide a network that can help science teachers and textbook designers identify the variety of codes employed in these VRs and the ways in which VRs can be used, as well as determine possible causes of relevant student misconceptions. The quantitative analysis indicates an effect of grade on the content of VRs, and relevant implications for science education are discussed.


2015 · Vol 223 (1) · pp. 47-53
Author(s): Stefan Hartmann, Annette Upmeier zu Belzen, Dirk Krüger, Hans Anand Pant

The aim of this study was to develop a standardized test designed to measure preservice science teachers' scientific reasoning skills, and to provide an initial evaluation of its psychometric properties. We constructed 123 multiple-choice items, using 259 students' conceptions to generate highly attractive multiple-choice response options. In an item response theory-based validation study (N = 2,247), we applied multiple regression analyses to test hypotheses based on groups with known attributes. As predicted, graduate students performed better than undergraduate students, and students who studied two natural science disciplines performed better than students who studied only one. Contrary to our initial hypothesis, preservice science teachers performed less well than a control group of natural sciences students. Remarkably, an interaction effect of degree program (bachelor vs. master) and qualification (natural sciences student vs. preservice teacher) was found, suggesting that preservice science teachers' learning opportunities to explicitly discuss and reflect on the inquiry process have a positive effect on the development of their scientific reasoning skills. We conclude that the evidence supports the criterion-based validity of our interpretation of the test scores as measures of scientific reasoning competencies.
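A hedged sketch of the known-groups contrast described above; the data frame, scores, and column names are hypothetical, and only the model structure (main effects plus the degree-by-qualification interaction) follows the abstract:

```python
# Minimal sketch of a known-groups validation regression (made-up data;
# only the model structure mirrors the study design described above).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":  [0.1, 0.4, -0.2, 0.6, 0.0, 0.9, -0.3, 0.5],
    "degree": ["bachelor", "master"] * 4,
    "group":  ["teacher"] * 4 + ["science"] * 4,
})

# degree * group expands to both main effects and their interaction,
# corresponding to the reported degree-by-qualification interaction effect.
model = smf.ols("score ~ degree * group", data=df).fit()
print(model.params)
```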


2018 · Vol 10 (1) · pp. 41
Author(s): Hasan Ozgur Kapici, Hakan Akcay

Learning in laboratories is not only crucial for students' conceptual understanding but also contributes to gaining scientific reasoning skills. Following rapid developments in technology, online laboratory environments have improved considerably and nowadays form an attractive alternative to hands-on laboratories. This study was conducted to reveal pre-service science teachers' preferences for hands-on or online laboratory environments. The participants were 41 pre-service science teachers enrolled in a 13-week course on Laboratory Applications in Science Education. The findings showed that more than half of the pre-service science teachers would prefer hands-on laboratory environments, both for conceptual teaching in their classrooms and for developing their students' science process skills. The reasons behind their choices are discussed.


Author(s): Mohammad Ghahramanlou, Zahra Zohoorian, Purya Baghaei

The purpose of this study is to examine the cognitive processes underlying the listening comprehension section of IELTS and to investigate whether they vary in difficulty. For this purpose, a checklist of possible cognitive operations was prepared based on the literature and the candidates' feedback. The checklist consisted of six cognitive operations. A sample IELTS listening test was administered to 310 upper-intermediate and advanced students of English. The linear logistic test model was employed to analyse the data. The findings showed that keeping up with the pace of the speaker and understanding reduced forms were the most difficult operations for the listeners. Altogether, the six operations explained 72% of the variance in item difficulty estimates. Implications of the study for the testing and teaching of listening comprehension are discussed.
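As a rough illustration of the variance-explained figure, a two-step approximation is sketched below: item difficulties are regressed on the checklist codings and an R² is computed. All numbers are simulated, and the two-step regression is an approximation of, not identical to, a full LLTM estimation:

```python
# Rough illustration: "estimated" item difficulties regressed on six
# cognitive-operation codings; only the R^2 computation is analogous
# to the variance-explained result the abstract reports.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_ops = 40, 6
Q = rng.integers(0, 2, size=(n_items, n_ops)).astype(float)   # checklist codings
eta_true = np.array([0.8, 0.6, 0.3, 0.2, 0.5, 0.4])           # assumed weights
beta = Q @ eta_true + rng.normal(0, 0.4, n_items)             # simulated difficulties

X = np.column_stack([np.ones(n_items), Q])                    # intercept + features
coef, *_ = np.linalg.lstsq(X, beta, rcond=None)
resid = beta - X @ coef
r2 = 1 - resid.var() / beta.var()
print(f"variance in item difficulty explained: {r2:.0%}")
```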


2016 · Vol 38 (4)
Author(s): Christine Hohensinn, Klaus D. Kubinger

Educational and psychological aptitude and achievement tests employ a variety of response formats. Today, the predominant format is the multiple-choice format with a single correct answer option (typically one of four answer options altogether), because it allows fast, economical and objective scoring. However, it is often noted that multiple-choice questions are easier than those with a constructed response format, for which the examinee has to create the solution without given answer options. The present study investigates the influence of three different response formats on item difficulty, using stem-equivalent items in a mathematical competence test. The impact of the formats is modelled using the Linear Logistic Test Model (Fischer, 1974) from Item Response Theory. In summary, the different response formats measure the same latent trait but bias the difficulty of the items.


2017
Author(s): Fahimeh Khoshdel

The C-Test is a gap-filling test belonging to the family of reduced redundancy tests, which is used as an overall measure of general language proficiency in a second or native language. There is no consensus on the construct underlying the C-Test, and many researchers are still puzzled by what is actually activated when examinees take a C-Test. The purpose of the present study is to cast light on this issue by examining the factors that contribute to C-Test item difficulty. A number of factors were selected and entered into a regression model to predict item difficulty. The linear logistic test model was also used to support the results of the regression analysis. The findings showed that the selected factors explained only 12 per cent of the variance in item difficulty estimates. Implications of the study for C-Test validity and application are discussed.


2021 · Vol 6
Author(s): Jere Confrey, Meetal Shah, Emily Toutkoushian

This study reports how a validation argument for a learning trajectory (LT) is constituted from test design, empirical recovery, and data use through a collaborative process, described as a "trading zone" among learning scientists, psychometricians, and practitioners. The validation argument is tied to a learning theory about learning trajectories and a framework (LT-based data-driven decision-making, or LT-DDDM) to guide instructional modifications. A validation study was conducted on a middle school LT on "Relations and Functions" using a Rasch model and stepwise regression. Of five potentially non-conforming items, three were adjusted, one was retained to collect more data, and one was flagged as a discussion item. One LT level description was revised. A linear logistic test model (LLTM) revealed that LT level and item type explained substantial variance in item difficulty. Using the LT-DDDM framework, a hypothesized teacher analysis of a class report led to three conjectures for interventions, demonstrating the LT assessment's potential to inform instructional decision-making.


2010 · Vol 65 (4) · pp. 257-282
Author(s): 전유아, 신택수
