DEVELOPMENT OF MATHEMATICS READING ASSESSMENT: PSYCHOMETRIC EVALUATION BASED ON SEM AND IRT

2021, Vol 6 (38), pp. 46-56
Author(s): Yuan-Horng Lin, Liang-Ting Tsai

The purpose of this study is to evaluate the psychometric properties of a mathematics reading assessment based on structural equation modeling (SEM) and item response theory (IRT). The assessment is conducted in the form of text reading related to real-life probability and statistics issues. The sample is 789 sixth graders from 13 primary schools in Taiwan. The latent construct of mathematics reading comprises three factors based on the content area reading components proposed by M. C. McKenna and R. D. Robinson: general reading comprehension, prior knowledge of mathematics, and mathematics-specific skills. First, the researchers develop the mathematics reading assessment and confirm its item difficulties, item discriminations, and reliability. A structural equation model is then adopted to test the factor structure of the assessment. The analysis shows that the item difficulties, item discriminations, and reliability are satisfactory, and the structural equation model confirms a three-factor structure consistent with the content area reading components mentioned above. Second, the study calibrates item characteristics with the three-parameter logistic (3PL) model of IRT. Results indicate that all items fit the model well. Items measuring prior knowledge of mathematics have the highest discriminations, while items measuring mathematics-specific skills have the highest difficulties. The study thus establishes a well-structured mathematics reading assessment, and the results provide references for the instruction and assessment of mathematics reading. Finally, recommendations for future research and methodology are discussed.
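For reference, the three-parameter logistic model mentioned above gives the probability of a correct response as a function of examinee ability (theta), item discrimination (a), item difficulty (b), and a pseudo-guessing parameter (c). A minimal Python sketch of the model (the parameter values are illustrative assumptions, not estimates from the study):

    import numpy as np

    def three_pl(theta, a, b, c):
        # 3PL IRT model: P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # Hypothetical item: moderate discrimination, average difficulty, and a
    # guessing floor of 0.25 (e.g. a four-option multiple-choice item).
    abilities = np.linspace(-3.0, 3.0, 7)   # standardized ability levels
    print(np.round(three_pl(abilities, a=1.2, b=0.0, c=0.25), 3))

A higher a steepens the response curve (sharper discrimination between abilities near b), while c sets the floor that a low-ability examinee reaches by guessing.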

2019, Vol 4 (9), pp. 157-164
Author(s): Ioannis Katsenos, Spyros Papadakis, George S. Androulakis

This study attempts a quantitative assessment of an educational program/course by evaluating the trainees' final deliverables against a predefined set of items linked to the desired learning outcomes, each item scored on a predefined scale. Statistical analysis of the item grades, first with factor analysis and then with an Item Response Theory model, indicates the degree to which each learning outcome has been achieved and thereby guides the training designers in modifying training strategies for a subsequent cycle of the program/course. The concept was tested on a teacher training course on flipped classroom methodology. The Item Response Theory analysis revealed which learning outcomes were only partially achieved or not achieved at all, in very good agreement with the trainers' intuitive observations. In future work, such quantitative assessment could incorporate Structural Equation Modelling (SEM) tools to assess the relations among learning outcomes, prior knowledge, and teaching practices, as well as temporal analysis during course execution using not only final data but also data from intermediate phases.
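As a rough illustration of the first analysis step, the factor structure of the item grades can be explored with an exploratory factor analysis. A minimal sketch using scikit-learn (the grade matrix is simulated and the three-factor count is an assumption for illustration, not the study's data or design):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Placeholder grades: rows = trainee deliverables, columns = rubric items.
    grades = rng.integers(0, 5, size=(60, 12)).astype(float)

    # Extract latent factors; ideally each factor aligns with one learning outcome.
    fa = FactorAnalysis(n_components=3, random_state=0).fit(grades)
    loadings = fa.components_.T              # items x factors loading matrix
    print(np.round(loadings, 2))

Items loading strongly on the same factor can then be calibrated together with an IRT model to gauge how fully the corresponding learning outcome was achieved.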


2020, Vol 10 (2), p. 259
Author(s): Yusuf F. Zakariya, Hans Kristian Nilsen, Simon Goodchild, Kirsten Bjørkestøl

The importance of students' prior knowledge to their current learning outcomes cannot be overemphasised. Students with adequate prior knowledge are better prepared for current learning materials than those without it. However, assessment of engineering students' prior mathematics knowledge has been beset by a lack of uniformity in measuring instruments and by inadequate validity studies. This study provides evidence of the validity and reliability of a Norwegian national test of prior mathematics knowledge using an explanatory sequential mixed-methods approach: an item response theory model followed by cognitive interviews with a subset of the 201 first-year engineering students who constitute the study sample. The findings confirm acceptable construct validity for the test, with reliable items and a high reliability coefficient of .92 for the whole test. Mixed results are found for the discrimination and difficulty indices: some questions have unacceptable discrimination and require improvement, some are too easy, and some appear too difficult for students. The cognitive interviews reveal likely reasons for students' difficulty with some questions: misunderstanding of the questions, misreading of the text, a poor grasp of word-problem tasks, and the unavailability of calculators. The findings underscore the significance of validity and reliability checks of test instruments and their effect on scoring and on computing aggregate scores. The methodological approach to validity and reliability checks in the present study can be applied to other national contexts.
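The reliability and item indices reported above are directly computable from scored responses. A minimal numpy sketch of Cronbach's alpha together with classical difficulty (proportion correct) and corrected item-total discrimination (the 0/1 response matrix is simulated, not the study's data):

    import numpy as np

    def cronbach_alpha(scores):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(1)
    responses = (rng.random((201, 20)) > 0.4).astype(float)  # simulated scores

    difficulty = responses.mean(axis=0)       # proportion correct per question
    total = responses.sum(axis=1)
    # Corrected item-total correlation: each item vs. the rest of the test.
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])
    print(round(cronbach_alpha(responses), 2), np.round(difficulty[:5], 2))

Low or negative corrected item-total correlations would flag the kind of poorly discriminating questions the study singles out for improvement.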

