Estimation of Latent Regression Item Response Theory Models Using a Second-Order Laplace Approximation

2020 ◽  
pp. 107699862094519
Author(s):  
Björn Andersson ◽  
Tao Xin

The estimation of high-dimensional latent regression item response theory (IRT) models is difficult because of the need to approximate integrals in the likelihood function. Proposed solutions in the literature include using stochastic approximations, adaptive quadrature, and Laplace approximations. We propose using a second-order Laplace approximation of the likelihood to estimate IRT latent regression models with categorical observed variables and fixed covariates, where all parameters are estimated simultaneously. The method applies when the IRT model has a simple structure, meaning that each observed variable loads on only one latent variable. Through simulations using a latent regression model with binary and ordinal observed variables, we show that the proposed method is a substantial improvement over the first-order Laplace approximation with respect to bias. In addition, the approach is equally or more precise than alternative methods for estimation of multidimensional IRT models when the number of items per dimension is moderately high. At the same time, the method is highly computationally efficient in the high-dimensional settings investigated. The results imply that estimation of simple-structure IRT models with very high dimensions is feasible in practice and that the direct estimation of high-dimensional latent regression IRT models is tractable even with large sample sizes and large numbers of items.
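The abstract does not reproduce the authors' estimator, but the first-order Laplace approximation that the second-order method improves upon is easy to sketch for a one-dimensional integrand: replace the log-integrand by its second-order Taylor expansion at the mode and integrate the resulting Gaussian in closed form. A minimal illustration (not the article's implementation; the toy likelihood term is invented for the example):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def laplace_approx(log_f):
    """First-order Laplace approximation of the integral of exp(log_f(x)).

    Finds the mode x_hat of log_f, approximates log_f by its second-order
    Taylor expansion there, and integrates the resulting Gaussian exactly.
    """
    res = minimize_scalar(lambda x: -log_f(x))
    x_hat = res.x
    h = 1e-5  # central-difference second derivative at the mode
    d2 = (log_f(x_hat + h) - 2 * log_f(x_hat) + log_f(x_hat - h)) / h**2
    return np.exp(log_f(x_hat)) * np.sqrt(2 * np.pi / -d2)

# Toy likelihood contribution: standard-normal prior times a logistic term,
# the kind of integrand that arises per person in marginal IRT likelihoods.
log_f = lambda x: -0.5 * x**2 + np.log(1.0 / (1.0 + np.exp(-x)))
approx = laplace_approx(log_f)
exact, _ = quad(lambda x: np.exp(log_f(x)), -10, 10)
```

For this integrand the first-order approximation is already within about 1% of the quadrature answer; the article's contribution is a second-order correction that reduces the remaining bias when many such integrals are nested in a high-dimensional likelihood.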

2021 ◽  
Vol 117 ◽  
pp. 106849
Author(s):  
Danilo Carrozzino ◽  
Kaj Sparle Christensen ◽  
Giovanni Mansueto ◽  
Fiammetta Cosci

2021 ◽  
Vol 8 (3) ◽  
pp. 672-695
Author(s):  
Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to common IRT models such as Rasch and Guttman scaling. The procedure can be used for the dichotomous and ordinal polytomous data commonly collected with questionnaires. The assumptions of MSA are discussed, as well as the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set that included over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
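The scalability checks in MSA center on Loevinger's H coefficient, which compares observed Guttman errors (passing a harder item while failing an easier one) with those expected under marginal independence; the mokken package computes it with coefH(). As a language-neutral sketch of the idea (in Python rather than R, and not the package's implementation), for dichotomous items:

```python
import numpy as np

def loevinger_h(X):
    """Overall Loevinger H for a matrix of dichotomous responses.

    X: (persons, items) 0/1 array. Items are ordered from easiest to
    hardest by popularity; a Guttman error is passing the harder item of
    a pair while failing the easier one. H = 1 - (observed errors) /
    (errors expected under marginal independence).
    """
    n, k = X.shape
    p = X.mean(axis=0)              # item popularities
    order = np.argsort(-p)          # easiest (most popular) first
    X, p = X[:, order], p[order]
    obs = expd = 0.0
    for i in range(k):
        for j in range(i + 1, k):   # item j is harder than item i
            obs += np.sum((X[:, i] == 0) & (X[:, j] == 1))
            expd += n * (1 - p[i]) * p[j]
    return 1.0 - obs / expd

# A perfect Guttman pattern produces no errors, so H = 1.
guttman = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]])
h = loevinger_h(guttman)
```

A Mokken scale conventionally requires H of at least 0.3; values near 1 indicate near-Guttman ordering.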


2021 ◽  
pp. 43-48
Author(s):  
Rosa Fabbricatore ◽  
Francesco Palumbo

Evaluating learners' competencies is a crucial concern in education, and structured tests administered at home or in the classroom represent an effective assessment tool. Structured tests consist of sets of items that can refer to several abilities or to more than one topic. Several statistical approaches allow evaluating students in a multidimensional way, accounting for the structure of the items. Depending on the final aim of the evaluation, the assessment process either assigns a final grade to each student or clusters students into homogeneous groups according to their level of mastery and ability. The latter is a helpful tool for developing tailored recommendations and remediations for each group, and latent class models are a reference approach for this purpose. In the item response theory (IRT) paradigm, multidimensional latent class IRT models, relaxing both the traditional assumption of unidimensionality and that of a continuous latent trait, make it possible to detect sub-populations of homogeneous students according to their proficiency level while also accounting for the multidimensional nature of their ability. Moreover, the semi-parametric formulation offers several practical advantages: It avoids normality assumptions that may not hold and reduces the computational burden. This study compares the results of multidimensional latent class IRT models with those obtained by a two-step procedure, which consists of first fitting a multidimensional IRT model to estimate students' ability and then applying a clustering algorithm to classify students accordingly. For the latter, both parametric and non-parametric approaches were considered. Data refer to the admission test for the degree course in psychology administered in 2014 at the University of Naples Federico II. The sample comprised N = 944 students, and their ability dimensions were defined according to the domains assessed by the entrance exam, namely Humanities, Reading and Comprehension, Mathematics, Science, and English.
In particular, a multidimensional two-parameter logistic IRT model for dichotomously scored items was considered for estimating students' ability.
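The two-parameter logistic (2PL) model mentioned above gives, for each item, the probability of a correct response as a logistic function of the person's ability, with item-specific discrimination and difficulty parameters. A minimal unidimensional version (illustrative, with made-up parameter values):

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function.

    theta: latent ability; a: item discrimination; b: item difficulty.
    Returns P(correct response | theta).
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A person whose ability equals the item difficulty answers correctly
# with probability 0.5; well above it, the probability approaches 1.
p_mid = irf_2pl(theta=0.0, a=1.5, b=0.0)
p_high = irf_2pl(theta=2.0, a=1.5, b=0.0)
```

In the multidimensional case, a(theta - b) is replaced by a linear combination of several latent traits; in the latent class variant, theta is restricted to a discrete set of support points, one per class.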


2020 ◽  
Vol 44 (7-8) ◽  
pp. 566-567
Author(s):  
Shaoyang Guo ◽  
Chanjin Zheng ◽  
Justin L. Kern

A recently released R package, IRTBEMM, is presented in this article. The package brings together several new estimation algorithms (Bayesian EMM, Bayesian E3M, and their maximum likelihood versions) for item response theory (IRT) models with guessing and slipping parameters (e.g., the 3PL, 4PL, 1PL-G, and 1PL-AG models). IRTBEMM should be of interest to researchers working on IRT estimation and to practitioners applying IRT models with guessing and slipping effects to real datasets.
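The guessing and slipping parameters these models add are the lower and upper asymptotes of the item response curve. A sketch of the four-parameter logistic (4PL) response function, from which the 3PL and 2PL fall out as special cases (illustrative Python, not the IRTBEMM implementation, with invented parameter values):

```python
import numpy as np

def irf_4pl(theta, a, b, c, d):
    """Four-parameter logistic IRT item response function.

    a: discrimination; b: difficulty; c: guessing (lower asymptote);
    d: 1 - slipping (upper asymptote). Setting d = 1 gives the 3PL;
    c = 0 and d = 1 together recover the 2PL.
    """
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Very low ability: performance floors at the guessing rate c.
low = irf_4pl(theta=-6.0, a=1.0, b=0.0, c=0.2, d=0.95)
# Very high ability: performance ceilings at d < 1 because of slipping.
high = irf_4pl(theta=6.0, a=1.0, b=0.0, c=0.2, d=0.95)
```

The asymptotes are what make these models hard to estimate by plain maximum likelihood, which motivates the Bayesian EMM-type algorithms the package provides.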


2010 ◽  
Vol 2010 ◽  
pp. 1-6 ◽  
Author(s):  
M. Elaine Cress ◽  
Yasuyuki Gondo ◽  
Adam Davey ◽  
Shayne Anderson ◽  
Seock-Ho Kim ◽  
...  

Centenarians display a broad variation in physical abilities, from independence to bed-bound immobility. This range of abilities makes it difficult to evaluate functioning using a single instrument. Using data from a population-based sample of 244 centenarians (mean age = 100.57 years, 84.8% women, 62.7% institutionalized, and 21.3% African American) and 80 octogenarians (mean age = 84.32 years, 66.3% women, 16.3% institutionalized, and 17.5% African American), we (1) provide norms on the Short Physical Performance Battery (SPPB) and (2) extend the range of this scale using performance on additional tasks and item response theory (IRT) models, reporting information on the concurrent and predictive validity of this approach. Using the original SPPB scoring criteria, 73.0% of centenarian men and 86.0% of centenarian women are identified as severely impaired by the scale's original classification scheme. Results suggest that conventional norms for older adults need substantial revision for centenarian populations and that item response theory methods can be helpful to address the floor and ceiling effects found with any single measure.


2017 ◽  
Vol 41 (7) ◽  
pp. 512-529 ◽  
Author(s):  
William R. Dardick ◽  
Brandi A. Weiss

This article introduces three new variants of entropy to detect person misfit (Ei, EMi, and EMRi) and provides preliminary evidence that these measures are worthy of further investigation. Previously, entropy has been used as a measure of approximate data-model fit to quantify how well individuals are classified into latent classes, and to quantify the quality of classification and separation between groups in logistic regression models. In the current study, entropy is explored through conceptual examples and a Monte Carlo simulation comparing entropy with established measures of person fit in item response theory (IRT) such as lz, lz*, U, and W. Simulation results indicated that EMi and EMRi were able to detect aberrant response patterns when comparing contaminated and uncontaminated subgroups of persons. In addition, EMi and EMRi performed similarly in showing separation between the contaminated and uncontaminated subgroups. However, EMRi may be advantageous over other measures when subtests include a small number of items. EMi and EMRi are recommended for use as approximate person-fit measures for IRT models. These measures of approximate person fit may be useful in making relative judgments about persons whose response patterns do not fit the theoretical model.
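The exact definitions of Ei, EMi, and EMRi are given in the article and are not reproduced here; the generic ingredient they build on is the Shannon entropy of model-implied response probabilities, which is largest when responses are maximally unpredictable. A purely illustrative computation under a 2PL model (this is not any of the article's statistics):

```python
import numpy as np

def response_entropy(theta, a, b):
    """Sum of binary Shannon entropies of the model-implied response
    probabilities for one person under a 2PL model. Illustrative only:
    the Ei, EMi, and EMRi person-fit statistics are defined differently
    in the article.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-np.sum(p * np.log(p) + (1 - p) * np.log(1 - p)))

a = np.ones(5)   # five items with unit discrimination
b = np.zeros(5)  # all difficulties at zero
mid = response_entropy(0.0, a, b)  # ability at the difficulties: maximal uncertainty
far = response_entropy(4.0, a, b)  # ability far above: near-deterministic responses
```

Entropy peaks where the items are least informative about the person's responses, which is what makes entropy-based quantities natural candidates for flagging response patterns that are more (or less) predictable than the model implies.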


2005 ◽  
Vol 28 (3) ◽  
pp. 264-282 ◽  
Author(s):  
Chih-Hung Chang ◽  
Bryce B. Reeve

This article provides an overview of item response theory (IRT) models and how they can be appropriately applied to patient-reported outcomes (PROs) measurement. Specifically, the following topics are discussed: (a) basics of IRT, (b) types of IRT models, (c) how IRT models have been applied to date, and (d) new directions in applying IRT to PRO measurements.


2019 ◽  
Vol 45 (3) ◽  
pp. 339-368 ◽  
Author(s):  
Chun Wang ◽  
Steven W. Nydick

Recent work on measuring growth with categorical outcome variables has combined the item response theory (IRT) measurement model with the latent growth curve model and extended the assessment of growth to multidimensional IRT models and higher order IRT models. However, there is a lack of synthesis studies that clearly evaluate the strengths and limitations of the different multilevel IRT models for measuring growth. This study aims to introduce the various longitudinal IRT models, including the longitudinal unidimensional IRT model, the longitudinal multidimensional IRT model, and the longitudinal higher order IRT model, which cover a broad range of applications in education and social science. Following a comparison of the parameterizations, identification constraints, strengths, and weaknesses of the different models, a real data example is provided to illustrate the application of different longitudinal IRT models to modeling students' growth trajectories on multiple latent abilities.

