The Estimation of Measurement Error in Panel Data

1970 ◽  
Vol 35 (1) ◽  
pp. 112 ◽  
Author(s):  
David E. Wiley ◽  
James A. Wiley


Author(s):  
Victor H Aguiar ◽  
Nail Kashaev

Abstract: A long-standing question about consumer behaviour is whether individuals’ observed purchase decisions satisfy the revealed preference (RP) axioms of the utility maximization theory (UMT). Researchers using survey or experimental panel data sets on prices and consumption to answer this question face the well-known problem of measurement error. We show that ignoring measurement error in the RP approach may lead to overrejection of the UMT. To solve this problem, we propose a new statistical RP framework for consumption panel data sets that allows for testing the UMT in the presence of measurement error. Our test is applicable to all consumer models that can be characterized by their first-order conditions. Our approach is non-parametric, allows for unrestricted heterogeneity in preferences, and requires only a centring condition on measurement error. We develop two applications that provide new evidence about the UMT. First, we find support in a survey data set for the dynamic and time-consistent UMT in single-individual households, in the presence of nonclassical measurement error in consumption. In the second application, we cannot reject the static UMT in a widely used experimental data set in which measurement error in prices is assumed to be the result of price misperception due to the experimental design. The first finding stands in contrast to the conclusions drawn from the deterministic RP test of Browning (1989, International Economic Review, 979–992). The second finding reverses the conclusions drawn from the deterministic RP tests of Afriat (1967, International Economic Review, 8, 67–77) and Varian (1982, Econometrica, 945–973).
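The deterministic RP tests of Afriat and Varian that this abstract contrasts with come down to checking the Generalized Axiom of Revealed Preference (GARP) on observed price–quantity pairs. A minimal sketch with hypothetical data (the prices `p` and bundles `q` below are invented for illustration, not from any of the cited data sets):

```python
import numpy as np

# Hypothetical panel: 3 observations, 2 goods.
p = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])  # prices
q = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])  # chosen bundles

cost = p @ q.T                 # cost[i, j] = p_i · q_j
budget = np.diag(cost).copy()  # p_i · q_i, what was actually spent

# Direct revealed preference: i R j when bundle q_j was affordable at obs i.
R = cost <= budget[:, None] + 1e-9

# Transitive closure (Floyd-Warshall on the boolean relation).
for k in range(len(p)):
    R = R | (R[:, [k]] & R[[k], :])

# GARP violation: i revealed preferred to j, yet q_i is strictly cheaper
# than q_j at prices p_j (so j is strictly directly revealed over i).
strict = cost < budget[:, None] - 1e-9   # strict[j, i]: p_j·q_i < p_j·q_j
garp_ok = not np.any(R & strict.T)
print(garp_ok)  # True: these hypothetical choices are consistent with UMT
```

In this deterministic form a single mismeasured price or quantity can flip `garp_ok` to `False`, which is the overrejection problem the statistical framework above addresses.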


2009 ◽  
Vol 39 (1) ◽  
pp. 293-326 ◽  
Author(s):  
Bruce Western ◽  
Deirdre Bloome

Regression-based studies of inequality model only between-group differences, yet often these differences are far exceeded by residual inequality. Residual inequality is usually attributed to measurement error or the influence of unobserved characteristics. We present a model, called variance function regression, that includes covariates for both the mean and variance of a dependent variable. In this model, the residual variance is treated as a target for analysis. In analyses of inequality, the residual variance might be interpreted as measuring risk or insecurity. Variance function regressions are illustrated in an analysis of panel data on earnings among released prisoners in the National Longitudinal Survey of Youth. We extend the model to a decomposition analysis, relating the change in inequality to compositional changes in the population and changes in coefficients for the mean and variance. The decomposition is applied to the trend in U.S. earnings inequality among male workers, 1970 to 2005.
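The core idea of variance function regression, one set of covariates for the conditional mean and another for the log conditional variance, can be sketched on simulated data. This simplified two-step version (OLS for the mean, then OLS of log squared residuals) is only a first approximation to a full likelihood-based fit, and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)

# Simulate data where the covariate shifts both the mean and the
# log variance: E[y|x] = 1 + 0.5x, log Var(y|x) = 0.3 + 0.4x.
y = 1.0 + 0.5 * x + rng.normal(size=n) * np.exp(0.5 * (0.3 + 0.4 * x))

X = np.column_stack([np.ones(n), x])

# Step 1: OLS for the mean function.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: regress log squared residuals on the same covariates; the
# slope estimates the variance-function coefficient (the intercept is
# shifted by E[log chi-squared(1)] ~ -1.27).
lam, *_ = np.linalg.lstsq(X, np.log(resid**2), rcond=None)

print(beta)  # mean coefficients, slope near 0.5
print(lam)   # variance-function slope near 0.4
```

The fitted variance function is the "residual variance as a target for analysis" idea: here larger `x` predicts not just higher earnings but also higher earnings risk.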


1989 ◽  
Vol 1 ◽  
pp. 25-60 ◽  
Author(s):  
Stanley Feldman

The problem of response instability in survey measures of policy positions has been studied for over 20 years without any apparent resolution. Two major interpretations remain: Philip Converse's nonattitudes model and a measurement error model. One reason why neither interpretation has as yet been rejected or well supported is that previous analyses have depended on three-wave panel data that do not contain sufficient information to assess the goodness-of-fit of the models and also provide unreliable estimates of the error variance for the issue questions. Using five-wave panel data, this article first re-estimates the measurement models for the issue positions to assess goodness-of-fit and then estimates models of response instability to help establish its determinants. Evidence consistent with both interpretations of response instability is found. It thus appears as if neither model can adequately deal with the empirical characteristics of opinion questions in panel data. In the conclusion, a third interpretation of the response instability problem is offered that better accounts for the empirical findings and is more consistent with our understanding of public opinion.


Author(s):  
Erik Meijer ◽  
Edward Oczkowski ◽  
Tom Wansbeek

Abstract: Measurement error biases OLS results. When the measurement error variance in absolute or relative (reliability) form is known, adjustment is simple. We link the (known) estimators for these cases to GMM theory and provide simple derivations of their standard errors. Our focus is on the test statistics. We show monotonic relations between the t-statistics and $$R^2$$s of the (infeasible) estimator if there were no measurement error, the inconsistent OLS estimator, and the consistent estimator that corrects for measurement error, and show the relation between the t-value and the magnitude of the assumed measurement error variance or reliability. We also discuss how standard errors can be computed when the measurement error variance or reliability is estimated, rather than known, and we indicate how the estimators generalize to the panel data context, where we have to deal with dependency among observations. By way of illustration, we estimate a hedonic wine price function for different values of the reliability of the proxy used for the wine quality variable.
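The reliability-based adjustment referred to here is the classic attenuation correction: with reliability $$\kappa = \mathrm{Var}(\xi)/\mathrm{Var}(x)$$ for true regressor $$\xi$$ and observed $$x$$, the OLS slope converges to $$\kappa\beta$$, so dividing by $$\kappa$$ restores consistency. A minimal simulated sketch (all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
xi = rng.normal(size=n)                 # true regressor, Var = 1
u = rng.normal(scale=0.5, size=n)       # classical measurement error
x = xi + u                              # observed, mismeasured regressor
y = 2.0 + 1.0 * xi + rng.normal(scale=0.3, size=n)

# Reliability: share of observed variance due to the true signal.
kappa = 1.0 / (1.0 + 0.5**2)            # Var(xi) / Var(x) = 0.8

X = np.column_stack([np.ones(n), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Attenuation: the OLS slope converges to kappa * beta; divide to correct.
b_corr = b_ols[1] / kappa

print(b_ols[1])  # ~ 0.8 (attenuated toward zero)
print(b_corr)    # ~ 1.0 (corrected)
```

The paper's point about test statistics can be seen in this setup: scaling the slope by a known constant rescales its standard error identically, so the correction changes the magnitude of the estimate without mechanically changing its t-value.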


2017 ◽  
Vol 46 (2) ◽  
pp. 308-335 ◽  
Author(s):  
Michelle Torres ◽  
Steven S. Smith

In their 2011 piece, Smith et al. argue that there is a set of fundamental or bedrock values that predict ideology and that are strongly influenced by genetics. These values are considered universal, stable, and less susceptible to environmental changes. Smith et al. propose a scale to measure such values: the Society Works Best Index (SWBI). This is an important contribution, but the SWBI requires further evaluation. Using novel panel data, we evaluate the measure, improve on the empirical application with a national panel, and suggest improvements in the scale. We find that the SWBI is no more stable than other measures of ideology and that the observed changes are attributed to measurement error and environmental factors. Furthermore, like many other political attitudes, its predictive power is mediated by levels of political interest.


2018 ◽  
Vol 54 (02) ◽  
pp. 1850002 ◽  
Author(s):  
Fang Wang ◽  
Shuo Chen ◽  
Dan Wang

The occasional “strike hard” campaigns against crime launched by the Chinese government provide an opportunity to isolate the separate effects of severity and certainty of punishment on the crime rate. The “strike hard” campaigns increase the severity of punishment while leaving its certainty unchanged. We use provincial panel data from 1988 to 2015 to examine the impacts of the two strategies on the crime rate with pooled mean group models. The empirical results show that a significant decrease in crime rates is associated with greater certainty of detection, but greater severity has no significant effect. A 1% increase in the detection rate (a measure of certainty) predicts an approximately 2.7% lower crime rate. The results are robust even after accounting for the endogenous nature of punishment policies and controlling for measurement error in the officially reported data.
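The elasticity reading of that result (a 1% rise in detection predicting roughly 2.7% less crime) corresponds to a log-log specification with province fixed effects. A minimal within-estimator sketch on simulated data, not the paper's pooled mean group estimator; the −2.7 coefficient is wired into the simulation and the panel dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_prov, n_years = 30, 28               # provinces x years (1988-2015)
a = rng.normal(size=n_prov)            # province fixed effects
log_det = rng.normal(size=(n_prov, n_years))          # log detection rate
log_crime = (a[:, None] - 2.7 * log_det
             + rng.normal(scale=0.2, size=(n_prov, n_years)))

# Within transformation: demeaning by province removes the fixed effects.
ld = log_det - log_det.mean(axis=1, keepdims=True)
lc = log_crime - log_crime.mean(axis=1, keepdims=True)

# Pooled within estimator of the elasticity of crime w.r.t. detection.
elasticity = (ld * lc).sum() / (ld**2).sum()
print(elasticity)  # ~ -2.7
```

In a log-log model the slope is directly the elasticity, which is why a 1% change in the detection rate maps to an (approximately) 2.7% change in crime.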

