Unit Nonresponse Bias in Inequality Measurement: Worldwide Analysis Using Luxembourg Income Study Database

2017
Author(s): Vladimir Hlasny

2018, Vol 37 (3), pp. 404–424
Author(s): Jessica M. E. Herzing, Annelies G. Blom

Research has shown that the non-Internet population is hesitant to respond to online survey requests. However, subgroups within the Internet population that have low digital affinity may also hesitate to respond to online surveys. This latter issue has received little scholarly attention despite its potentially detrimental effects on the external validity of online survey data. In this article, we explore the extent to which a person’s digital affinity contributes to nonresponse bias in the German Internet Panel, a probability-based online panel of the general population. Using a multidimensional classification of digital affinity, we predict response to the first online panel wave and participation across panel waves. We find that persons belonging to different classes of digital affinity have systematically different sociodemographic characteristics and show different voting behavior. In addition, initial response propensities vary by class of digital affinity, as do attrition patterns over time. Our results demonstrate the importance of digital affinity for reducing nonresponse bias during fieldwork and for postsurvey adjustments.
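The abstract motivates using classes of digital affinity in postsurvey adjustments but does not give estimation details. As an illustration only, a standard weighting-class nonresponse adjustment, with classes of digital affinity playing the role of adjustment cells, could be sketched as follows (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def weighting_class_adjust(base_w, cls, responded):
    """Weighting-class nonresponse adjustment: inflate each respondent's base
    weight by the inverse of the weighted response rate in its class, so that
    respondents represent the full sample within every class."""
    base_w = np.asarray(base_w, dtype=float)
    cls = np.asarray(cls)
    responded = np.asarray(responded, dtype=bool)

    adj = base_w[responded].copy()     # adjusted weights, respondents only
    labels = cls[responded]
    for c in np.unique(cls):
        in_c = cls == c
        # weighted response rate within class c
        rate = base_w[in_c & responded].sum() / base_w[in_c].sum()
        adj[labels == c] /= rate
    return adj
```

By construction the adjusted respondent weights sum, within each class, to the full-sample weight total of that class, which is the defining property of a weighting-class adjustment.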


Author(s): David Haziza, Sixia Chen, Yimeng Gao

Abstract In the presence of nonresponse, unadjusted estimators are vulnerable to nonresponse bias when the characteristics of respondents differ from those of nonrespondents. To reduce this bias, it is common practice to postulate a nonresponse model linking the response indicators to a set of fully observed variables. Estimated response probabilities, obtained by fitting the selected model, are then used to adjust the base weights. The resulting estimator, referred to as the propensity score-adjusted estimator, is consistent provided the nonresponse model is correctly specified. In this article, we propose a weighting procedure that may improve the efficiency of propensity score estimators for survey variables identified as key variables by making more extensive use of the auxiliary information available at the nonresponse treatment stage. Results from a simulation study suggest that the proposed procedure performs well in terms of efficiency when the data are missing at random and also achieves an effective bias reduction when the data are not missing at random. We further apply the proposed methods to the 2017–2018 National Health and Nutrition Examination Survey.
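The abstract does not spell out the authors' improved procedure, but the classical propensity score-adjusted estimator it builds on can be sketched as follows (a minimal illustration, assuming a logistic nonresponse model fit by Newton–Raphson; all names are illustrative, not from the paper):

```python
import numpy as np

def fit_logistic(X, r, iters=25):
    """Fit P(respond | x) = sigmoid([1, x] @ beta) by Newton-Raphson (IRLS)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1.0 - p)                       # IRLS weights
        H = X1.T @ (X1 * W[:, None])            # observed information matrix
        beta += np.linalg.solve(H, X1.T @ (r - p))
    return beta

def propensity_adjusted_mean(y, X, r, base_w):
    """Propensity score-adjusted estimator of the population mean of y.
    y is observed only for respondents (r == 1); X is fully observed for the
    whole sample; base_w are the design (base) weights."""
    beta = fit_logistic(X, r)
    X1 = np.column_stack([np.ones(len(X)), X])
    p_hat = 1.0 / (1.0 + np.exp(-X1 @ beta))    # estimated response probabilities
    resp = r == 1
    w_adj = base_w[resp] / p_hat[resp]          # inflate base weights by 1 / p_hat
    return np.sum(w_adj * y[resp]) / np.sum(w_adj)
```

Under a correctly specified response model (data missing at random given X), the adjusted mean is consistent, whereas the unweighted respondent mean generally is not.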


2005
Author(s): Anton Korinek, Johan A. Mistiaen, Martin Ravallion

2013, Vol 29 (3), pp. 329–353
Author(s): J. Michael Brick

Abstract This article reviews unit nonresponse in cross-sectional household surveys, the consequences of nonresponse for the bias of the estimates, and methods of adjusting for it. We describe the development of models for nonresponse bias and their utility, with particular emphasis on the role of response propensity modeling and its assumptions. The article explores the close connection between data collection protocols, estimation strategies, and the resulting nonresponse bias in the estimates. We conclude with comments on the current state of the art and the need for future developments that expand our understanding of the response phenomenon.


2007, Vol 136 (1), pp. 213–235
Author(s): Anton Korinek, Johan A. Mistiaen, Martin Ravallion

2016, Vol 21 (2), pp. 293–298
Author(s): Yeongbae Choe, Daniel R. Fesenmaier

2014
Author(s): Fawzi Al-Nassir, Eric Falk, Owen Hung, Shoshana Magazine, Timothy Markheim, ...
