Respondent Burden Effects on Item Non-Response and Careless Response Rates: An Analysis of Two Types of Surveys

Mathematics, 2021, Vol 9 (17), pp. 2035
Author(s):  
Álvaro Briz-Redón

Respondent burden refers to the effort a respondent must invest to answer a questionnaire. Although the concept was introduced decades ago, few studies have focused on detecting such a burden quantitatively. In this paper, a face-to-face survey and a telephone survey conducted in Valencia (Spain) are analyzed. The presence of burden is studied in terms of both item non-response rates and careless response rates. In particular, two moving-window statistics, based on the coefficient of unalikeability and the average longstring index, are proposed for characterizing careless responding. Item non-response and careless response rates are modeled for each survey with mixed-effects models that include respondent-level and question-level covariates, as well as temporal random effects to assess whether respondent burden builds up during the questionnaire. The results suggest that respondents' sociodemographic characteristics and the typology of the question affect item non-response and careless response rates. Moreover, the estimates of the temporal random effects indicate that item non-response and careless response rates are time-varying, suggesting the presence of respondent burden. In particular, an increasing trend in item non-response rates was found in the telephone survey, which supports the burden hypothesis. Regarding careless responding, despite some temporal variation, no clear trend was identified.
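The two careless-responding statistics named above have simple definitions: the coefficient of unalikeability is the proportion of response pairs within a window that differ (equivalently, one minus the sum of squared category proportions), and the longstring index is the length of the longest run of identical consecutive responses. The sketch below illustrates both over a moving window; the window width, example responses, and function names are our own assumptions, not the paper's implementation.

```python
from collections import Counter

def unalikeability(values):
    """Coefficient of unalikeability: proportion of pairs of
    responses that differ (1 - sum of squared category proportions)."""
    n = len(values)
    counts = Counter(values)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def longstring(values):
    """Longstring index: length of the longest run of identical
    consecutive responses."""
    best = run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def moving_window(values, width, stat):
    """Apply a statistic over sliding windows of the response sequence."""
    return [stat(values[i:i + width]) for i in range(len(values) - width + 1)]

# hypothetical Likert-style response sequence from one respondent
responses = [3, 3, 3, 3, 2, 4, 4, 4, 4, 4, 4, 1]
print(moving_window(responses, 6, unalikeability))  # low values = answers barely vary
print(moving_window(responses, 6, longstring))      # high values = long identical runs
```

Low unalikeability together with a high longstring index in a window flags stretches where a respondent may be answering carelessly.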

2020, Vol 228 (1), pp. 14-24
Author(s):
Tanja Burgard, Michael Bošnjak, Nadine Wedderhoff

Abstract. A meta-analysis was performed to determine whether response rates to online psychology surveys have decreased over time, and how specific design characteristics (contact mode, burden of participation, and incentives) affect response rates. The meta-analysis is restricted to samples of adults with depression or generalized anxiety disorder. Time and study-design effects are tested using mixed-effects meta-regressions as implemented in the metafor package in R. The mean response rate of the 20 studies fulfilling our inclusion criteria is approximately 43%. Response rates are lower in more recently conducted surveys and in surveys employing longer questionnaires. Furthermore, we found that personal invitations, for example via telephone or face-to-face contact, yielded higher response rates than e-mail invitations. As predicted by reinforcement sensitivity theory, no effect of incentives on survey participation could be observed in this specific group (which scores high on neuroticism).
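The mixed-effects meta-regressions fitted with metafor have a simple classical core. Purely as an illustration (the study rates and sample sizes below are invented, not the 20 studies analyzed here), this sketch pools logit-transformed response rates with the DerSimonian-Laird random-effects estimator:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the classical DerSimonian-Laird
    tau^2 estimator: pooled effect, its standard error, and tau^2."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# hypothetical studies: observed response rates and sample sizes
rates = [0.43, 0.38, 0.51, 0.47]
ns = [200, 150, 300, 120]
effects = [math.log(p / (1 - p)) for p in rates]          # logit transform
variances = [1.0 / (n * p * (1 - p)) for n, p in zip(ns, rates)]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(1 / (1 + math.exp(-pooled)))  # back-transformed pooled response rate
```

A meta-regression additionally regresses the per-study effects on moderators such as survey year or questionnaire length, which is what distinguishes the time and design-effect tests above from simple pooling.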


2017
Author(s):
Mirko Thalmann, Marcel Niklaus, Klaus Oberauer

Using mixed-effects models and Bayesian statistics has been advocated by statisticians in recent years. Mixed-effects models allow researchers to account adequately for the structure in the data. Bayesian statistics, in contrast to frequentist statistics, can state the evidence in favor of or against an effect of interest. For frequentist methods, it is known that mixed models can lead to serious over-estimation of the evidence in favor of an effect (i.e., an inflated Type-I error rate) when they fail to include individual differences in the effect sizes of predictors ("random slopes") that are actually present in the data. Here, we show through simulation that the same problem exists for Bayesian mixed models. Yet, at present there is no easy-to-use application for estimating Bayes factors for mixed models with random slopes on continuous predictors. We close this gap by introducing a new R package called BayesRS. We tested its functionality in four simulation studies, which show that BayesRS offers a reliable and valid tool for computing Bayes factors. BayesRS also allows users to account for correlations between random effects. In a fifth simulation study we show, however, that doing so leads to a slight underestimation of the evidence in favor of an actually present effect. We therefore recommend modeling correlations between random effects only when they are of primary interest and when the sample size is large enough. BayesRS is available at https://cran.r-project.org/web/packages/BayesRS/; R code for all simulations is available at https://osf.io/nse5x/?view_only=b9a7caccd26a4764a084de3b8d459388.
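BayesRS itself estimates Bayes factors by posterior sampling in R. Purely for illustration, the sketch below generates data with the random-slope structure the abstract warns about, and shows the coarse BIC-based Bayes-factor approximation sometimes used as a shortcut; all function names and parameter defaults are our own assumptions, not the package's API.

```python
import math
import random

def simulate_random_slopes(n_subj=30, n_trials=40, beta=0.0, slope_sd=0.5):
    """Generate (subject, x, y) rows in which the predictor's effect varies
    across subjects ("random slopes"). A model that omits slope_sd treats
    all subjects as sharing one slope and overstates the evidence for beta."""
    data = []
    for subj in range(n_subj):
        subj_slope = beta + random.gauss(0, slope_sd)  # this subject's effect
        for _ in range(n_trials):
            x = random.gauss(0, 1)
            y = subj_slope * x + random.gauss(0, 1)
            data.append((subj, x, y))
    return data

def bf10_from_bic(bic_null, bic_alt):
    """Coarse Bayes-factor approximation from two models' BICs:
    BF10 ~ exp((BIC_null - BIC_alt) / 2)."""
    return math.exp((bic_null - bic_alt) / 2)

data = simulate_random_slopes()
print(len(data))                   # n_subj * n_trials rows
print(bf10_from_bic(12.0, 10.0))   # BICs favoring the alternative -> BF10 > 1
```

The abstract's point is that both the null and alternative models entering such a comparison must include the random slopes; comparing fixed-slope models on data like these yields spuriously large evidence for beta.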


Author(s):
Diana Lewis, Heather Castleden, Sheila Francis, Kim Strickland, Colleen Denny

2019, Vol 29 (Supplement_4)
Author(s):
E Lilja, A Seppänen, H Kuusio

Abstract Background Previous population surveys among people with a foreign background (PFB) in Finland have achieved good response rates (62%-66%) when relying mainly on face-to-face interviews. A cross-sectional population survey (FinMONIK) explored more cost-efficient ways to collect data on PFB. Methods Data collection was conducted in Finland between May 2018 and January 2019. The random sample consisted of 12 877 18-64-year-olds (after removing over-coverage), stratified by region. First, a letter containing a link to the online survey, available in 18 languages, was sent to the participants. After two reminders, the questionnaire was sent twice on paper to non-respondents. Finally, supplementary phone interviews were carried out by multilingual interviewers. All participants could enter a draw to win gift cards. Results The response rate (RR) for the online survey was 34%. RR was highest among those who had lived in Finland for 5 years or less (43%) and lowest among the divorced (23%) and Estonians (27%). The paper questionnaire was preferred mostly by older age groups, increasing the RR of 40-64-year-olds from 31% to 48%. Telephone interviews increased the RR by five percentage points, making the final RR for the survey 53%. Persons born in the EU and North America responded most frequently (58%), whereas RR was lowest among migrants of Sub-Saharan African origin (47%). RR was particularly low (42%) for those who had moved to Finland at ages 0-6. Conclusions In surveys among PFB, relatively good response rates can be obtained by using alternative data-collection methods instead of costly and time-consuming face-to-face interviews. Age and marital status seemed to affect the preferred survey format. The overall RR varied by country of origin. Key messages A good response rate can be obtained without face-to-face interviews in migrant population surveys.
Migrant population surveys can be conducted more efficiently by combining a variety of methods.
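The mode-by-mode gains reported above are simple cumulative ratios against the drawn sample. In this sketch the per-mode response counts are hypothetical, back-calculated from the reported percentages (34% online rising to 53% overall), not figures from the FinMONIK data:

```python
def cumulative_rr(sample_size, responses_by_mode):
    """Cumulative response rate (whole percent) after each sequential
    data-collection mode, in the order the modes were fielded."""
    total = 0
    rates = {}
    for mode, n in responses_by_mode.items():
        total += n
        rates[mode] = round(100 * total / sample_size)
    return rates

# hypothetical counts consistent with the reported 34% -> 53% progression
print(cumulative_rr(12877, {"online": 4378, "paper": 1800, "phone": 644}))
```

Each added mode only reaches the remaining non-respondents, which is why the marginal gain shrinks from mode to mode even when the later modes are more personal.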


Author(s):
Colin Mason, Tiago Botelho, Justyna Zygmunt

A major focus of research on business angels has examined their decision-making processes and investment criteria. As business angels reject most opportunities they receive, this article explores the reasons informing such decisions. In view of angel heterogeneity, investment opportunities might be expected to be rejected for differing reasons. Two sources of data are used to examine this issue. Face-to-face interviews with 30 business angels in Scotland and Northern Ireland provided information on typical ‘deal killers’. This was complemented by an Internet survey that attracted responses from 238 business angels from across the UK. The findings confirm that the main reason for rejection relates to the entrepreneur/management team. However, angel characteristics do not explain the number of reasons given for opportunity rejection nor do they predict the reasons for rejecting investment opportunities. This could be related to the increasing trend for business angels to join organised groups which, in turn, leads to the development of a shared repertoire of investment approaches. We therefore suggest the concept of ‘communities-of-practice’ as an explanation for this finding.

