Data quality in online surveys: Essays on improving respondent participation and response effort

2021 ◽  
Author(s):  
Kylie Anne Brosnan


2018 ◽  
Vol 60 (1) ◽  
pp. 32-49 ◽  
Author(s):  
Mingnan Liu ◽  
Laura Wronski

This study examines the use of trap questions as indicators of data quality in online surveys. Trap questions are intended to identify respondents who are not paying close attention to survey questions, and who are therefore likely providing sub-optimal responses not only to the trap question itself but to other questions in the survey. We conducted three experiments using an online non-probability panel. In the first experiment, we examine whether responses to surveys with one trap question differ from responses to surveys with two trap questions. In the second experiment, we examine responses to surveys with trap questions of varying difficulty. In the third experiment, we test the level of difficulty, the placement of the trap question, and other forms of attention checks. In all studies, we correlate the responses to the trap question(s) with other data quality checks, most of which were derived from the literature on satisficing. We also compare responses to several substantive questions by whether respondents passed or failed the trap questions, which tells us whether participants who failed gave consistently different answers from those who passed. We find that the rate of passing/failing various trap questions varies widely, from 27% to 87% among the types we tested. We also find evidence that some types of trap questions are more strongly correlated with other data quality measures than others.
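As an illustration of the kind of data-quality checks described above, the sketch below shows how trap-question pass rates might be computed and correlated with two common satisficing indicators, straightlining and speeding. It is not the authors' code; the file name and column names (trap_passed, duration_sec, q1–q5) are hypothetical.

```python
# Illustrative sketch only: correlating trap-question pass/fail with other
# data-quality indicators drawn from the satisficing literature.
import pandas as pd

df = pd.read_csv("survey_responses.csv")      # hypothetical data export

grid_items = ["q1", "q2", "q3", "q4", "q5"]   # hypothetical grid items

# Straightlining: identical answers across all items in a grid.
df["straightlined"] = df[grid_items].nunique(axis=1).eq(1).astype(int)

# Speeding: completion time below half the median duration.
df["speeder"] = (df["duration_sec"] < df["duration_sec"].median() / 2).astype(int)

# Pass rate for the trap question and its association with the other checks.
print("Trap pass rate:", df["trap_passed"].mean())
print(df[["trap_passed", "straightlined", "speeder"]].corr())
```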


2018 ◽  
Vol 37 (3) ◽  
pp. 435-445
Author(s):  
Rebecca Hofstein Grady ◽  
Rachel Leigh Greenspan ◽  
Mingnan Liu

Across two studies, we aimed to determine the row and column size in matrix-style questions that best optimizes participant experience and data quality for computer and mobile users. In Study 1 (N = 2,492), respondents completed 20 questions (comprising four short scales) presented in a matrix grid (converted to item-by-item format on mobile phones). We varied the number of rows (5, 10, or 20) and columns (3, 5, or 7) of the matrix on each page. Outcomes included both data quality (straightlining, item skip rate, and internal reliability of scales) and survey experience measures (dropout rate, rating of survey experience, and completion time). Results for row size revealed that dropout rate and reported survey difficulty increased as row size increased. For column size, seven columns increased the completion time of the survey, while three columns produced lower scale reliability. There was no interaction between row and column size. The best overall size tested was a 5 × 5 matrix. In Study 2 (N = 2,570), we tested whether the effects of row size replicated when using a single 20-item scale that crossed page breaks and found that participant survey ratings were still best in the five-row condition. These results suggest that having around five rows or potentially fewer per page, and around five columns for answer options, gives the optimal survey experience, with equal or better data quality, when using matrix-style questions in an online survey. These recommendations will help researchers gain the benefits of using matrices in their surveys with the fewest downsides of the format.
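A minimal sketch, under assumed column names, of how the data-quality outcomes named in this abstract (straightlining, item skip rate, and internal scale reliability via Cronbach's alpha) could be computed separately for each row-by-column condition; it is illustrative only, not the authors' analysis.

```python
# Illustrative sketch with hypothetical column names (n_rows, n_cols, s1_*).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame of item scores (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

df = pd.read_csv("matrix_experiment.csv")                # hypothetical export
scale_items = ["s1_1", "s1_2", "s1_3", "s1_4", "s1_5"]   # one short scale

for (rows, cols), grp in df.groupby(["n_rows", "n_cols"]):
    summary = {
        "straightlining": grp[scale_items].nunique(axis=1).eq(1).mean(),
        "skip_rate": grp[scale_items].isna().mean().mean(),
        "alpha": cronbach_alpha(grp[scale_items]),
    }
    print(f"{rows} x {cols}:", summary)
```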


2011 ◽  
Vol 53 (3) ◽  
pp. 369-390 ◽  
Author(s):  
Elisabeth Brüggen ◽  
Martin Wetzels ◽  
Ko De Ruyter ◽  
Niels Schillewaert

The majority of online research is now conducted via discontinuous online access panels, which promise high response rates, sampling control, access to populations that are hard to reach, and detailed information about respondents. To sustain a critical mass of respondents, overcome panel attrition, and recruit new panel members, marketers must understand how they can predict and explain what motivates people to participate repeatedly in online surveys. Using the newly developed survey participation inventory (SPI) measure, we identify three clusters of participants, characterised as voicing assistants, reward seekers and intrinsics. Our results suggest that most online surveys are filled out by intrinsically motivated respondents, who show higher participation rates, response effort and performance; incentives do not constitute an important response motive.
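A brief illustrative sketch of the kind of cluster analysis implied here: respondents are grouped on participation-motive scores into three clusters, analogous to the voicing assistants, reward seekers, and intrinsics segments. The item names, file name, and the use of k-means are assumptions for illustration, not the SPI procedure itself.

```python
# Illustrative sketch only: three-cluster segmentation on motive scores.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("spi_scores.csv")            # hypothetical export
motive_cols = ["voice", "reward", "intrinsic", "curiosity", "obligation"]

X = StandardScaler().fit_transform(df[motive_cols])
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Mean motive profile per cluster, used to label the segments.
print(df.groupby("cluster")[motive_cols].mean())
```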


2021 ◽  
pp. 147078532098182
Author(s):  
Catherine A Roster

This study explored the influence of Internet memes, specifically image macros of animals with motivational captions, on survey respondents' engagement with the survey-taking experience and subsequent data quality. A web-based field experiment was conducted with online survey respondents from two sample sources: one crowdsourced and one a commercially managed online panel. Half of the respondents from each sample source were randomly selected to see the memes at various points throughout the survey; the other half did not. Direct and indirect measures of survey engagement and response quality were used to assess the effectiveness of the memes. Quantitative results were inconclusive, with few significant differences in measures of engagement and data quality between respondents in the meme and control conditions in either sample source. However, in both sample groups, qualitative open-ended comments from respondents who saw the memes revealed that memes give respondents a fun break and relief from the cognitive burdens of answering online survey questions. In conclusion, memes represent a relatively inexpensive and easy way for survey researchers to connect with respondents and show appreciation for their time and effort.


Field Methods ◽  
2021 ◽  
pp. 1525822X2110122
Author(s):  
Carmen M. Leon ◽  
Eva Aizpurua ◽  
Sophie van der Valk

Previous research shows that the direction of rating scales can influence participants' response behavior. Studies also suggest that the device used to complete online surveys might affect susceptibility to these effects because of differences in question layout (e.g., horizontal grids vs. vertical individual questions). This article contributes to previous research by examining scale direction effects in an online multi-device survey conducted with panelists in Spain. In this experiment, respondents were randomly assigned to one of two groups in which the scale direction was manipulated (incremental vs. decremental). Respondents completed the questionnaire using the device of their choosing (57.8% used PCs, 36.5% used smartphones, and 5.7% used tablets). The results show that scale direction influenced response distributions but did not significantly affect data quality. In addition, our findings indicate that scale direction effects were comparable across devices. Findings are discussed and implications are highlighted.
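The following sketch, with hypothetical file and column names, shows one simple way to test whether an item's response distribution differs by scale direction, overall and within each device type; it is not the authors' analysis.

```python
# Illustrative sketch only: chi-square tests of response distribution by
# scale direction (incremental vs. decremental), overall and per device.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("scale_direction.csv")       # hypothetical export

def direction_effect(data: pd.DataFrame, item: str) -> float:
    """p-value of a chi-square test of item responses by scale direction."""
    table = pd.crosstab(data[item], data["scale_direction"])
    return chi2_contingency(table)[1]

print("Overall:", direction_effect(df, "q_trust"))
for device, grp in df.groupby("device"):      # PC / smartphone / tablet
    print(device, direction_effect(grp, "q_trust"))
```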


2020 ◽  
pp. 089443932090706
Author(s):  
Philipp E. Sischka ◽  
Jean Philippe Décieux ◽  
Alexandra Mergener ◽  
Kristina M. Neufang ◽  
Alexander F. Schmidt

Forced answering (FA) is a frequently used answer format in online surveys that requires respondents to answer each question in order to proceed through the questionnaire. The underlying rationale is to decrease the amount of missing data. Despite its popularity, empirical research on the impact of FA on respondents' answering behavior is scarce and has generated mixed findings. In fact, some quasi-experimental studies showed that FA has detrimental consequences such as increased survey dropout rates and faking behavior. Notably, a theoretical psychological process driving these effects has hitherto not been identified. Therefore, the aim of the present study was twofold: first, we sought to experimentally replicate the detrimental effects of FA on online questionnaire data quality; second, we tried to uncover an explanatory psychological mechanism. Specifically, we hypothesized that FA effects are mediated by reactance. Zero-order effects showed that FA increased state reactance and questionnaire dropout and reduced answer length in open-ended questions. Results of survival and mediation analyses corroborated the negative effects of FA on data quality and the proposed psychological process.
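A minimal sketch, assuming hypothetical column names (forced, reactance, dropout), of a simple regression-based mediation check for the proposed process FA → state reactance → dropout; the authors' actual survival and mediation models are not reproduced here.

```python
# Illustrative sketch only: two-step mediation check with hypothetical data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fa_experiment.csv")         # hypothetical export
                                              # forced = 1 for FA condition

# Path a: does the FA condition raise state reactance?
path_a = smf.ols("reactance ~ forced", data=df).fit()

# Paths b and c': does reactance predict dropout once FA is controlled for?
paths_bc = smf.logit("dropout ~ forced + reactance", data=df).fit()

print(path_a.params, paths_bc.params, sep="\n")
```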


2012 ◽  
Vol 54 (5) ◽  
pp. 613-633 ◽  
Author(s):  
Theo Downes-Le Guin ◽  
Reg Baker ◽  
Joanne Mechling ◽  
Erica Ruyle

This paper describes an experiment in which a single questionnaire was fielded in four different styles of presentation: Text Only, Decoratively Visual, Functionally Visual and Gamified. Respondents were randomly assigned to only one presentation version. To understand the effect of presentation style on survey experience and data quality, we compared response distributions, respondent behaviour (such as time to complete), and self-reports regarding the survey experience and level of engagement across the four experimental presentations. While the Functionally Visual and Gamified treatments produced higher satisfaction scores from respondents, we found no real differences in respondent engagement measures. We also found few differences in response patterns.
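As a hedged illustration (hypothetical file and column names, not the authors' code), the sketch below compares completion time across the four presentation styles with a one-way ANOVA and tabulates the response distribution of one item by condition.

```python
# Illustrative sketch only: comparing outcomes across four presentation styles.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("presentation_experiment.csv")   # hypothetical export

groups = [grp["completion_sec"].dropna()
          for _, grp in df.groupby("presentation_style")]
print("Completion time ANOVA:", f_oneway(*groups))

# Response distribution of one substantive item by presentation style.
print(pd.crosstab(df["q_brand_rating"], df["presentation_style"],
                  normalize="columns"))
```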


Author(s):  
Kartik Pashupati ◽  
Pushkala Raman

This chapter presents an overview of gamification in the domain of market research, with a specific focus on digital data collection methods such as online surveys. The problems faced by the market research industry are outlined, followed by a discussion of why gamification has been proposed as a way to overcome some of these challenges. The literature on gamification is reviewed, with a focus on results from empirical studies investigating the impact of gamification on outcome variables such as data quality and respondent engagement. Finally, the authors present results from an original study conducted in 2013 that compared a conventional (text-dominant) survey with a gamified version of the same survey.


Psihologija ◽  
2015 ◽  
Vol 48 (4) ◽  
pp. 311-326 ◽  
Author(s):  
Jean Décieux ◽  
Alexandra Mergener ◽  
Kristina Neufang ◽  
Philipp Sischka

Online surveys have become a popular method of data gathering for many reasons, including low costs and the ability to collect data rapidly. However, online data collection is often conducted without adequate attention to implementation details. One example is the frequent use of the forced answering option, which requires the respondent to answer each question in order to proceed through the questionnaire. Avoiding missing data is usually the rationale for using forced answering. However, we suggest that the costs of a reactance effect, in the form of reduced answer quality and unit nonresponse, may be high, because respondents typically have plausible reasons for not answering questions. The objective of the study reported in this paper was to test the influence of forced answering on dropout rates and data quality. The results show that requiring participants to answer every question increases dropout rates and decreases the quality of answers. Our findings suggest that the desire for a complete data set has to be balanced against the consequences of reduced data quality.

