Web survey paradata on response time outliers

2018 ◽  
Vol 15 (1) ◽  
Author(s):  
Miha Matjašič ◽  
Vasja Vehovar ◽  
Katja Lozar Manfreda

In the last two decades, survey researchers have intensively used computerised methods for the collection of different types of paradata, such as keystrokes, mouse clicks and response times, to evaluate and improve survey instruments as well as to understand the survey response process. With the growing popularity of web surveys, the importance of paradata has further increased. Within this context, response time measurement is the prevailing paradata approach. Papers typically analyse the time (measured in milliseconds or seconds) a respondent needs to answer a certain item, question, page or questionnaire. One of the key challenges when analysing the response time is to identify and separate units that are answering too quickly or too slowly. These units can have a poor response quality and are typically labelled as response time outliers. This paper focuses on approaches for identifying and processing response time outliers. It presents a systematic overview of scientific papers on response time outliers in web surveys. The key observed characteristics of the papers are the approaches used, the level of time measurement, the processing of response time outliers and the relationship between response time and response quality. The results show that knowledge on response time outliers is scattered, inconsistent and lacking systematic comparisons of approaches. Consequently, there is a need to improve and upgrade the knowledge on this issue and to develop new approaches that will overcome existing deficiencies and inconsistencies in identifying and dealing with response time outliers.
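The cutoff rules surveyed in such papers vary, but a common family flags respondents whose log-transformed times lie too far from a central value. A minimal sketch of one robust variant, a MAD-based z-score on log response times (the function name and threshold are illustrative, not a rule prescribed by the paper):

```python
import math
import statistics

def flag_rt_outliers(times_s, threshold=3.5):
    """Flag response-time outliers via a robust (MAD-based) z-score
    on log-transformed times. times_s: seconds per respondent."""
    logs = [math.log(t) for t in times_s]
    med = statistics.median(logs)
    mad = statistics.median(abs(x - med) for x in logs)
    # 0.6745 scales the MAD to be comparable to a standard deviation
    return [0.6745 * abs(x - med) / mad > threshold for x in logs]

times = [12.0, 9.5, 11.2, 0.8, 10.4, 95.0, 8.7, 13.1]
print(flag_rt_outliers(times))  # the speeder (0.8 s) and laggard (95 s) are flagged
```

A robust center and spread are used here because extreme speeders and laggards inflate a plain mean/SD cutoff enough to mask themselves.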


2017 ◽  
Vol 36 (3) ◽  
pp. 369-378 ◽  
Author(s):  
Jan Karem Höhne ◽  
Stephan Schlosser

Web surveys are commonly used in social research because they are usually cheaper, faster, and simpler to conduct than other modes. They also enable researchers to capture paradata such as response times. In particular, determining proper cutoff values to define outliers in response time analyses has proven to be an intricate challenge; in fact, to a certain degree, researchers determine these values arbitrarily. In this study, we use “SurveyFocus (SF)”—a paradata tool that records the activity of the web-survey pages—to assess outlier definitions based on response time distributions. Our analyses reveal that these common procedures provide relatively sufficient results. However, they are unable to detect all respondents who temporarily leave the survey, which biases the response times. Therefore, we recommend a two-step procedure, consisting of the utilization of SF together with a common outlier definition, to attain a more appropriate analysis and interpretation of response times.


Field Methods ◽  
2017 ◽  
Vol 29 (4) ◽  
pp. 365-382 ◽  
Author(s):  
Jan Karem Höhne ◽  
Stephan Schlosser ◽  
Dagmar Krebs

Measuring attitudes and opinions with agree/disagree (A/D) questions is a common method in social research because it appears possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require considerable cognitive processing. Item-specific (IS) questions, in contrast, offer content-related response categories, implying less cognitive processing. To investigate the cognitive effort and response quality associated with A/D and IS questions, we conducted a web-based experiment with 1,005 students. Cognitive effort was assessed by response times and answer changes; response quality was assessed by indicators such as dropouts. According to our results, single IS questions require higher cognitive effort than single A/D questions in terms of response times. Moreover, our findings show substantial differences in the processing of single and grid questions.


Author(s):  
Jaime R. Carbonell ◽  
Jerome I. Elkind ◽  
Raymond S. Nickerson

One of the most important problems in the design and/or operation of a computer utility is to obtain dynamical characteristics that are acceptable and convenient to the on-line user. This paper is concerned with the problems of access to the computer utility, response time and its effect upon conversational use of the computer, and the effects of load on the system. Primary attention is placed upon response time; rather than a single measure, a set of response times should be measured in a given computer utility, corresponding to the different types of operations requested. It is assumed that the psychological value of short response time stems from a subjective cost measure of the user's own time, largely influenced by the value of concurrent tasks being postponed. A measure of cost (to the individual and/or his organization) of the time-on-line required to perform a task might thus be derived. More subtle is the problem of the acceptability to the user of given response times. This acceptability is a function of the service requested (e.g., length of computation) and of variability with respect to expectations, due both to uncertainty in the user's estimation and to variations in the response time originating from variable loads on the system. An effort should be made by computer-utility designers to include dynamic characteristics (such as prediction of loads and their effects) among their design specifications.


2021 ◽  
pp. 089443932110329
Author(s):  
Amanda Fernández-Fontelo ◽  
Pascal J. Kieslich ◽  
Felix Henninger ◽  
Frauke Kreuter ◽  
Sonja Greven

Survey research aims to collect robust and reliable data from respondents. However, despite researchers’ efforts in designing questionnaires, survey instruments may be imperfect and question structure not as clear as it could be, thus creating a burden for respondents. If it were possible to detect such problems, this knowledge could be used to predict problems in a questionnaire during pretesting, inform real-time interventions through responsive questionnaire design, or indicate and correct measurement error after the fact. Previous research has used paradata, specifically response times, to detect difficulties and help improve user experience and data quality. Today, richer data sources are available, for example, movements respondents make with their mouse, as an additional detailed indicator of the respondent–survey interaction. This article uses machine learning techniques to explore the predictive value of mouse-tracking data regarding a question’s difficulty. We use data from a survey on respondents’ employment history and demographic information, in which we experimentally manipulate the difficulty of several questions. Using measures derived from mouse movements, we predict whether respondents have answered the easy or the difficult version of a question, using and comparing several state-of-the-art supervised learning methods. We also develop a personalization method that adjusts for respondents’ baseline mouse behavior and evaluate its performance. For all three manipulated survey questions, we find that including the full set of mouse-movement measures and accounting for individual differences in these measures improve prediction performance over response-time-only models.
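Measures derived from mouse movements typically include quantities such as total path length, direction reversals, and task duration. A minimal sketch of how such measures might be computed from raw (time, x, y) samples (the feature names and the example track are illustrative, not the article's actual feature set):

```python
import math

def mouse_features(track):
    """Compute simple mouse-movement measures from a list of
    (t_seconds, x, y) cursor samples, of the kind used as
    predictors of question difficulty."""
    # total distance the cursor travelled between samples
    path = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
    # direction reversals along the x-axis ("x-flips")
    dxs = [x2 - x1 for (_, x1, _), (_, x2, _) in zip(track, track[1:])]
    flips = sum(1 for a, b in zip(dxs, dxs[1:]) if a * b < 0)
    duration = track[-1][0] - track[0][0]
    return {"path_length": path, "x_flips": flips, "duration_s": duration}

track = [(0.0, 0, 0), (0.2, 50, 10), (0.4, 30, 40), (0.6, 80, 60)]
print(mouse_features(track))
```

Vectors of such per-respondent features would then be fed to a supervised classifier, alongside response time, to predict the question version.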


2019 ◽  
Author(s):  
Emir Efendic ◽  
Philippe van de Calseyde ◽  
Anthony M Evans

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult, and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy, and effort is therefore uncorrelated with the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.


2006 ◽  
Vol 53 (4-5) ◽  
pp. 439-447 ◽  
Author(s):  
L. Rieger ◽  
J. Alex ◽  
W. Gujer ◽  
H. Siegrist

A model for the response time of aeration systems at WWTPs is proposed. It includes the delays caused by the air supply system (consisting of blowers, throttles and pipes), the rise time of the air bubbles and all control loops except the master DO controller. Besides a description of the required step-change experiments, different approaches to model calibration are given, depending on the available data. Moreover, the parameters for the oxygen transfer and the response time of the aeration system model are not clearly identifiable. The model can be used for simulation studies which compare different types of controllers under changing loading and process conditions. The results from full-scale experiments at three different plants show that the response times of the aeration systems are in the range of 4–5 min. Taking all processes and time constants into account, some 30 min are needed to reach a new steady state after a step change of the airflow rate.


2010 ◽  
Vol 1 (4) ◽  
pp. 66-78 ◽  
Author(s):  
Isabel de la Torre Díez ◽  
Francisco Javier Díaz Pernas ◽  
Miguel López Coronado ◽  
Roberto Hornero Sánchez ◽  
María Isabel López Gálvez ◽  
...  

Response time measurement of a Web system is critically important to evaluate its performance. This response time is one of the main barriers usually found in the implementation of an effective Electronic Health Records (EHRs) system. The database selected will affect the system performance. This paper presents a comparison of the response times of an EHRs Web system, TeleOftalWeb, using different databases. To calculate these times, an M/M/1 queuing model is used. Four databases were selected: Oracle 10g, dbXML 2.0, Xindice 1.2, and eXist 1.1.1. The final objective of the comparison is to choose the database system that results in the lowest response time for TeleOftalWeb.
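For reference, the mean response time of an M/M/1 queue is W = 1/(μ − λ), where λ is the request arrival rate and μ the service rate. A minimal sketch of comparing candidate databases by this formula (the rates and names are hypothetical, not TeleOftalWeb's measured values):

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1/(mu - lambda) of an M/M/1 queue.
    Requires arrival_rate < service_rate for a stable queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical service rates (requests/s) for two candidate databases:
for name, mu in [("db_a", 25.0), ("db_b", 40.0)]:
    print(name, mm1_response_time(arrival_rate=20.0, service_rate=mu))
```

At a fixed load of 20 requests/s, the faster server (μ = 40) yields a mean response time of 0.05 s versus 0.2 s, illustrating why W blows up as λ approaches μ.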


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 473 ◽  
Author(s):  
M. Hernaiz-Guijarro ◽  
J. C. Castro-Palacio ◽  
E. Navarro-Pardo ◽  
J. M. Isidro ◽  
P. Fernández-de-Córdoba

A classification methodology based on an experimental study is proposed for fast pre-diagnosis of attention deficit. Our sample consisted of school-aged children between 8 and 12 years old from Valencia, Spain. The study was based on response times (RTs) to visual stimuli in computerized tasks. The process of answering consecutive questions usually produces an ex-Gaussian distribution of RTs. We seek to propose a simple automatic scheme for classifying children based on the most recent evidence on the relationship between RTs and ADHD. Specifically, the prevalence percentage and the reported evidence for RTs in relation to ADHD or attention deficit symptoms were taken as references in our study. We explain step by step how to go from the computer-based experiments through the data analysis. Our aim is to provide a methodology to quickly identify those children who behave differently from the average child in terms of response times and are thus potential candidates to be diagnosed with ADHD or another cognitive disorder related to attention deficit. This is highly desirable, as there is an urgent need for objective instruments to diagnose attention deficit symptomatology. Most methodologies available nowadays lead to an overdiagnosis of ADHD and are based not on direct measurement but on interviews with people related to the child, such as parents or teachers. Although the ultimate diagnosis must be made by a psychologist, the selection provided by a methodology like ours could allow them to focus on assessing a smaller number of candidates, which would help save time and other resources.
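An ex-Gaussian response-time distribution is the sum of a normal component (mean μ, SD σ) and an exponential component (mean τ), so its overall mean is μ + τ. A minimal sketch of simulating such RTs (the parameter values are illustrative, not estimates from this study):

```python
import random

def ex_gaussian_sample(mu, sigma, tau, n, seed=0):
    """Draw n response times from an ex-Gaussian distribution:
    the sum of a Normal(mu, sigma) and an Exponential(mean tau)
    component, as commonly used to model RTs."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
            for _ in range(n)]

# The theoretical mean is mu + tau; the sample mean should be close.
rts = ex_gaussian_sample(mu=0.45, sigma=0.05, tau=0.20, n=20000)
print(sum(rts) / len(rts))  # close to 0.65
```

The exponential component τ captures the heavy right tail of RT distributions, which is why the fitted τ (rather than the mean alone) is often examined in relation to attention deficit.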


2012 ◽  
Vol 12 (5) ◽  
Author(s):  
Abdul-Fattah Mohamed Ali

The normalized settling time (ts/τ) values of oscillatory 2nd-order systems, when subjected to a step-change forcing function (SCFF), depend on the sensitivity of the measuring instrument employed to indicate the response (±x%). An attempt is made to mathematically relate ts/τ to ±x% utilizing the exact, and a simplified, expression for the lower boundary of the decay envelope (LBDE). The two obtained relationships were tested against the actual ts/τ values for a settling band range of ±1% ≤ ±x% ≤ ±6%, covering a damping coefficient range of 0.1 ≤ ζ ≤ 0.65. Although the relationships are not exact, their general trend is a marginal overestimation of ts/τ. The relationship based on the simplified LBDE was chosen as the simpler and slightly more accurate of the two. This led to a suggested distinction between ts/τ and the normalized response time (tR/τ), with the latter assigned the value 5/ζ. The ratio ts/tR can thus be readily established for any ±x% value.
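For orientation, the simplified exponential decay envelope exp(−ζt/τ) gives the familiar textbook approximation ts/τ ≈ −ln(x/100)/ζ for a ±x% settling band; note this is the standard envelope rule, not the paper's exact LBDE relationship. A sketch comparing it with the suggested tR/τ = 5/ζ:

```python
import math

def settling_time_norm(x_pct, zeta):
    """Approximate normalized settling time ts/tau for a +/- x% band,
    from the simplified decay envelope exp(-zeta * t / tau)."""
    return -math.log(x_pct / 100.0) / zeta

zeta = 0.3
ts = settling_time_norm(2.0, zeta)   # approx 3.91 / 0.3
tr = 5.0 / zeta                      # normalized response time, 5 / zeta
print(round(ts, 2), round(tr, 2), round(ts / tr, 3))
```

A tighter band (smaller x%) gives a longer settling time, while tR/τ stays fixed at 5/ζ, so the ratio ts/tR follows directly once ±x% is chosen.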

