Impact of survey length and compensation on validity, reliability, and sample characteristics for Ultrashort-, Short-, and Long-Research Participant Perception Surveys

2018 ◽  
Vol 2 (1) ◽  
pp. 31-37 ◽  
Author(s):  
Rhonda G. Kost ◽  
Joel Correa da Rosa

Introduction: The validated long Research Participant Perception Survey (RPPS-Long) elicits valuable data but achieves only modest response rates.
Methods: To address this limitation, we developed shorter RPPS-Ultrashort and RPPS-Short versions, fielded them alongside the RPPS-Long to a random sample of a national research volunteer registry, and assessed response and completion rates, test-retest reliability, and demographics.
Results: In total, 2228 eligible registry members received survey links. Response rates were 64% (RPPS-Ultrashort), 63% (RPPS-Short), and 51% (RPPS-Long) (p<0.001). Completion rates were 63%, 54%, and 37%, respectively (p<0.001). All surveys were reliable, with Cronbach α=0.81, 0.84, and 0.87, respectively. Retest reliability was highest for the RPPS-Short (κ=0.85). Provision of compensation increased the RPPS-Short completion rate from 54% to 71% (p<0.001). Compensated respondents were younger (p<0.001), with greater minority representation (p=0.03).
Conclusions: Shorter surveys were reliable and produced higher response and completion rates than long surveys. Compensation further increased completion rates and shifted sample age and race profiles.
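For context on the internal-consistency figures reported above, Cronbach's α for a k-item survey compares the sum of the item variances with the variance of respondents' total scores. A minimal pure-Python sketch, using made-up Likert-style responses rather than data from the study:

```python
# Cronbach's alpha: rows are respondents, columns are survey items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                                  # number of items
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(row) for row in rows])  # variance of totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 5 respondents x 4 Likert items (scored 1-5).
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 2))  # 0.92 for this toy data
```

Higher α means the items vary together; values above 0.8, like those reported for all three RPPS versions, are conventionally taken as good internal consistency.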

2018 ◽  
Vol 2 (3) ◽  
pp. 163-168 ◽  
Author(s):  
Issis J. Kelly-Pumarol ◽  
Perrin Q. Henderson ◽  
Julia T. Rushing ◽  
Joseph E. Andrews ◽  
Rhonda G. Kost ◽  
...  

Introduction: The patient portal may be an effective method for administering surveys regarding participants' research experiences but has not been systematically studied.
Methods: We evaluated 4 methods of delivering a research participant perception survey: mail, phone, email, and patient portal. Participants in research studies were identified (n=4013), and 800 were randomly selected to receive a survey, 200 for each method. Outcomes included response rate, survey completeness, and cost.
Results: Among those aged <65 years, response rates did not differ between mail, phone, and patient portal (22%, 29%, 30%; p>0.07); of these three methods, the patient portal was the lowest-cost option. Response rates were significantly lower using email (10%; p<0.01), the lowest-cost method overall. In contrast, among those aged 65+ years, mail was superior to the electronic methods (p<0.02).
Conclusions: The patient portal was among the most effective ways to reach research participants and was less expensive than surveys administered by mail or telephone.


2021 ◽  
Vol 12 ◽  
Author(s):  
Justin Mason ◽  
Sherrilene Classen ◽  
James Wersal ◽  
Virginia Sisiopiku

Fully automated vehicles (AVs) hold promise toward providing numerous societal benefits, including reducing road fatalities. However, it remains uncertain how individuals' perceptions will influence their acceptance and adoption of AVs. The 28-item Automated Vehicle User Perception Survey (AVUPS) is a visual analog scale, previously constructed with established face and content validity, for assessing individuals' perceptions of AVs. In this study, we examined construct validity via exploratory factor analysis and subsequent Mokken scale analyses. Next, internal consistency was assessed via Cronbach's alpha (α), and 2-week test–retest reliability was assessed via Spearman's rho (ρ) and the intraclass correlation coefficient (ICC). The Mokken scale analyses resulted in a refined 20-item AVUPS and three Mokken subscales assessing specific domains of adults' perceptions of AVs: (a) intention to use; (b) perceived barriers; and (c) well-being. The Mokken scale analysis showed that all item coefficients of homogeneity (H) exceeded 0.3, indicating that the items reflect a single latent variable. The AVUPS formed a strong Mokken scale (Hscale = 0.51) with excellent internal consistency (α = 0.95) and test–retest reliability (ρ = 0.76, ICC = 0.95). Similarly, the three Mokken subscales ranged from moderate to strong (range Hscale = 0.47–0.66) and had excellent internal consistency (range α = 0.84–0.94) and test–retest reliability (range ICC = 0.84–0.93). The AVUPS and the three Mokken subscales of AV acceptance were validated in a moderately sized sample (N = 312) of adults living in the United States. Two-week test–retest reliability was established using a subset of Amazon Mechanical Turk participants (N = 84). The AVUPS, or any combination of its three subscales, can be used to validly and reliably assess adults' perceptions before and after exposure to AVs, and to quantify adults' acceptance of fully automated vehicles.
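The 2-week test–retest reliability above is summarized with Spearman's ρ, which is a Pearson correlation computed on ranks. A self-contained sketch with fabricated test and retest scores (illustrative values, not AVUPS data):

```python
# Spearman's rho for test-retest reliability: rank both score lists
# (tied values get their average rank), then take the Pearson correlation.

def ranks(xs):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                     # extend over a run of tied values
        avg = (i + j) / 2 + 1          # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

first  = [72, 85, 60, 91, 78, 66]   # hypothetical scores, first session
second = [70, 88, 64, 90, 68, 74]   # same respondents two weeks later
print(round(spearman(first, second), 2))  # 0.77 for this toy data
```

Because ρ depends only on rank order, it is robust to respondents shifting all their marks up or down between sessions, which is one reason it is a common test-retest statistic for visual analog scales.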


Author(s):  
Matthew L. Hall ◽  
Stephanie De Anda

Purpose: The purposes of this study were (a) to introduce "language access profiles" as a viable alternative construct to "communication mode" for describing experience with language input during early childhood for deaf and hard-of-hearing (DHH) children; (b) to describe the development of a new tool for measuring DHH children's language access profiles during infancy and toddlerhood; and (c) to evaluate the novelty, reliability, and validity of this tool.
Method: We adapted an existing retrospective parent report measure of early language experience (the Language Exposure Assessment Tool) to make it suitable for use with DHH populations. We administered the adapted instrument (DHH Language Exposure Assessment Tool [D-LEAT]) to the caregivers of 105 DHH children aged 12 years and younger. To measure convergent validity, we also administered another novel instrument: the Language Access Profile Tool. To measure test–retest reliability, half of the participants were interviewed again after 1 month. We identified groups of children with similar language access profiles by using hierarchical cluster analysis.
Results: The D-LEAT revealed DHH children's diverse experiences with access to language during infancy and toddlerhood. Cluster analysis groupings were markedly different from those derived from more traditional grouping rules (e.g., communication modes). Test–retest reliability was good, especially for the same-interviewer condition. Content, convergent, and face validity were strong.
Conclusions: To optimize DHH children's developmental potential, stakeholders who work at the individual and population levels would benefit from replacing communication mode with language access profiles. The D-LEAT is the first tool that aims to measure this novel construct. Despite limitations that future work aims to address, the present results demonstrate that the D-LEAT represents progress over the status quo.


1982 ◽  
Vol 25 (4) ◽  
pp. 521-527 ◽  
Author(s):  
David C. Shepherd

In 1977, Shepherd and colleagues reported significant correlations (–.90, –.91) between speechreading scores and the latency of a selected negative peak (the VN130 measure) on the averaged visual electroencephalic waveform. The primary purpose of the current study was to examine the stability, or repeatability, of the relation between these cognitive and neurophysiologic measures over a period of several months and thus support its test-retest reliability. Repeated speechreading word and sentence scores were gathered during three test-retest sessions from each of 20 normal-hearing adults. An average of 56 days elapsed from the end of one speechreading session to the beginning of the next. During each of four other test-retest sessions, averaged visual electroencephalic responses (AVERs) were evoked from each subject. An average of 49 days intervened between AVER sessions. Product-moment correlations computed among repeated word scores and VN130 measures ranged from –.61 to –.89. Based on these findings, it was concluded that the VN130 measure of visual neural firing time is a reliable correlate of speechreading in normal-hearing adults.
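The product-moment correlations reported here measure how tightly two variables co-vary; a negative value means longer VN130 latencies go with lower speechreading scores. A brief sketch with hypothetical latency and score values (not the study's data), chosen only to illustrate a correlation in the reported range:

```python
# Pearson product-moment correlation between two paired variables.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

latency_ms = [118, 125, 131, 138, 144]  # hypothetical VN130 latencies
word_score = [85, 70, 78, 66, 60]       # hypothetical speechreading scores
print(round(pearson(latency_ms, word_score), 2))  # about -0.88
```

Values near –1, like the –.61 to –.89 range above, indicate a strong inverse linear relation between neural latency and speechreading performance.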


2000 ◽  
Vol 16 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Hans Ottosson ◽  
Martin Grann ◽  
Gunnar Kullgren

Summary: Short-term stability, or test-retest reliability, of self-reported personality traits is likely to be biased if the respondent is affected by a depressive or anxiety state. However, in some studies, DSM-oriented self-report instruments have proved reasonably stable in the short term, regardless of co-occurring depressive or anxiety disorders. In the present study, we examined the short-term test-retest reliability of a new self-report questionnaire for personality disorder diagnosis (DIP-Q) in a clinical sample of 30 individuals with either a depressive disorder, an anxiety disorder, or no Axis I disorder. Test-retest scorings from subjects with depressive disorders were mostly unstable, with a significant change in fulfilled criteria between entry and retest for three of ten personality disorders: borderline, avoidant, and obsessive-compulsive personality disorder. Scorings from subjects with anxiety disorders were unstable only for cluster C and dependent personality disorder items. In the absence of comorbid depressive or anxiety disorders, mean dimensional scores on the DIP-Q showed no significant differences between entry and retest. Overall, the effect of state on trait scorings was moderate, and we conclude that the test-retest reliability of the DIP-Q is acceptable.


2013 ◽  
Author(s):  
Kristen M. Dahlin-James ◽  
Emily J. Hennrich ◽  
E. Grace Verbeck-Priest ◽  
Jan E. Estrellado ◽  
Jessica M. Stevens ◽  
...  

2018 ◽  
Vol 30 (12) ◽  
pp. 1652-1662 ◽  
Author(s):  
Sophie J. M. Rijnen ◽  
Sophie D. van der Linden ◽  
Wilco H. M. Emons ◽  
Margriet M. Sitskoorn ◽  
Karin Gehring
