Reliability, Validity, and Feasibility of a Computer-Based Geriatric Assessment for Older Adults With Cancer

2016, Vol 12 (12), pp. e1025-e1034
Author(s): Arti Hurria, Chie Akiba, Jerome Kim, Dale Mitani, Matthew Loscalzo, ...

Purpose: The goal of this study was to evaluate the feasibility, reliability, and validity of a computer-based geriatric assessment via two methods of electronic data capture (SupportScreen and REDCap) compared with paper-and-pencil data capture among older adults with cancer. Methods: Eligible patients were ≥ 65 years old, had a cancer diagnosis, and were fluent in English. Patients were randomly assigned to one of four arms, in which they completed the geriatric assessment twice: (1) REDCap and paper and pencil in sessions 1 and 2; (2) REDCap in both sessions; (3) SupportScreen and paper and pencil in sessions 1 and 2; and (4) SupportScreen in both sessions. The feasibility, reliability, and validity of the computer-based geriatric assessment compared with paper and pencil were evaluated. Results: The median age of participants (N = 100) was 71 years (range, 65 to 91 years) and the diagnosis was solid tumor (82%) or hematologic malignancy (18%). For session 1, REDCap took significantly longer to complete than paper and pencil (median, 21 minutes [range, 11 to 44 minutes] v median, 15 minutes [range, 9 to 29 minutes], P < .01) or SupportScreen (median, 16 minutes [range, 6 to 38 minutes], P < .01). There were no significant differences in completion times between SupportScreen and paper and pencil (P = .50). The computer-based geriatric assessment was feasible. Few participants (8%) needed help completing the geriatric assessment (REDCap, n = 7 and SupportScreen, n = 1), 89% reported that the length was “just right,” and 67% preferred the computer-based geriatric assessment to paper and pencil. Test–retest reliability was high (Spearman correlation coefficient ≥ 0.79) for all scales except social activity. Validity among similar scales was demonstrated. Conclusion: Delivering a computer-based geriatric assessment is feasible, reliable, and valid. SupportScreen methodology is preferred to REDCap.
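The test–retest reliability reported above is a Spearman rank correlation between the two sessions' scale scores. As a minimal pure-Python sketch of that statistic (the scores below are invented for illustration, not study data):

```python
def rank(values):
    # Assign 1-based ranks, averaging ranks across ties
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the two rank vectors
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scale scores for ten participants at two sessions
s1 = [12, 8, 15, 9, 20, 11, 7, 14, 18, 10]
s2 = [13, 8, 14, 10, 19, 12, 6, 15, 17, 9]
print(f"test-retest rho = {spearman(s1, s2):.2f}")
```

A rho at or above the study's 0.79 threshold indicates that participants keep roughly the same rank order across administrations.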

2014, Vol 20 (6), pp. 588-598
Author(s): Robert K. Heaton, Natacha Akshoomoff, David Tulsky, Dan Mungas, Sandra Weintraub, ...

This study describes psychometric properties of the NIH Toolbox Cognition Battery (NIHTB-CB) Composite Scores in an adult sample. The NIHTB-CB was designed for use in epidemiologic studies and clinical trials for ages 3 to 85. A total of 268 self-described healthy adults were recruited at four university-based sites, using stratified sampling guidelines to target demographic variability for age (20–85 years), gender, education, and ethnicity. The NIHTB-CB contains seven computer-based instruments assessing five cognitive sub-domains: Language, Executive Function, Episodic Memory, Processing Speed, and Working Memory. Participants completed the NIHTB-CB, corresponding gold standard validation measures selected to tap the same cognitive abilities, and sociodemographic questionnaires. Three Composite Scores were derived for both the NIHTB-CB and gold standard batteries: “Crystallized Cognition Composite,” “Fluid Cognition Composite,” and “Total Cognition Composite” scores. NIHTB Composite Scores showed acceptable internal consistency (Cronbach’s alpha = 0.84 Crystallized, 0.83 Fluid, 0.77 Total), excellent test–retest reliability (r: 0.86–0.92), strong convergent (r: 0.78–0.90) and discriminant (r: 0.19–0.39) validities versus gold standard composites, and expected age effects (r = 0.18 crystallized, r = −0.68 fluid, r = −0.26 total). Significant relationships with self-reported prior school difficulties and current health status, employment, and presence of a disability provided evidence of external validity. The NIH Toolbox Cognition Battery Composite Scores have excellent reliability and validity, suggesting they can be used effectively in epidemiologic and clinical studies. (JINS, 2014, 20, 1–11)
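The internal-consistency figures above are Cronbach's alpha, which compares the sum of the individual item variances to the variance of the total scores. A small illustrative implementation (the item scores below are made up, not NIHTB data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha.

    items: one list of scores per item, all over the same respondents.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def variance(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 3-item scale answered by five respondents
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha rises when items covary strongly relative to their individual spread, which is why highly redundant items push it toward 1.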


2015, Vol 23 (2), pp. 78E-87E
Author(s): N. Jennifer Klinedinst, Barbara Resnick

Background and Purpose: The purpose of this study was to test the reliability and validity of the 3-item Useful Depression Screening Tool (UDST) for use with older adults in congregate living settings. Methods: A total of 176 residents of senior housing or assisted living completed the UDST. Rasch analysis and test criterion relationships with pain, physical activity, and depression diagnosis were used to determine validity. Test–retest reliability was conducted with 29 senior housing residents. Results: Rasch analysis demonstrated good fit of all items to the concept of depression. Criterion validity was supported, F(5) = 14.17, p < .001. Test–retest analysis showed no significant differences in UDST scores over time (p = .29). Conclusions: The findings provide support for the validity and reliability of the UDST for use with older adults in congregate living settings.


2008, Vol 16 (3), pp. 292-315
Author(s): Dawn P. Gill, Gareth R. Jones, GuangYong Zou, Mark Speechley

The purpose of this study was to develop a brief physical activity interview for older adults (Phone-FITT) and evaluate its test–retest reliability and validity. Summary scores were derived for household, recreational, and total PA. Reliability was evaluated in a convenience sample from a fall-prevention study (N = 43, 79.4 ± 2.9 years, 51% male), and validity, in a random sample of individuals in older adult exercise programs (N = 48, 77.4 ± 4.7 years, 25% male). Mean time to complete the Phone-FITT was 10 min for participants sampled from exercise programs. Evaluation of test–retest reliability indicated substantial to almost perfect agreement for all scores, with intraclass correlation coefficients (95% confidence intervals) ranging from .74 (.58–.85) to .88 (.80–.94). For validity, Spearman’s rho correlations of Phone-FITT scores with accelerometer counts ranged from .29 (.01–.53) to .57 (.34–.73). Correlations of Phone-FITT recreational scores with age and seconds to complete a self-paced step test ranged from –.29 (–.53 to –.01) to –.45 (–.68 to –.14). This study contributes preliminary evidence of the reliability and validity of the Phone-FITT.
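The abstract does not state which ICC form was used, but a common choice for test–retest designs is the two-way random-effects, absolute-agreement, single-measurement form, ICC(2,1) in Shrout and Fleiss's notation. A compact sketch of that computation (illustrative only, not the Phone-FITT analysis code):

```python
def icc_2_1(data):
    """Shrout & Fleiss ICC(2,1): two-way random effects,
    absolute agreement, single measurement.

    data: one row per subject, each row holding k ratings
    (e.g., the two interview administrations).
    """
    n = len(data)        # subjects
    k = len(data[0])     # raters / sessions
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA sums of squares
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((data[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_error / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical PA scores for five subjects at two administrations
scores = [[12, 13], [8, 8], [15, 14], [9, 10], [20, 19]]
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```

Because ICC(2,1) uses absolute agreement, a systematic shift between sessions lowers the coefficient even when the rank order is preserved.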


2015, Vol 12 (5), pp. 727-732
Author(s): Keith P. Gennuso, Charles E. Matthews, Lisa H. Colbert

Background: The purpose of this study was to examine the reliability and validity of 2 currently available physical activity surveys for assessing time spent in sedentary behavior (SB) in older adults. Methods: Fifty-eight adults (≥65 years) completed the Yale Physical Activity Survey for Older Adults (YPAS) and Community Health Activities Model Program for Seniors (CHAMPS) before and after a 10-day period during which they wore an ActiGraph accelerometer (ACC). Intraclass correlation coefficients (ICC) examined test-retest reliability. Overall percent agreement and a kappa statistic examined YPAS validity. Lin’s concordance correlation, Pearson correlation, and Bland-Altman analysis examined CHAMPS validity. Results: Both surveys had moderate test-retest reliability (ICC: YPAS = 0.59 (P < .001), CHAMPS = 0.64 (P < .001)) and significantly underestimated SB time. Agreement between YPAS and ACC was low (κ = −0.0003); however, there was a linear increase (P < .01) in ACC-derived SB time across YPAS response categories. There was poor agreement between ACC-derived SB and CHAMPS (Lin’s r = .005; 95% CI, −0.010 to 0.020), and no linear trend across CHAMPS quartiles (P = .53). Conclusions: Neither survey should be used as the sole measure of SB in a study, though the YPAS can rank individuals, which gives it some merit for use in correlational SB research.
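Bland-Altman analysis, used here to compare CHAMPS against accelerometer-derived SB time, summarizes method agreement as a mean bias and 95% limits of agreement. A minimal sketch with invented minutes-per-day values (not study data):

```python
def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) for two paired methods.

    Bias is the mean difference a - b; the 95% limits of agreement
    are bias +/- 1.96 standard deviations of the differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical sedentary minutes/day: survey vs. accelerometer
survey = [420, 380, 500, 450, 390, 470]
accel = [510, 460, 540, 520, 480, 530]
bias, lower, upper = bland_altman(survey, accel)
print(f"bias = {bias:.0f} min, LoA = ({lower:.0f}, {upper:.0f})")
```

A consistently negative bias, as in this made-up example, is how a survey's systematic underestimation of SB time shows up in the plot.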


2007, Vol 15 (2), pp. 184-194
Author(s): Marissa E. Mendelsohn, Denise M. Connelly, Tom J. Overend, Robert J. Petrella

Although popular in clinical settings, little is known about the utility of all-extremity semirecumbent exercise machines for research. Twenty-one community-dwelling older adults performed two exercise trials (three 4-min stages at increasing workloads) to evaluate the reliability and validity of exercise responses to submaximal all-extremity semirecumbent exercise (BioStep). Exercise responses were measured directly (Cosmed K4b2) and indirectly through software on the BioStep. Test–retest reliability (ICC[2,1]) was moderate to high across all three stages for directly measured METs (.92, .87, and .88) and HR (.91, .83, and .86). Concurrent criterion validity between the K4b2 and BioStep MET values was moderate to very good across the three stages on both Day 1 (r = .86, .71, and .83) and Day 2 (r = .73, .87, and .72). All-extremity semirecumbent submaximal exercise elicited reliable and valid responses in our sample of older adults and thus can be considered a viable exercise mode.


2019, Vol 32
Author(s): Larissa Alamino Pereira de Viveiro, André Finotti Lagos Ferreira, José Eduardo Pompeu

Introduction: Falls are an important adverse event among older adults. The St. Thomas’s Falls Risk Assessment Tool in Older Adults (STRATIFY) is a tool to assess the risk of falls; however, it had not been translated and adapted into Portuguese. Objective: To translate and perform a cross-cultural adaptation of STRATIFY into Brazilian Portuguese, as well as to test the reliability and validity of the instrument. Method: The cross-cultural adaptation process was carried out in six stages: A) T1 and T2 translations; B) synthesis of translations (T12); C) T12 back translations (RT1 and RT2); D) expert committee review; E) pretesting of the version approved by the committee; F) adapted version of STRATIFY for Brazilian Portuguese. Inter-rater and test-retest reliability were assessed using the intraclass correlation coefficient (ICC) and 95% confidence interval (CI). Validity was assessed by Spearman’s correlation of the STRATIFY with the Morse Fall Scale (MFS). Data analysis was performed with Microsoft Office Excel 2016 (translation and adaptation) and IBM SPSS Statistics 20.0 (reliability and validity), using a significance level of p < 0.05. Results: Data were presented on the perceptions of 33 health professionals regarding the adapted version of STRATIFY. The following ICC and CI were found for inter-rater and test-retest reliability, respectively: ICC = 0.729, CI = 0.525-0.845 and ICC = 0.876, CI = 0.781-0.929. STRATIFY and MFS showed a moderate but significant correlation (ρ = 0.50, p < 0.001). Conclusion: The translated and adapted version of STRATIFY presented moderate inter-rater reliability and good test-retest reliability, in addition to a moderate correlation with the MFS.


2012, Vol 2012, pp. 1-7
Author(s): Alison Douglas, Lori Letts, Kevin Eva, Julie Richardson

Objectives. Defining and validating a measure of safety contributes to further validation of clinical measures. The objective was to define and examine the psychometric properties of the outcome “incidents of harm.” Methods. The Incident of Harm Caregiver Questionnaire was administered by telephone to caregivers of older adults discharged from hospital. Caregivers completed daily logs for one month and medical charts were examined. Results. Test-retest reliability (n = 38) was high for the occurrence of an incident of harm (yes/no; kappa = 1.0) and the type of incident (agreement = 100%). Validation against daily logs found no disagreement regarding occurrence or types of incidents. Validation with medical charts found no disagreement regarding incident occurrence and disagreement in half of cases regarding incident type. Discussion. The data support the Incident of Harm Caregiver Questionnaire as a reliable and valid estimate of incidents for this sample and are important to researchers as a method to measure safety when validating clinical measures.
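The kappa statistic reported for incident occurrence is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. An illustrative pure-Python version (the yes/no ratings below are hypothetical):

```python
def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two paired categorical ratings.

    Note: undefined (division by zero) when chance agreement is 1.
    """
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    # Observed proportion of exact agreement
    p_observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement from each rating's marginal category frequencies
    p_expected = sum(
        (ratings1.count(c) / n) * (ratings2.count(c) / n)
        for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical test-retest "incident occurred?" responses
time1 = ["yes", "no", "no", "yes", "no", "no", "yes", "no"]
time2 = ["yes", "no", "no", "yes", "no", "no", "yes", "no"]
print(f"kappa = {cohens_kappa(time1, time2):.2f}")
```

Perfect agreement with varied responses, as in the study's kappa = 1.0, means every yes/no answer was identical across the two administrations.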


2019, Vol 25 (14), pp. 1848-1869
Author(s): Curtis M Wojcik, Meghan Beier, Kathleen Costello, John DeLuca, Anthony Feinstein, ...

Background: The proliferation of computerized neuropsychological assessment devices (CNADs) for screening and monitoring cognitive impairment is increasing exponentially. Previous reviews of computerized tests for multiple sclerosis (MS) were primarily qualitative and did not rigorously compare CNADs on psychometric properties. Objective: We aimed to systematically review the literature on the use of CNADs in MS and identify test batteries and single tests with good evidence for reliability and validity. Method: A search of four major online databases was conducted for publications related to computerized testing and MS. Test–retest reliability and validity coefficients and effect sizes were recorded for each CNAD test, along with administration characteristics. Results: We identified 11 batteries and 33 individual tests from 120 peer-reviewed articles meeting the inclusion criteria. CNADs with the strongest psychometric support include the CogState Brief Battery, Cognitive Drug Research Battery, NeuroTrax, CNS-Vital Signs, and computer-based administrations of the Symbol Digit Modalities Test. Conclusion: We identified several CNADs that are valid to screen for MS-related cognitive impairment, or to supplement full, conventional neuropsychological assessment. The necessity of testing with a technician, and in a controlled clinic/laboratory environment, remains uncertain.

