Evaluative Frailty Index for Physical Activity (EFIP): A Reliable and Valid Instrument to Measure Changes in Level of Frailty

2013 ◽  
Vol 93 (4) ◽  
pp. 551-561 ◽  
Author(s):  
Nienke M. de Vries ◽  
J. Bart Staal ◽  
Marcel G.M. Olde Rikkert ◽  
Maria W.G. Nijhuis-van der Sanden

Background: Physical activity is assumed to be important in the prevention and treatment of frailty. It is unclear, however, to what extent frailty can be influenced, because instruments designed to assess frailty have not been validated as evaluative outcome instruments in clinical practice. Objectives: The aims of this study were: (1) to develop a frailty index (ie, the Evaluative Frailty Index for Physical Activity [EFIP]) based on the method of deficit accumulation and (2) to test the clinimetric properties of the EFIP. Design: The content of the EFIP was determined using a written Delphi procedure. Intrarater reliability, interrater reliability, and construct validity were determined in an observational study (n=24). Method: Intrarater reliability and interrater reliability were calculated using the Cohen kappa and intraclass correlation coefficients (ICCs). Construct validity was determined by correlating scores on the EFIP with those on the Timed “Up & Go” Test (TUG), the Performance-Oriented Mobility Assessment (POMA), and the Cumulative Illness Rating Scale for Geriatrics (CIRS-G). Results: Fifty items were included in the EFIP. Interrater reliability (Cohen kappa=0.72, ICC=.96) and intrarater reliability (Cohen kappa=0.77 and 0.80, ICC=.93 and .98) were good. As expected, fair to moderate correlations with the TUG, POMA, and CIRS-G were found (.61, −.70, and .66, respectively). Limitations: The reliability and validity of the EFIP have been tested in a small sample. These and other clinimetric properties, such as responsiveness, will be assessed or reassessed in a larger study population. Conclusion: The EFIP is a reliable and valid instrument to evaluate the effect of physical activity on frailty in research and in clinical practice.
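The deficit-accumulation method behind indices like the EFIP can be sketched in a few lines: a frailty index is simply the proportion of measured deficits that are present. The item names below are hypothetical placeholders (the real EFIP has 50 specific items), so this is only an illustration of the scoring principle.

```python
# Minimal sketch of a deficit-accumulation frailty index.
# Each item is scored 0 (deficit absent), 1 (present), or a fraction
# in [0, 1] for partially present deficits; the index is their mean.

def frailty_index(deficits):
    """Return the proportion of measured deficits that are present."""
    if not deficits:
        raise ValueError("at least one item is required")
    return sum(deficits.values()) / len(deficits)

# Example: 4 of 10 hypothetical items scored as deficits -> index 0.4
items = {f"item_{i}": (1 if i < 4 else 0) for i in range(10)}
score = frailty_index(items)
```

Because the index is a proportion, scores from instruments with different item counts remain comparable on the same 0–1 scale.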

Author(s):  
James C. Borders ◽  
Jordanna S. Sevitz ◽  
Jaime Bauer Malandraki ◽  
Georgia A. Malandraki ◽  
Michelle S. Troche

Purpose The COVID-19 pandemic has drastically increased the use of telehealth. Prior studies of telehealth clinical swallowing evaluations provide positive evidence for telemanagement of swallowing. However, the reliability of these measures in clinical practice, as opposed to well-controlled research conditions, remains unknown. This study aimed to investigate the reliability of outcome measures derived from clinical swallowing tele-evaluations in real-world clinical practice (e.g., variability in devices and Internet connectivity, lack of in-person clinician assistance, or remote patient/caregiver training). Method Seven raters asynchronously judged clinical swallowing tele-evaluations of 12 patients with movement disorders. Outcomes included the Timed Water Swallow Test (TWST), the Test of Masticating and Swallowing Solids (TOMASS), and common observations of oral intake. Statistical analyses were performed to examine inter- and intrarater reliability, along with qualitative analyses exploring patient- and clinician-specific factors impacting reliability. Results Forty-four trials were included in the reliability analyses. All rater dyads demonstrated “good” to “excellent” interrater reliability for measures of the TWST (intraclass correlation coefficients [ICCs] ≥ .93) and observations of oral intake (≥ 77% agreement). The majority of TOMASS outcomes demonstrated “good” to “excellent” interrater reliability (ICCs ≥ .84), with the exception of the number of bites (ICCs = .43–.99) and swallows (ICCs = .21–.85). Immediate and delayed intrarater reliability were “excellent” for most raters across all tasks, ranging between ICCs of .63 and 1.00. Exploratory factors potentially impacting reliability included infrequent instances of suboptimal video quality, reduced camera stability, camera distance, and obstruction of the patient's mouth during tasks.
Conclusions Subjective observations of oral intake and objective measures taken from the TWST and the TOMASS can be reliably measured via telehealth in clinical practice. Our results provide support for the feasibility and reliability of telehealth for outpatient clinical swallowing evaluations during COVID-19 and beyond. Supplemental Material https://doi.org/10.23641/asha.13661378


2020 ◽  
Vol 48 (1) ◽  
pp. 94-100 ◽  
Author(s):  
Floranne C. Ernste ◽  
Christopher Chong ◽  
Cynthia S. Crowson ◽  
Tanaz A. Kermani ◽  
Orla Ni Mhuircheartaigh ◽  
...  

Objective. Patients with dermatomyositis (DM) and polymyositis (PM) have reduced muscle endurance. The aim of this study was to streamline the Functional Index-2 (FI-2) by developing the Functional Index-3 (FI-3) and to evaluate its measurement properties, content and construct validity, and intra- and interrater reliability. Methods. A dataset of the previously performed and validated FI-2 (n = 63) was analyzed for internal redundancy, floor, and ceiling effects. The content of the FI-2 was revised into the FI-3. Construct validity and intrarater reliability of the FI-3 were tested on 43 DM and PM patients at 2 rheumatology centers. Interrater reliability was tested in 25 patients. Construct validity was assessed against the Myositis Activities Profile (MAP), Health Assessment Questionnaire (HAQ), and Borg CR-10 using the Spearman correlation coefficient. Results. Spearman correlation coefficients of 63 patients performing the FI-3 revealed moderate to high correlations between shoulder flexion and hip flexion tasks and similar correlations with MAP and HAQ scores; there were lower correlations for the neck flexion task. All FI-3 tasks had very low to moderate correlations with the Borg scale. Intraclass correlation coefficients (ICC) of FI-3 tasks for intrarater reliability (n = 25) were moderate to good (0.88–0.98). ICCs of FI-3 tasks for interrater reliability (n = 17) were fair to good (range 0.83–0.96). Conclusion. The FI-3 is an efficient and valid method for clinically assessing muscle endurance in DM and PM patients. FI-3 construct validity is supported by the significant correlations between functional tasks and the MAP, HAQ, and Borg CR-10 scores.
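The Spearman correlation used above for construct validity is just a Pearson correlation computed on rank-transformed data, with ties assigned average ranks. A minimal sketch on toy values (not study data):

```python
# Sketch of the Spearman rank correlation coefficient.

def average_ranks(xs):
    """Rank values 1..n, assigning tied values the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rho = spearman([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # -> 0.8
```

In practice a library routine (e.g. scipy.stats.spearmanr) would be used; the point here is only the rank-then-correlate logic.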


2010 ◽  
Vol 11 (2) ◽  
pp. 113-124 ◽  
Author(s):  
Elizabeth Davis ◽  
Jane Galvin ◽  
Cheryl Soo

Introduction: The ability to use both hands to interact with objects is required in daily activities and is therefore important to measure in clinical practice. The Assisting Hand Assessment (AHA) is unique in evaluating the function of a child or youth's assisting hand, through observing the spontaneous manipulation of objects during bimanual activity. The AHA was developed for children with unilateral motor impairment, and shows strong psychometric properties when used with children who have cerebral palsy (CP) or obstetric brachial plexus palsy (OBPP). The AHA is currently used in clinical practice with children who have an acquired brain injury (ABI); however, there is limited research on the measurement properties of its use with this population. Objectives: The study aimed to determine the interrater and intrarater reliability of the AHA for children and youth with unilateral motor impairment following ABI. Methods: For interrater reliability, two occupational therapists (OT1 and OT2) independently rated the same 26 children and youth. For intrarater reliability, OT2 conducted a second assessment on the 26 participants 1 week later. Associations between item scores on the AHA were analysed using weighted kappa (Kw), while intraclass correlation coefficients (ICCs) were used for domain and total scores. Results: The AHA items demonstrated good to excellent intrarater reliability (Kw = 0.67–1.00). Interrater reliability was good to excellent (Kw = 0.60–0.84) for 20 of the 22 items of the AHA. Interrater and intrarater reliability coefficients for all domain and total scores were in the excellent range (ICC = 0.85–0.99). Conclusion: The current study indicates that the AHA shows good interrater and intrarater reliability when used with the paediatric ABI population. Findings provide preliminary support for the continued use of the AHA for children and youth with acquired hemiplegia.
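Weighted kappa (Kw), used above for item-level agreement, penalizes disagreements by their distance on the ordinal scale rather than treating all disagreements equally. The sketch below assumes quadratic weights (the study does not state its weighting scheme) and uses toy ratings, not study data:

```python
# Sketch of quadratic-weighted kappa for two raters on an ordinal scale.

def weighted_kappa(r1, r2, categories):
    """Weighted kappa for two raters scoring the same items on the
    given ordered categories, using quadratic disagreement weights."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # observed joint distribution of the two raters' scores
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal distributions give the chance-expected joint distribution
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: ((i - j) / (k - 1)) ** 2  # quadratic disagreement weight
    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

kw = weighted_kappa([0, 1, 2, 2], [0, 1, 2, 1], categories=[0, 1, 2])  # -> 0.8
```

With quadratic weights, a one-step disagreement on a 3-point scale costs a quarter as much as a two-step disagreement, which is why Kw suits ordinal item scores better than unweighted kappa.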


2020 ◽  
Vol 80 (4) ◽  
pp. 808-820
Author(s):  
Cindy M. Walker ◽  
Sakine Göçer Şahin

The purpose of this study was to investigate a new way of evaluating interrater reliability that can allow one to determine if two raters differ with respect to their rating on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared with traditional interrater reliability measures. Three different procedures that can be used as measures of interrater reliability were compared: (1) the intraclass correlation coefficient (ICC), (2) Cohen’s kappa statistic, and (3) the DIF statistic obtained from Poly-SIBTEST. The results of this investigation indicated that DIF procedures appear to be a promising alternative for assessing the interrater reliability of constructed response items, or other polytomous types of items, such as rating scales. Furthermore, using DIF to assess interrater reliability does not require a fully crossed design and allows one to determine if a rater is either more severe or more lenient in their scoring of each individual polytomous item on a test or rating scale.
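Of the traditional measures compared above, (unweighted) Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance, given their marginal category proportions. A minimal sketch on toy ratings (not study data):

```python
# Sketch of unweighted Cohen's kappa for two raters on the same items.

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    # raw proportion of items on which the raters agree
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # chance agreement from each rater's marginal category proportions
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

kappa = cohens_kappa(["a", "a", "b", "b"], ["a", "a", "b", "a"])  # -> 0.5
```

Here raw agreement is 0.75, but half of that is expected by chance, so kappa reports only the agreement beyond chance.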


2017 ◽  
Vol 5 (1) ◽  
pp. 59-68 ◽  
Author(s):  
Pauli Olavi Rintala ◽  
Arja Kaarina Sääkslahti ◽  
Susanna Iivonen

This study examined the intrarater and interrater reliability of the Test of Gross Motor Development—3rd Edition (TGMD-3). Participants were 60 Finnish children aged between 3 and 9 years, divided into three separate samples of 20. Two samples of 20 were used to examine the intrarater reliability of two different assessors, and the third sample of 20 was used to establish interrater reliability. Children’s TGMD-3 performances were video-recorded and later assessed using an intraclass correlation coefficient, a kappa statistic, and a percent agreement calculation. The intrarater reliability of the locomotor subtest, ball skills subtest, and gross motor total score ranged from 0.69 to 0.77, and percent agreement ranged from 87% to 91%. The interrater reliability of the locomotor subtest, ball skills subtest, and gross motor total score ranged from 0.56 to 0.64, with a percent agreement of 83% for each of the locomotor, ball skills, and total scores. The hop, horizontal jump, and two-hand strike assessments showed the largest differences between the assessors. These results show acceptable reliability for the TGMD-3 as a tool to analyze children’s gross motor skills.
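The percent-agreement calculation reported above is the simplest of the three statistics: the share of items on which two raters gave identical scores. A sketch on toy item scores (not study data):

```python
# Sketch of exact percent agreement between two raters.

def percent_agreement(r1, r2):
    """Percentage of items on which both raters gave the same score."""
    return 100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

pa = percent_agreement([1, 0, 1, 1, 0], [1, 0, 1, 0, 0])  # -> 80.0
```

Unlike kappa, percent agreement makes no correction for chance, which is why studies such as this one report it alongside chance-corrected statistics rather than instead of them.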


Author(s):  
Emily Q Zhang ◽  
Vivian SY Leung ◽  
Daniel SJ Pang

Rodent grimace scales facilitate assessment of ongoing pain. Reported rater training using these scales varies considerably and may contribute to the observed variability in interrater reliability. This study evaluated the effect of training on interrater reliability with the Rat Grimace Scale (RGS). Two training sets (42 and 150 images) were prepared from acute pain models. Four trainee raters progressed through 2 rounds of training, scoring 42 images (set 1) followed by 150 images (set 2a). After each round, trainees reviewed the RGS and any problematic images with an experienced rater. The 150 images were then rescored (set 2b). Four years later, trainees rescored the 150 images (set 2c). A second group of raters (no-training group) scored the same image sets without review with the experienced rater. Inter- and intrarater reliability were evaluated using the intraclass correlation coefficient (ICC), and ICC values were compared using the Feldt test. In the trainee group, interrater reliability increased from moderate to very good between sets 1 and 2b and increased between sets 2a and 2b. The action units with the highest and lowest ICCs at set 2b were orbital tightening and whiskers, respectively. In comparison with an experienced rater, the ICC for all trainees improved, ranging from 0.88 to 0.91 at set 2b. Four years later, very good interrater reliability was retained, and intrarater reliability was good or very good. The interrater reliability of the no-training group was moderate and did not improve from set 1 to set 2b. Training improved interrater reliability, with an associated narrowing of the 95% CI. In addition, training improved agreement with an experienced rater, and this performance was retained.


Dermatology ◽  
2019 ◽  
Vol 236 (1) ◽  
pp. 8-14 ◽  
Author(s):  
Katarzyna Włodarek ◽  
Aleksandra Stefaniak ◽  
Łukasz Matusiak ◽  
Jacek C. Szepietowski

A wide variety of assessment tools have been proposed for hidradenitis suppurativa (HS), but none of them meets the criteria for an ideal score. Because there is no gold-standard scoring system, the choice of measurement instrument depends on the purpose of use and even on the physician’s experience with HS. The aim of this study was to assess the intrarater and interrater reliability of 6 scoring systems commonly used for grading the severity of HS: the Hurley Staging System, the Refined Hurley Staging, the International Hidradenitis Suppurativa Severity Score System (IHS4), the Hidradenitis Suppurativa Severity Index (HSSI), the Sartorius Hidradenitis Suppurativa Score, and the Hidradenitis Suppurativa Physician’s Global Assessment Scale (HS-PGA). On the scoring day, 9 HS patients underwent a physical examination and disease severity assessment by a group of 16 dermatology residents using all evaluated instruments. Intrarater reliability was then calculated using the intraclass correlation coefficient (ICC), and interrater variability was evaluated using the coefficient of variation (CV). In all 6 scorings the ICCs were >0.75, indicating high intrarater reliability for all presented scales. The study also demonstrated moderate agreement between raters for most of the evaluated instruments. The most reproducible methods, according to CVs, were the Hurley staging, IHS4, and HSSI. None of the 6 evaluated scoring systems showed a significant advantage over the others when comparing ICCs, and all the instruments appear to be very reliable. The interrater reliability was usually good, but the most repeatable results between researchers were obtained for the simplest scales: Hurley staging, IHS4, and HSSI.
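The coefficient of variation used above expresses interrater spread relative to the mean score, so scales with different ranges can be compared on one footing. A sketch with hypothetical severity scores from three raters for one patient (toy values, not study data):

```python
# Sketch of the coefficient of variation (CV) across raters.

def coefficient_of_variation(scores):
    """CV of one patient's severity scores across raters, as a percentage:
    sample standard deviation divided by the mean."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return 100 * (var ** 0.5) / mean

cv = coefficient_of_variation([8, 10, 12])  # sd 2, mean 10 -> 20.0%
```

A lower CV means the raters' scores cluster more tightly around their mean, which is why the study treats low-CV scales (Hurley, IHS4, HSSI) as the most reproducible.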


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Jiali Lou ◽  
Yongliang Jiang ◽  
Hantong Hu ◽  
Xiaoyu Li ◽  
Yajun Zhang ◽  
...  

The objective of this study was to determine the intrarater and interrater reliabilities of infrared image analysis of forearm acupoints before and after moxibustion. In this work, infrared images of acupoints in the forearm of 20 volunteers (M/F, 10/10) were collected prior to and after moxibustion by infrared thermography (IRT). Two trained raters performed the analysis of infrared images in two different periods at a one-week interval. The intraclass correlation coefficient (ICC) was calculated to determine the intrarater and interrater reliabilities. With regard to the intrarater reliability, ICC values were between 0.758 and 0.994 (substantial to excellent). For the interrater reliability, ICC values ranged from 0.707 to 0.964 (moderate to excellent). Given that the intrarater and interrater reliability levels show excellent concordance, IRT could be a reliable tool to monitor the temperature change of forearm acupoints induced by moxibustion.
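The abstracts on this page lean heavily on the intraclass correlation coefficient. As an illustration of its general form, the sketch below implements a one-way random-effects ICC(1,1) from the ANOVA mean squares; this variant is an assumption for illustration, since most of the studies above do not state which ICC model they used. The data are toy values.

```python
# Sketch of a one-way random-effects ICC(1,1).
# ratings: one row per subject, one column per rater/occasion.

def icc_oneway(ratings):
    """ICC(1,1): share of total variance attributable to subjects."""
    n = len(ratings)      # number of subjects
    k = len(ratings[0])   # ratings per subject
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-subject and within-subject mean squares from one-way ANOVA
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Two raters whose scores differ by a constant offset of 1 per subject:
icc = icc_oneway([[9, 10], [5, 6], [7, 8], [1, 2]])
```

The ICC approaches 1 when subjects differ much more than repeated ratings of the same subject do; two-way models (ICC(2,1), ICC(3,1)) additionally separate out systematic rater effects.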


2019 ◽  
Vol 5 (1) ◽  
pp. e000541 ◽  
Author(s):  
John Ressman ◽  
Wilhelmus Johannes Andreas Grooten ◽  
Eva Rasmussen Barr

Single leg squat (SLS) is a common tool used in clinical examination to set and evaluate rehabilitation goals, but also to assess lower extremity function in active people. Objectives: To conduct a review and meta-analysis of the inter-rater and intrarater reliability of the SLS, including the lateral step-down (LSD) and forward step-down (FSD) tests. Design: Review with meta-analysis. Data sources: CINAHL, Cochrane Library, Embase, Medline (OVID) and Web of Science were searched up until December 2018. Eligibility criteria: Studies were eligible for inclusion if they were methodological studies which assessed the inter-rater and/or intrarater reliability of the SLS, FSD and LSD through observation of movement quality. Results: Thirty-one studies were included. The reliability varied largely between studies (inter-rater: kappa/intraclass correlation coefficients (ICC) = 0.00–0.95; intrarater: kappa/ICC = 0.13–1.00), but most of the studies reached ‘moderate’ measures of agreement. The pooled ICC/kappa results showed ‘moderate’ agreement for inter-rater reliability, 0.58 (95% CI 0.50 to 0.65), and ‘substantial’ agreement for intrarater reliability, 0.68 (95% CI 0.60 to 0.74). Subgroup analyses showed higher pooled inter-rater agreement for ≤3-point rating scales, while no difference was found for different numbers of segmental assessments. Conclusion: Our findings indicate that the SLS test, including the FSD and LSD tests, can be suitable for clinical use regardless of the number of observed segments, particularly with a ≤3-point rating scale. Since most of the included studies were affected by some form of methodological bias, our findings must be interpreted with caution. PROSPERO registration number: CRD42018077822.


2002 ◽  
Vol 96 (5) ◽  
pp. 1129-1139 ◽  
Author(s):  
Jason Slagle ◽  
Matthew B. Weinger ◽  
My-Than T. Dinh ◽  
Vanessa V. Brumer ◽  
Kevin Williams

Background Task analysis may be useful for assessing how anesthesiologists alter their behavior in response to different clinical situations. In this study, the authors examined the intraobserver and interobserver reliability of an established task analysis methodology. Methods During 20 routine anesthetic procedures, a trained observer sat in the operating room and categorized the anesthetist's activities in real time into 38 task categories. Two weeks later, the same observer performed task analysis from videotapes obtained intraoperatively. A different observer performed task analysis from the videotapes on two separate occasions. Data were analyzed for percent of time spent on each task category, average task duration, and number of task occurrences. Rater reliability and agreement were assessed using intraclass correlation coefficients. Results Intrarater reliability was generally good for categorization of percent time on task and task occurrence (mean intraclass correlation coefficients of 0.84-0.97). There was comparably high concordance between real-time and video analyses. Interrater reliability was generally good for percent time and task occurrence measurements. However, the interrater reliability of the task duration metric was unsatisfactory, primarily because of the technique used to capture multitasking. Conclusions A task analysis technique used in anesthesia research for several decades showed good intrarater reliability. Off-line analysis of videotapes is a viable alternative to real-time data collection. Acceptable interrater reliability requires the use of strict task definitions, sophisticated software, and rigorous observer training. New techniques must be developed to more accurately capture multitasking. Substantial effort is required to conduct task analyses that will have sufficient reliability for purposes of research or clinical evaluation.

