ability estimate
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 1)
H-INDEX: 4 (five years: 0)

Author(s): Charlotta Nilsen, Monica E. Nelson, Ross Andel, Michael Crowe, Deborah Finkel, et al.

Abstract
Objectives: We examined associations between job strain and trajectories of change in cognitive functioning (general cognitive ability plus verbal, spatial, memory, and speed domains) before and after retirement.
Method: Data on indicators of job strain, retirement age, and cognitive factors were available from 307 members of the Swedish Adoption/Twin Study of Aging (SATSA). Participants were followed for up to 27 years (mean=15.4, SD=8.5).
Results: In growth curve analyses controlling for age, sex, education, depressive symptoms, cardiovascular health, and twinness, greater job strain was associated with worse memory (Estimate=-1.22, p=.007), speed (Estimate=-1.11, p=.012), spatial ability (Estimate=-0.96, p=.043), and general cognitive ability (Estimate=-1.33, p=.002) at retirement. Greater job strain was also associated with less improvement in general cognitive ability before retirement and a somewhat slower decline after retirement. Sex-stratified analyses showed that the smaller gains in general cognitive ability before retirement (Estimate=-1.09, p=.005) were observed only in women. Domain-specific analyses revealed that greater job strain was associated with less improvement in spatial (Estimate=-1.35, p=.010) and verbal (Estimate=-0.64, p=.047) ability before retirement in women, and with a slower decline in memory after retirement in women (Estimate=0.85, p=.008) and men (Estimate=1.12, p=.013). Neither pre-retirement nor post-retirement speed was affected by job strain.
Discussion: Greater job strain may have a negative influence on overall cognitive functioning prior to and at retirement, while interrupting exposure to job strain (post-retirement) may slow the rate of cognitive aging. Reducing the level of stress at work should be seen as a potential target for intervention to improve cognitive aging outcomes.
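
Trajectories before and after retirement of the kind reported here are typically modeled as a piecewise-linear growth curve with a knot at retirement. The sketch below is not from the paper; it is a minimal illustration in which the intercepts and slopes are hypothetical values chosen only to echo the reported pattern (lower level at retirement, flatter pre-retirement gains, slightly slower post-retirement decline under high strain).

```python
def predicted_score(years_from_retirement, intercept, pre_slope, post_slope):
    """Piecewise-linear growth curve with a knot at retirement (time 0):
    one slope before retirement, another after."""
    t = years_from_retirement
    return intercept + pre_slope * min(t, 0.0) + post_slope * max(t, 0.0)

# Hypothetical parameter values echoing the reported pattern: higher job
# strain lowers the level at retirement (intercept), flattens pre-retirement
# improvement, and slightly slows post-retirement decline.
low_strain = dict(intercept=0.0, pre_slope=0.30, post_slope=-0.50)
high_strain = dict(intercept=-1.2, pre_slope=0.15, post_slope=-0.35)

# Gap between the high- and low-strain trajectories at two time points.
gap_at_retirement = predicted_score(0, **high_strain) - predicted_score(0, **low_strain)
gap_10yr_pre = predicted_score(-10, **high_strain) - predicted_score(-10, **low_strain)
```

Under these invented numbers, the high-strain deficit is largest at retirement itself, while the flatter pre-retirement slope means the groups were closer together a decade earlier.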


2020, Vol. 63(1), pp. 163-172
Author(s): William D. Hula, Gerasimos Fergadiotis, Alexander M. Swiderski, JoAnn P. Silkes, Stacey Kellough

Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent sample of persons with aphasia.
Method: Two alternate CAT short forms of the PNT were administered to a sample of 25 persons with aphasia who were at least 6 months postonset and received no treatment for 2 weeks before or during the study. The 1st session included administration of a 30-item PNT-CAT, and the 2nd session, conducted approximately 2 weeks later, included a variable-length PNT-CAT that excluded items administered in the 1st session and terminated when the modeled precision of the ability estimate was equal to or greater than the value obtained in the 1st session. The ability estimates were analyzed in a Bayesian framework.
Results: The 2 test versions correlated highly (r = .89) and obtained means and standard deviations that were not credibly different from one another. The correlation and error variance between the 2 test versions were well predicted by the IRT measurement model.
Discussion: The results suggest that IRT-based CAT alternate forms may be productively used in the assessment of anomia. IRT methods offer advantages for the efficient and sensitive measurement of change over time. Future work should consider the potential impact of differential item functioning due to person factors and intervention-specific effects, as well as expanding the item bank to maximize the clinical utility of the test.
Supplemental Material: https://doi.org/10.23641/asha.11368040
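
The variable-length stopping rule described here (administer items until the modeled precision of the ability estimate reaches a target) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-parameter logistic (2PL) model, the EAP estimator, and all item parameters are assumptions chosen for the sketch.

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of one 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def eap_estimate(responses, items):
    """EAP ability estimate and posterior SD over a theta grid,
    with a standard-normal prior."""
    grid = [g / 10.0 for g in range(-40, 41)]
    post = []
    for th in grid:
        lik = math.exp(-0.5 * th * th)  # N(0, 1) prior density (unnormalized)
        for (a, b), u in zip(items, responses):
            p = p_2pl(th, a, b)
            lik *= p if u else (1.0 - p)
        post.append(lik)
    total = sum(post)
    post = [w / total for w in post]
    mean = sum(th * w for th, w in zip(grid, post))
    var = sum((th - mean) ** 2 * w for th, w in zip(grid, post))
    return mean, math.sqrt(var)

def variable_length_cat(bank, answer_fn, se_target, max_items=30):
    """Administer items until the ability estimate's SE meets se_target.
    Each step picks the unused item with maximum information at the
    current estimate."""
    used, responses, theta, se = [], [], 0.0, float("inf")
    available = list(bank)
    while available and len(used) < max_items and se > se_target:
        nxt = max(available, key=lambda it: item_info(theta, *it))
        available.remove(nxt)
        used.append(nxt)
        responses.append(answer_fn(nxt))
        theta, se = eap_estimate(responses, used)
    return theta, se, len(used)

# Simulated examinee with true ability 0.5 answering items from a
# hypothetical 60-item bank of (discrimination, difficulty) pairs.
random.seed(1)
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.5, 2.5)) for _ in range(60)]
true_theta = 0.5
answer = lambda it: random.random() < p_2pl(true_theta, *it)
theta_hat, se_hat, n_items = variable_length_cat(bank, answer, se_target=0.35)
```

The study's second session additionally excluded first-session items, which here would amount to removing those items from `bank` before the second run.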


2020, Vol. 10(1), pp. 35-63
Author(s): T.I. Logvinenko, O.I. Talantseva, E.M. Volokhova, S. Khalaf, E.L. Grigorenko

The lack of valid and standardized instruments for assessing the language domain in adolescents and adults in Russia makes their development an urgent necessity. To fill this gap, the language battery ARFA-RUS was created and applied in a large project investigating the long-term consequences of rearing in institutional care settings on human development. In the current study, an item response theory (IRT) approach was used to examine the psychometric properties of the Synonyms Subtest of ARFA-RUS as the first step in validating the battery. IRT results demonstrated that the test is reliable for low-to-moderate levels of the assessed ability; yet, to capture a wider ability range, more difficult items are needed. The ARFA-RUS Synonyms Subtest was less suitable for the post-institutionalized group of adults; in this group, the latent ability estimate explained a lower percentage of variance in comparison to adults raised in biological families. With regard to item-specific analyses, two items demonstrated paradoxical patterns, with a decreased probability of a correct response at increased ability. In addition, one item was eliminated from the final version of the Synonyms Subtest due to its poor item fit and low discrimination value.


2019, Vol. 3 (Supplement 1), pp. S889-S890
Author(s): Nicole Armstrong, Sarah Tom, Miguel Arce Renteria, Kaitlin Casaletto, Jennifer Weuve, et al.

Abstract
Engagement in leisure activities (i.e., intellectual, social, and physical activities) may reduce the risk of incident dementia, yet little is known about the longitudinal, dynamic relationship between overall leisure activity engagement and cognition in older adulthood. Using data from a survey measure of 13 leisure activities (e.g., doing unpaid volunteer work and playing cards, games, or bingo) and a neuropsychological battery, collected concurrently over 14 years from 2,259 multi-ethnic participants (mean age 76.0 years) in the Washington Heights-Inwood Columbia Aging Project, we fit a parallel process latent growth curve model of trajectories of both leisure activity engagement and cognitive z-scores (global cognitive performance, language, memory, and visuospatial ability). Estimates were adjusted for baseline age, years of education, sex, race/ethnicity, recruitment year, occupation (unskilled, skilled, and housewife), and baseline income. More baseline activity engagement (range 0-13, higher indicating more engagement) was associated with higher baseline cognitive performance: global cognitive performance (estimate=0.129, standard error, SE=0.017, p<0.001), language (estimate=0.146, SE=0.020, p<0.001), memory (estimate=0.141, SE=0.025, p<0.001), and visuospatial ability (estimate=0.111, SE=0.020, p<0.001). Declines in leisure activity engagement were associated with declines in global cognitive performance (estimate=0.002, SE=0.000, p<0.001), language (estimate=0.002, SE=0.000, p<0.001), memory (estimate=0.002, SE=0.001, p<0.001), and visuospatial ability (estimate=0.001, SE=0.000, p=0.001). While both level and change in overall leisure activity engagement and cognitive performance were correlated, level of one did not predict change in the other. Similar relationships were found when examining leisure activity categories. This suggests a dynamic, bidirectional relationship between leisure activity engagement and cognitive performance among older adults.


2019, Vol. 79(6), pp. 1103-1132
Author(s): Pascal Jordan, Martin Spiess

Factor loadings and item discrimination parameters play a key role in scale construction. A multitude of heuristics regarding their interpretation are hardwired into practice (for example, neglecting low loadings and assigning items to exactly one scale). We challenge the common-sense interpretation of these parameters by providing counterexamples and general results which altogether cast doubt on our understanding of these parameters. In particular, we highlight the counterintuitive way in which the best prediction of a test taker's latent ability depends on the factor loadings. As a consequence, we emphasize that practitioners need to shift their focus from interpreting item discrimination parameters by their relative loading to an interpretation which incorporates the structure of the model-based latent ability estimate.
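
One way to see the counterintuitive dependence on loadings: in Bartlett factor scoring, the weight an item receives is not a simple function of its own loading. In the hypothetical two-factor example below (all numbers invented for illustration; not from the paper), an item that loads only on the second factor still receives a nonzero weight in the estimate of the first factor, because a cross-loading item ties the two factors' score equations together.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical two-factor loading matrix (rows = items). Item 3
# cross-loads; item 4 loads ONLY on factor 2.
L = [[0.8, 0.0],
     [0.7, 0.0],
     [0.3, 0.6],
     [0.0, 0.8]]
psi = [1.0 - sum(l * l for l in row) for row in L]  # uniquenesses

# Bartlett factor-score weights: W = (L' Psi^-1 L)^-1 L' Psi^-1  (2 x 4).
LtPinv = [[L[i][f] / psi[i] for i in range(4)] for f in range(2)]
M = matmul(LtPinv, L)            # 2 x 2
W = matmul(inv2(M), LtPinv)      # 2 x 4

# Weight of item 4 in the factor-1 score: nonzero (in fact negative),
# even though item 4's loading on factor 1 is exactly zero.
w_f1_item4 = W[0][3]

# Sanity check: Bartlett weights recover the factors exactly, W @ L = I.
check = matmul(W, L)
```

The "assign each item to exactly one scale and ignore low loadings" heuristic would give item 4 a weight of zero here, which is precisely the kind of intuition the paper challenges.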


Author(s): Tetsuo Kimura

Computer adaptive testing (CAT) is a form of tailored, computer-based testing that adapts to each test-taker's ability level. In this review, the impacts of CAT are discussed from different perspectives in order to illustrate crucial points to keep in mind during the development and implementation of CAT. Test developers and psychometricians often emphasize the efficiency and accuracy of CAT in comparison to traditional linear tests. However, many test-takers report feeling discouraged after taking CATs, and this feeling can reduce learning self-efficacy and motivation. A trade-off must therefore be made between the psychological experiences of test-takers and measurement efficiency. From the perspective of educators and subject matter experts, nonstatistical specifications such as content coverage, content balance, and form length are major concerns. Thus, accreditation bodies may be faced with a discrepancy between the perspectives of psychometricians and those of subject matter experts. In order to improve test-takers' impressions of CAT, the author proposes increasing the target probability of answering correctly in the item selection algorithm, even if doing so decreases measurement efficiency. Two different methods, CAT with a shadow-test approach and computerized multistage testing, have been developed in order to ensure the satisfaction of subject matter experts. In the shadow-test approach, a full-length test is assembled that meets the constraints and provides maximum information at the current ability estimate, while computerized multistage testing gives subject matter experts an opportunity to review all test forms prior to administration.
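
The proposal to raise the target success probability at the cost of information can be sketched under an assumed 2PL model with unit discriminations (all difficulty values hypothetical). Standard CAT picks the item most informative at the current estimate, where the predicted success probability is near .5; the friendlier rule picks the item whose predicted success probability is closest to a higher target, say .7.

```python
import math

def p_correct(theta, b, a=1.0):
    """2PL success probability with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, b, a=1.0):
    """Fisher information of the item at ability theta."""
    p = p_correct(theta, b, a)
    return a * a * p * (1.0 - p)

def select_max_info(theta, difficulties):
    """Standard CAT rule: maximum information at the current ability
    estimate (success probability near .5)."""
    return max(difficulties, key=lambda b: info(theta, b))

def select_target_p(theta, difficulties, target=0.7, a=1.0):
    """Easier-item rule sketched in the review: pick the item whose
    predicted success probability is closest to a target above .5."""
    return min(difficulties, key=lambda b: abs(p_correct(theta, b, a) - target))

theta_hat = 0.0
bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]  # hypothetical difficulties

b_info = select_max_info(theta_hat, bank)   # difficulty at theta: p = .5
b_easy = select_target_p(theta_hat, bank)   # easier item: p closer to .7

# The friendlier choice costs measurement efficiency:
efficiency_loss = 1.0 - info(theta_hat, b_easy) / info(theta_hat, b_info)
```

Here the target-probability rule selects an item about one logit easier than the maximum-information rule, giving up roughly a fifth of the information at that step, which is the trade-off between test-taker experience and efficiency the review describes.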


2014, Vol. 114(1), pp. 104-125
Author(s): Hung-Yu Huang

This study compares three methods of detecting differential item functioning (DIF), the equal mean difficulty (EMD), all-other-item (AOI), and constant item (CI) methods, in terms of estimation bias and rank order change of ability estimates using a series of simulations and two empirical examples. The CI method generated accurate DIF parameter estimates, whereas the EMD and AOI methods produced biased estimates. Moreover, as the percentage of DIF items in a test increased, the superiority of the CI method over the EMD and AOI methods became more apparent. The superiority of the CI method is independent of the sample size, test length, and item type (dichotomous or polytomous). Two empirical examples, a mathematics test and a hostility questionnaire, demonstrated that these three methods yielded inconsistent DIF detections and produced different ability estimate rankings.
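
The three methods differ in how they link the two groups' difficulty scales before computing DIF. A minimal sketch with invented Rasch-style difficulties (shift linking only; not the study's estimation procedure) shows why the CI method stays accurate while EMD is biased when a substantial share of items have DIF: EMD absorbs part of the true DIF into the linking shift and smears it across the DIF-free items.

```python
def dif_constant_item(b_ref, b_foc, anchor):
    """Constant-item (CI) linking: align scales so a known DIF-free
    anchor item has the same difficulty in both groups, then
    DIF = linked focal difficulty minus reference difficulty."""
    shift = b_foc[anchor] - b_ref[anchor]
    return [(bf - shift) - br for br, bf in zip(b_ref, b_foc)]

def dif_equal_mean(b_ref, b_foc):
    """Equal-mean-difficulty (EMD) linking: align scales so mean
    difficulty matches across groups; biased when many items have DIF."""
    shift = sum(b_foc) / len(b_foc) - sum(b_ref) / len(b_ref)
    return [(bf - shift) - br for br, bf in zip(b_ref, b_foc)]

# Hypothetical difficulties: item 0 is a DIF-free anchor; items 3-5 are
# 0.6 logits harder for the focal group (true DIF = 0.6 on half the test).
b_ref = [0.0, -1.0, 0.5, 1.0, -0.5, 0.2]
b_foc = [0.0, -1.0, 0.5, 1.6, 0.1, 0.8]

dif_ci = dif_constant_item(b_ref, b_foc, anchor=0)
dif_emd = dif_equal_mean(b_ref, b_foc)
```

The CI estimates recover 0 for the DIF-free items and 0.6 for the DIF items, whereas EMD reports spurious DIF of -0.3 on the clean items and only 0.3 on the truly biased ones; the more items carry DIF, the larger this distortion grows, matching the simulation finding above.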

