Development and Preliminary Validation of Standardized Regression-Based Change Scores as Measures of Transitional Cognitive Decline

2020 ◽  
Vol 35 (7) ◽  
pp. 1168-1181 ◽  
Author(s):  
Andrew M Kiselica ◽  
Alyssa N Kaser ◽  
Troy A Webber ◽  
Brent J Small ◽  
Jared F Benge

Abstract
Objective: An increasing focus in Alzheimer’s disease and aging research is to identify transitional cognitive decline. One means of indexing change over time in serial cognitive evaluations is to calculate standardized regression-based (SRB) change indices. This paper presents the development and preliminary validation of SRB indices for the Uniform Data Set 3.0 Neuropsychological Battery, as well as base rate data to aid in their interpretation.
Method: The sample included 1,341 cognitively intact older adults with serial assessments over 0.5–2 years in the National Alzheimer’s Coordinating Center Database. SRB change scores were calculated in one half of the sample and then validated in the other half. Base rates of SRB decline were evaluated at z-score cut-points corresponding to two-tailed p-values of .20 (z = −1.282), .10 (z = −1.645), and .05 (z = −1.96). We examined convergent associations of the SRB indices for the cognitive measures with each other, as well as concurrent associations of the SRB indices with Clinical Dementia Rating Sum of Boxes scores (CDR-SB).
Results: The SRB equations significantly predicted the selected cognitive variables. The base rate of at least one significant SRB decline across the entire battery ranged from 26.70% to 58.10%. SRB indices for the cognitive measures demonstrated the theoretically expected significant positive associations with each other. Additionally, CDR-SB impairment was associated with an increasing number of significantly declined test scores.
Conclusions: This paper provides preliminary validation of SRB indices in a large sample, and we present a user-friendly tool for calculating SRB values.
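The abstract does not reproduce the SRB equations themselves, but the general logic of an SRB change index is standard: regress the retest score on the baseline score in a normative sample, then express an individual's observed retest score as a z-score relative to the prediction. A minimal univariate sketch in Python (the published equations also include covariates such as age, education, and retest interval, which are omitted here):

```python
import numpy as np

def fit_srb(t1, t2):
    """Fit a simple SRB equation in a normative sample: predict the
    Time-2 score from the Time-1 score by least squares, returning the
    slope, intercept, and standard error of estimate (SEE)."""
    slope, intercept = np.polyfit(t1, t2, 1)
    residuals = t2 - (slope * t1 + intercept)
    see = residuals.std(ddof=2)  # two parameters were estimated
    return slope, intercept, see

def srb_z(t1_obs, t2_obs, slope, intercept, see):
    """SRB change index: observed minus predicted retest score in SEE
    units. z <= -1.645 corresponds to the two-tailed p = .10 cut-point
    used in the base-rate analyses above."""
    predicted = slope * t1_obs + intercept
    return (t2_obs - predicted) / see
```

This is an illustration of the technique, not the authors' calculator; in practice the regression is fit in one half of the normative sample and validated in the other, as described above.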

2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Christina N. Lessov-Schlaggar ◽  
Olga L. del Rosario ◽  
John C. Morris ◽  
Beau M. Ances ◽  
Bradley L. Schlaggar ◽  
...  

Abstract
Background: Adults with Down syndrome (DS) are at increased risk for Alzheimer disease dementia, and there is a pressing need for the development of assessment instruments that differentiate chronic cognitive impairment, acute neuropsychiatric symptomatology, and dementia in this population of patients.
Methods: We adapted a widely used instrument, the Clinical Dementia Rating (CDR) Scale, which is a component of the Uniform Data Set used by all federally funded Alzheimer Disease Centers, for use in adults with DS, and tested the instrument among 34 DS patients recruited from the community. The participants were assessed using two versions of the modified CDR: a caregiver questionnaire and an in-person interview involving both the caregiver and the DS adult. Assessment also included the Dementia Scale for Down Syndrome (DSDS) and the Raven’s Progressive Matrices to estimate IQ.
Results: Both the modified questionnaire and interview instruments captured a range of cognitive impairments, a majority of which were found to be chronic when accounting for premorbid function. Two individuals in the sample were strongly suspected to have early dementia, both of whom had elevated scores on the modified CDR instruments. Among individuals rated as having no dementia based on the DSDS, about half showed subthreshold impairments on the modified CDR instruments; there was substantial agreement between caregiver questionnaire screening and in-person interview of caregivers and DS adults.
Conclusions: The modified questionnaire and interview instruments capture a range of impairment in DS adults, including subthreshold symptomatology, and the instruments provide complementary information relevant to the ascertainment of dementia in DS. Decline was seen across all cognitive domains and was generally positively related to age and negatively related to IQ. Most importantly, adjusting instrument scores for chronic, premorbid impairment drastically shifted the distribution toward lower (no impairment) scores.


mSystems ◽  
2018 ◽  
Vol 3 (3) ◽  
Author(s):  
Gabriel A. Al-Ghalith ◽  
Benjamin Hillmann ◽  
Kaiwei Ang ◽  
Robin Shields-Cutler ◽  
Dan Knights

ABSTRACT Next-generation sequencing technology is of great importance for many biological disciplines; however, due to technical and biological limitations, the short DNA sequences produced by modern sequencers require numerous quality control (QC) measures to reduce errors, remove technical contaminants, or merge paired-end reads together into longer or higher-quality contigs. Many tools for each step exist, but choosing the appropriate methods and usage parameters can be challenging because the parameterization of each step depends on the particularities of the sequencing technology used, the type of samples being analyzed, and the stochasticity of the instrumentation and sample preparation. Furthermore, end users may not know all of the relevant information about how their data were generated, such as the expected overlap for paired-end sequences or the type of adaptors used, to make informed choices. This increasing complexity and nuance demand a pipeline that combines existing steps together in a user-friendly way and, when possible, learns reasonable quality parameters from the data automatically. We propose a user-friendly quality control pipeline called SHI7 (canonically pronounced “shizen”), which aims to simplify quality control of short-read data for the end user by predicting the presence and/or type of common sequencing adaptors, what quality scores to trim, whether the data set is shotgun or amplicon sequencing, whether reads are paired end or single end, and whether pairs are stitchable, including the expected amount of pair overlap. We hope that SHI7 will make it easier for all researchers, expert and novice alike, to follow reasonable practices for short-read data quality control.
IMPORTANCE Quality control of high-throughput DNA sequencing data is an important but sometimes laborious task requiring background knowledge of the sequencing protocol used (such as adaptor type, sequencing technology, insert size/stitchability, paired-endedness, etc.). Quality control protocols typically require applying this background knowledge to selecting and executing numerous quality control steps with the appropriate parameters, which is especially difficult when working with public data or data from collaborators who use different protocols. We have created a streamlined quality control pipeline intended to substantially simplify the process of DNA quality control from raw machine output files to actionable sequence data. In contrast to other methods, our proposed pipeline is easy to install and use and attempts to learn the necessary parameters from the data automatically with a single command.
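SHI7's internals are not described in the abstract; as a hedged illustration of one QC step it mentions (deciding what quality scores to trim), here is a minimal sliding-window 3'-end quality trimmer in Python. The threshold and window size are illustrative assumptions, not SHI7's learned parameters:

```python
def quality_trim(seq, quals, min_q=20, window=4):
    """Trim the 3' end of a read at the first window whose mean Phred
    quality drops below min_q. A common heuristic; SHI7 itself learns
    its trimming thresholds from the data rather than using fixed ones."""
    for i in range(len(quals) - window + 1):
        if sum(quals[i:i + window]) / window < min_q:
            return seq[:i], quals[:i]
    return seq, quals  # whole read passes QC
```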


Author(s):  
Tomas Nikolai ◽  
Filip Děchtěrenko ◽  
Beril Yaffe ◽  
Hana Georgi ◽  
Miloslav Kopecek ◽  
...  

2021 ◽  
Author(s):  
Magnus Dehli Vigeland ◽  
Thore Egeland

Abstract We address computational and statistical aspects of DNA-based identification of victims in the aftermath of disasters. Current methods and software for such identification typically consider each victim individually, leading to suboptimal power of identification and potential inconsistencies in the statistical summary of the evidence. We resolve these problems by performing joint identification of all victims, using the complete genetic data set. Individual identification probabilities, conditional on all available information, are derived from the joint solution in the form of posterior pairing probabilities. A closed formula is obtained for the a priori number of possible joint solutions to a given disaster victim identification (DVI) problem. This number increases quickly with the number of victims and missing persons, posing computational challenges for brute-force approaches. We address this complexity with a preparatory sequential step aiming to reduce the search space. The examples show that realistic cases are handled efficiently. User-friendly implementations of all methods are provided in the R package dvir, freely available on all platforms.
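The abstract's closed formula is not reproduced here, but one natural a priori count, under the simplifying assumption that each victim is either left unidentified or paired with a distinct missing person, is the number of partial injective maps from victims to missing persons. A Python sketch of that count (the paper's actual formula may incorporate further constraints, e.g. sex matching):

```python
from math import comb, factorial

def n_joint_solutions(n_victims, n_missing):
    """Count assignments in which some subset of k victims is paired,
    injectively, with k distinct missing persons and the rest remain
    unidentified: sum over k of C(V, k) * C(M, k) * k!.
    Illustrates how quickly the joint search space grows."""
    k_max = min(n_victims, n_missing)
    return sum(comb(n_victims, k) * comb(n_missing, k) * factorial(k)
               for k in range(k_max + 1))
```

Even for a handful of victims the count grows rapidly, which motivates the paper's preparatory step for pruning the search space before the joint analysis.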


2020 ◽  
Vol 636 ◽  
pp. A74 ◽  
Author(s):  
Trifon Trifonov ◽  
Lev Tal-Or ◽  
Mathias Zechmeister ◽  
Adrian Kaminski ◽  
Shay Zucker ◽  
...  

Context. The High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph has been mounted since 2003 at the ESO 3.6 m telescope in La Silla and provides state-of-the-art stellar radial velocity (RV) measurements with a precision down to ∼1 m s⁻¹. The spectra are extracted with a dedicated data-reduction software (DRS), and the RVs are computed by cross-correlating with a numerical mask.
Aims. This study has three main aims: (i) create easy access to the public HARPS RV data set; (ii) apply the new public SpEctrum Radial Velocity AnaLyser (SERVAL) pipeline to the spectra, and produce a more precise RV data set; (iii) determine whether the precision of the RVs can be further improved by correcting for small nightly systematic effects.
Methods. For each star observed with HARPS, we downloaded the publicly available spectra from the ESO archive and recomputed the RVs with SERVAL. This was based on fitting each observed spectrum with a high signal-to-noise ratio template created by coadding all the available spectra of that star. We then computed nightly zero-points (NZPs) by averaging the RVs of quiet stars.
Results. By analyzing the RVs of the most RV-quiet stars, whose RV scatter is < 5 m s⁻¹, we find that SERVAL RVs are on average more precise than DRS RVs by a few percent. By investigating the NZP time series, we find three significant systematic effects whose magnitude is independent of the software that is used to derive the RV: (i) stochastic variations with a magnitude of ∼1 m s⁻¹; (ii) long-term variations, with a magnitude of ∼1 m s⁻¹ and a typical timescale of a few weeks; and (iii) 20–30 NZPs that significantly deviate by a few m s⁻¹. In addition, we find small (≲1 m s⁻¹) but significant intra-night drifts in DRS RVs before the 2015 intervention, and in SERVAL RVs after it. We confirm that the fibre exchange in 2015 caused a discontinuous RV jump that strongly depends on the spectral type of the observed star: from ∼14 m s⁻¹ for late F-type stars to ∼−3 m s⁻¹ for M dwarfs. The combined effect of extracting the RVs with SERVAL and correcting them for the systematics we find is an improved average RV precision: an improvement of ∼5% for spectra taken before the 2015 intervention, and an improvement of ∼15% for spectra taken after it. To demonstrate the quality of the new RV data set, we present an updated orbital solution of the GJ 253 two-planet system.
Conclusions. Our NZP-corrected SERVAL RVs can be retrieved from a user-friendly public database. It provides more than 212 000 RVs for about 3000 stars along with much auxiliary information, such as the NZP corrections, various activity indices, and DRS-CCF products.
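The nightly zero-point idea described in the Methods can be sketched simply: center each quiet star's RVs on its own mean, then average the residuals night by night. A minimal, unweighted Python illustration (the published correction is weighted and more careful with outliers; all names below are illustrative):

```python
import numpy as np

def nightly_zero_points(nights, star_ids, rvs, quiet_stars):
    """Estimate per-night zero-points from RV-quiet stars: subtract each
    star's mean RV from its measurements, then average the residuals of
    quiet stars observed on each night. A simplified sketch of the NZP
    correction described above."""
    rvs = np.asarray(rvs, dtype=float)
    centered = rvs.copy()
    for s in set(star_ids):
        mask = np.array([sid == s for sid in star_ids])
        centered[mask] -= rvs[mask].mean()
    nzp = {}
    for n in set(nights):
        mask = np.array([night == n and sid in quiet_stars
                         for night, sid in zip(nights, star_ids)])
        if mask.any():
            nzp[n] = centered[mask].mean()
    return nzp  # subtract nzp[night] from every RV taken that night
```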


2017 ◽  
Vol 29 (10) ◽  
pp. 1735-1741 ◽  
Author(s):  
Fabiano Moulin de Moraes ◽  
Paulo Henrique Ferreira Bertolucci

ABSTRACT
Background: Assigning a diagnosis to a patient with dementia is important for the treatment of the patient, for caregivers, and for scientific research. Nowadays, dementia diagnostic criteria are based on clinical information regarding medical history, physical examination, neuropsychological tests, and supplementary exams, and are therefore subject to variability over time.
Methods: A retrospective observational study to evaluate variables related to the stability of clinical diagnoses of dementia syndromes over at least one year of follow-up. From a sample of 432 patients from a single university center, data were collected on sociodemographic aspects, Clinical Dementia Rating, physical examination, neuropsychological tests, and supplementary exams, including a depression screening scale.
Results: In this sample, 113 (26.6%) patients had their diagnosis changed, most often by adding a vascular component to the initial diagnosis or depression as a comorbidity or main disease. Our findings show that many factors influence diagnostic stability, including the presence of symmetric parkinsonism, an initial diagnosis of vascular dementia, the presence of diabetes and hypertension, long-term memory deficit on the neuropsychological evaluation, and normal neuroimaging. We discuss our findings in relation to previous findings in the literature.
Conclusion: Every step of the clinical diagnosis, including history, vascular comorbidities and depression, physical examination, the neuropsychological battery, and neuroimaging, was relevant to diagnostic accuracy.


2019 ◽  
Vol 35 (1) ◽  
pp. 75-89 ◽  
Author(s):  
Paulina V Devora ◽  
Samantha Beevers ◽  
Andrew M Kiselica ◽  
Jared F Benge

Abstract
Objective: The Uniform Data Set 3.0 (UDS 3.0) neuropsychological battery is a recently published battery intended for clinical research with older adult populations. While normative data for the core measures have been published, several additional discrepancy and derived scores can also be calculated. We present normative data for Trail Making Test (TMT) A & B discrepancy and ratio scores, semantic and phonemic fluency discrepancy scores, Craft Story percent retention score, Benson Figure percent retention score, difference between verbal and visual percent retention, and an error index.
Method: Cross-sectional data from 1,803 English-speaking, cognitively normal control participants were obtained from the NACC central data repository.
Results: Descriptive information for the derived indices is presented. Demographic variables, most commonly age, demonstrated small but significant associations with the measures. Regression values were used to create a normative calculator, made available in a downloadable supplement. Statistically abnormal values (i.e., raw scores corresponding to the 5th, 10th, 90th, and 95th percentiles) were calculated to assist in the practical application of normative findings to individual cases. Preliminary validity of the indices is demonstrated by a case study and group comparisons between a sample of individuals with Alzheimer's disease (N = 81) and dementia with Lewy bodies (DLB; N = 100).
Conclusions: Clinically useful normative data for these derived indices from the UDS 3.0 neuropsychological battery are presented to help researchers and clinicians interpret the scores while accounting for demographic factors. Preliminary validity data are presented as well, along with limitations and future directions.
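The exact scoring rules come from the UDS 3.0 manual and the authors' calculator; as a hedged sketch, the most common definitions of the derived scores named above (difference, ratio, and percent retention) look like this in Python:

```python
def trails_indices(tmt_a, tmt_b):
    """TMT derived scores: discrepancy (B - A, in seconds) and
    ratio (B / A). Both index set-shifting cost beyond simple speed."""
    return tmt_b - tmt_a, tmt_b / tmt_a

def percent_retention(immediate, delayed):
    """Delayed recall as a percentage of immediate recall, e.g. for
    the Craft Story or Benson Figure. Definitions here are the usual
    conventions, not necessarily identical to the authors' calculator."""
    return 100.0 * delayed / immediate
```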


2009 ◽  
Vol 67 (2b) ◽  
pp. 423-427 ◽  
Author(s):  
Gloria Maria Almeida Souza Tedrus ◽  
Lineu Corrêa Fonseca ◽  
Grace Helena Letro ◽  
Alexandre Souza Bossoni ◽  
Adriana Bastos Samara

The objective of this research was to assess the occurrence of cognitive impairment in 32 individuals (average age: 67.2 years) with Parkinson's disease (PD). Procedures: clinical-neurological assessment; the modified Hoehn and Yahr staging scale (HYS); the standard neuropsychological battery of CERAD (Consortium to Establish a Registry for Alzheimer's Disease); the Pfeffer questionnaire; and the Clinical Dementia Rating. A comparison was made with a control group (CG) of 26 individuals of similar age and educational level but without cognitive impairment. The PD patients showed inferior performance on the CERAD battery when compared to the CG. Three PD sub-groups were characterised according to cognition: no cognitive impairment, 15 cases; mild cognitive impairment, 10 cases; and dementia, 7 cases. There was a significant association between motor disability (HYS) and the occurrence of dementia. Dementia and mild cognitive impairment occur frequently in PD patients and should be investigated routinely.


Author(s):  
Naveen K. Bansal ◽  
Mehdi Maadooliat ◽  
Steven J. Schrodi

Abstract We consider a multiple hypotheses problem with directional alternatives in a decision-theoretic framework. We obtain an empirical Bayes rule subject to a constraint on the mixed directional false discovery rate (mdFDR ≤ α) under the semiparametric setting where the distribution of the test statistic is parametric but the prior distribution is nonparametric. We propose separate priors for the left-tail and right-tail alternatives, as may be required in many applications. The proposed Bayes rule is compared through simulation against rules proposed by Benjamini and Yekutieli and by Efron. We illustrate the proposed methodology on two sets of data from biological experiments: HIV-transfected cell-line mRNA expression data and a quantitative trait genome-wide SNP data set. We have developed a user-friendly web-based Shiny app for the proposed method, available at https://npseb.shinyapps.io/npseb/. The HIV and SNP data can be directly accessed, and the results presented in this paper can be reproduced there.
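The empirical Bayes rule itself is not specified in the abstract; as a point of reference, the classical Benjamini-Hochberg step-up procedure (the basis of the Benjamini-Yekutieli variant the authors compare against) can be sketched as follows. This is the standard FDR baseline, not the paper's proposed rule:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: reject the k smallest p-values,
    where k is the largest rank with p_(k) <= k * alpha / m.
    Returns a boolean rejection list in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

Directional procedures additionally attach a sign (left- or right-tail) to each rejection, and the mdFDR counts both false rejections and correct rejections with the wrong sign.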

