Stable between-subject statistical inference from unstable within-subject functional connectivity estimates

2018
Author(s):
Diego Vidaurre
Mark W. Woolrich
Anderson M. Winkler
Theodoros Karapanagiotidis
Jonathan Smallwood
...

Abstract
Spatial or temporal aspects of neural organisation are known to be important indices of how cognition is organised. However, measurements and estimations are often noisy, and many of the algorithms used are probabilistic; in combination, these factors have been argued to limit studies exploring the neural basis of specific aspects of cognition. Focusing on static and dynamic functional connectivity estimations, we propose to leverage this variability to improve statistical efficiency in relating these estimations to behaviour. To achieve this goal, we use a procedure based on permutation testing that provides a way of combining the results from many individual tests that refer to the same hypothesis. This is needed when testing a measure whose value is obtained from a noisy process that can be repeated multiple times; we refer to these repetitions as replications. Focusing on functional connectivity, this noisy process can be: (i) computational, e.g. when using an approximate inference algorithm for which different runs can produce different results, or (ii) observational, if we have the capacity to acquire data multiple times, and the different acquired data sets can be considered noisy examples of some underlying truth. In both cases, we are not interested in the individual replications but in the unobserved process generating each replication. In this note, we show how results can be combined instead of choosing just one of the estimated models. Using both simulations and real data, we show the benefits of this approach in practice.
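The combining idea can be sketched with a toy permutation test: every replication yields its own statistic for the same hypothesis, the statistics are pooled (here, averaged) rather than one run being selected, and the pooled statistic is compared against a null distribution built by permuting behaviour identically across all replications. The correlation statistic, sample sizes, and simulated effect below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 40, 10  # subjects, replications (sizes are arbitrary)

# Hypothetical data: each replication is a noisy estimate of a subject-level
# connectivity summary that relates to behaviour through a common truth.
truth = rng.normal(size=n)
y = truth + 0.5 * rng.normal(size=n)                 # behaviour
reps = truth[:, None] + rng.normal(size=(n, R))      # noisy replications

def combined_stat(y, reps):
    # Pool the per-replication statistics (absolute correlations) by
    # averaging: all replications address the same hypothesis.
    return np.mean([abs(np.corrcoef(y, reps[:, r])[0, 1])
                    for r in range(reps.shape[1])])

obs = combined_stat(y, reps)
# Permute behaviour identically for every replication to build the null.
null = np.array([combined_stat(rng.permutation(y), reps) for _ in range(999)])
p = (1 + np.sum(null >= obs)) / (1 + len(null))
print(p)
```

Selecting a single replication would leave the result at the mercy of that run's noise; pooling uses all runs without inflating the false positive rate, because the permutation scheme is applied to every replication at once.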

2021
Author(s):
Xin Xiong
Ivor Cribben

To estimate dynamic functional connectivity for functional magnetic resonance imaging (fMRI) data, two approaches have dominated: sliding window and change point methods. While computationally feasible, the sliding window approach has several limitations. In addition, the existing change point methods assume a Gaussian distribution for, and linear dependencies between, the fMRI time series. In this work, we introduce a new methodology called Vine Copula Change Point (VCCP) to estimate change points in the functional connectivity network structure between brain regions. It uses vine copulas, various state-of-the-art segmentation methods to identify multiple change points, and a likelihood ratio test or the stationary bootstrap for inference. The vine copulas allow for various forms of dependence between brain regions, including tail, symmetric and asymmetric dependence, which have not been explored before in the analysis of neuroimaging data. We apply VCCP to various simulation data sets and to two fMRI data sets: a reading task and an anxiety-inducing experiment. In particular, for the former data set, we illustrate the complexity of textual changes during the reading of Chapter 9 of Harry Potter and the Sorcerer's Stone and find that change points across subjects are related to changes in more than one type of textual attribute. Further, the graphs created by the vine copulas indicate the importance of working beyond Gaussianity and linear dependence. Finally, the R package vccp implementing the methodology from the paper is available from CRAN.
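The vine copula machinery itself is beyond a short example, but the segmentation idea behind change point search can be sketched on a toy bivariate series whose marginals barely change while the cross-dependence shifts. Here a Fisher-z contrast of correlations stands in for the copula likelihood ratio; that substitution, and all parameter values, are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bivariate series with a dependence change at t = 200: the marginal
# distributions stay similar while the cross-dependence jumps.
T = 400
x = rng.normal(size=T)
y = np.where(np.arange(T) < 200, 0.1, 0.9) * x + 0.3 * rng.normal(size=T)

def split_stat(x, y, t):
    # Fisher-z contrast of the correlation before vs after a candidate split.
    z = lambda r: np.arctanh(np.clip(r, -0.999, 0.999))
    r1 = np.corrcoef(x[:t], y[:t])[0, 1]
    r2 = np.corrcoef(x[t:], y[t:])[0, 1]
    se = np.sqrt(1.0 / (t - 3) + 1.0 / (len(x) - t - 3))
    return abs(z(r1) - z(r2)) / se

# First pass of binary segmentation: take the best-supported split point.
cp = max(range(50, T - 50), key=lambda t: split_stat(x, y, t))
print(cp)
```

A correlation-based contrast like this would miss purely tail or asymmetric dependence changes, which is exactly the gap the vine copula formulation is designed to close.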


2015
Vol 26 (4)
pp. 1867-1880
Author(s):
Ilmari Ahonen
Denis Larocque
Jaakko Nevalainen

Outlier detection covers the wide range of methods aiming at identifying observations that are considered unusual. Novelty detection, on the other hand, seeks observations among newly generated test data that are exceptional compared with previously observed training data. In many applications, the general existence of novelty is of more interest than identifying the individual novel observations. For instance, in high-throughput cancer treatment screening experiments, it is meaningful to test whether any new treatment effects are seen compared with existing compounds. Here, we present hypothesis tests for such global level novelty. The problem is approached through a set of very general assumptions, making it innovative in relation to the current literature. We introduce test statistics capable of detecting novelty. They operate on local neighborhoods and their null distribution is obtained by the permutation principle. We show that they are valid and able to find different types of novelty, e.g. location and scale alternatives. The performance of the methods is assessed with simulations and with applications to real data sets.
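A minimal sketch of this style of test: a statistic built from local neighbourhoods (here, mean k-nearest-neighbour distance from test points into the training set), with its null distribution obtained by permuting the train/test labels. The particular statistic, neighbourhood size, and simulated novelty are assumptions for illustration, not the paper's statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data, and test data containing a small location-shifted cluster.
train = rng.normal(size=(100, 2))
test = np.vstack([rng.normal(size=(15, 2)),
                  rng.normal(loc=4.0, size=(5, 2))])   # the novelty

def knn_stat(train, test, k=5):
    # Mean distance from each test point to its k nearest training points:
    # large values indicate test points outside the training support.
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, :k].mean()

obs = knn_stat(train, test)

# Permutation null: pool the data and reassign train/test labels at random.
pooled, n_tr = np.vstack([train, test]), len(train)
null = []
for _ in range(499):
    idx = rng.permutation(len(pooled))
    null.append(knn_stat(pooled[idx[:n_tr]], pooled[idx[n_tr:]]))
p = (1 + np.sum(np.array(null) >= obs)) / (1 + len(null))
print(p)
```

The test answers the global question only, whether any novelty is present, without having to label individual observations as novel.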


2020
Vol 9 (2)
pp. 492-501
Author(s):
S Balaswamy
R V. Vardhan
G Sameera

In a multivariate setup, classification techniques are significant for identifying the exact status of an individual/observer along with the accuracy of the test. One such classification technique is the Multivariate Receiver Operating Characteristic (MROC) curve. This technique is well known to explain the extent of correct classification, with the curve lying above the random classifier (guessing line) when it satisfies all of its properties, especially the property of an increasing likelihood ratio function. However, there are circumstances where the curve violates this property; such a curve is termed an improper curve. This paper demonstrates a methodology for detecting improperness of the MROC curve and ways of measuring it. The methodology is illustrated using real data sets.
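One common source of improperness, a large variance difference between the two groups, can be illustrated by locating where an empirical ROC curve dips below the chance diagonal. The scalar "improperness" summary below (maximum drop below the diagonal) is a hypothetical measure for illustration, not the paper's methodology, and the sketch is univariate rather than the multivariate MROC setting.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two score distributions with very different variances: a classic cause of
# an "improper" ROC curve that dips below the chance diagonal at one end.
healthy = rng.normal(0.0, 0.3, size=500)
diseased = rng.normal(0.2, 2.0, size=500)

def roc_points(neg, pos):
    # Empirical ROC: sweep a threshold over all observed scores.
    thr = np.sort(np.concatenate([neg, pos]))
    fpr = np.array([(neg > t).mean() for t in thr])
    tpr = np.array([(pos > t).mean() for t in thr])
    return fpr, tpr

fpr, tpr = roc_points(healthy, diseased)
# Hypothetical improperness summary: the largest drop below the diagonal.
below = np.clip(fpr - tpr, 0.0, None).max()
print(below)
```

A proper curve never crosses the diagonal, so `below` stays near zero; here the variance mismatch makes the likelihood ratio non-monotone and the summary is clearly positive.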


2020
Vol 30 (12)
pp. 6224-6237
Author(s):
Liqin Zhou
Zonglei Zhen
Jia Liu
Ke Zhou

Abstract
The attentional blink (AB) has been central in characterizing the limits of temporal attention and consciousness. The neural mechanism of the AB is still hotly debated. With a large sample size, we combined multiple behavioral tests, multimodal MRI measures, and transcranial magnetic stimulation to investigate the neural basis underlying individual differences in the AB. We found that AB magnitude correlated with the executive control functioning of working memory (WM) in behavior, which was fully mediated by T1 performance. Structural variations in the right temporoparietal junction (rTPJ) and its intrinsic functional connectivity with the left inferior frontal junction (lIFJ) accounted for the individual differences in the AB, which was moderated by the executive control of WM. Disrupting the function of the lIFJ attenuated the AB deficit. Our findings clarify the neural correlates of individual differences in the AB and elucidate its relationship with the consolidation-driven inhibitory control process.


2021
Author(s):
Jakob Raymaekers
Peter J. Rousseeuw

Abstract
Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
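The outlier sensitivity that motivates the robust estimator can be demonstrated with the standard maximum-likelihood Yeo–Johnson fit (here via `scipy.stats.yeojohnson`): appending a single extreme value pulls the estimated transformation parameter down, compressing the outlier inward at the expense of the central part of the data. This sketch shows the problem only, not the authors' robust solution, and the data are simulated assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Skewed but well-behaved data: standard maximum-likelihood Yeo-Johnson fit.
x = rng.lognormal(mean=0.0, sigma=0.6, size=500)
_, lam_clean = stats.yeojohnson(x)

# A single extreme outlier drags the estimated lambda downward, i.e. the ML
# fit compresses the outlier inward at the cost of normality in the center.
x_out = np.append(x, 1e4)
_, lam_out = stats.yeojohnson(x_out)

print(lam_clean, lam_out)
```

A robust estimator, by contrast, would leave the fitted lambda essentially unchanged by the single outlier and let the outlier remain visibly deviant after transformation.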


Entropy
2020
Vol 23 (1)
pp. 62
Author(s):
Zhengwei Liu
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We have also obtained the asymptotic property of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
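The base case that the extended operator generalises, an INAR(1) model with ordinary binomial thinning, and its conditional least squares fit can be sketched as follows. The parameter values are arbitrary, and the one-step CLS regression below is a simplification of the paper's two-step procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# INAR(1) with ordinary binomial thinning, the special case the extended
# operator generalises: X_t = alpha ∘ X_{t-1} + eps_t, eps_t ~ Poisson(lam),
# where alpha ∘ X counts "survivors" of X independent trials.
alpha, lam, T = 0.5, 2.0, 5000
x = np.zeros(T, dtype=int)
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning
    x[t] = survivors + rng.poisson(lam)

# Conditional least squares: E[X_t | X_{t-1}] = alpha * X_{t-1} + lam,
# so a least-squares regression of X_t on X_{t-1} recovers both parameters.
prev, curr = x[:-1], x[1:]
alpha_hat = np.cov(prev, curr, ddof=0)[0, 1] / np.var(prev)
lam_hat = curr.mean() - alpha_hat * prev.mean()
print(alpha_hat, lam_hat)
```

Binomial thinning forces the conditional variance of the survival part to equal alpha(1-alpha)X_{t-1}; the extended operator's second parameter relaxes exactly this constraint, which is what allows over- and underdispersion to be modeled.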


Econometrics
2021
Vol 9 (1)
pp. 10
Author(s):
Šárka Hudecová
Marie Hušková
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived under both the null hypotheses and alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and discussion.
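The ingredients, an empirical PGF, a semiparametric PGF under the null, an L2-type distance, and a parametric bootstrap, can be sketched in a plain univariate Poisson setting; the paper's setting is bivariate and autoregressive, so everything below is a deliberate simplification with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(6)

def pgf_emp(x, u):
    # Nonparametric estimate of the probability generating function at u.
    return np.mean(u[None, :] ** x[:, None], axis=0)

def stat(x, u):
    # L2-type distance between the empirical PGF and the Poisson PGF with
    # estimated mean (the semiparametric estimate under the null).
    lam = x.mean()
    return np.sum((pgf_emp(x, u) - np.exp(lam * (u - 1.0))) ** 2)

u = np.linspace(0.0, 1.0, 21)
# Overdispersed data (negative binomial), so the Poisson null is false.
x = rng.negative_binomial(2, 0.4, size=300)
obs = stat(x, u)

# Parametric bootstrap: resample from the fitted null model and recompute.
null = np.array([stat(rng.poisson(x.mean(), size=300), u) for _ in range(299)])
p = (1 + np.sum(null >= obs)) / (1 + len(null))
print(p)
```

Because the null PGF is re-estimated inside every bootstrap replicate, the bootstrap distribution accounts for the parameter estimation step, which is what makes the resulting p-value approximately valid.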


Information
2021
Vol 12 (5)
pp. 202
Author(s):
Louai Alarabi
Saleh Basalamah
Abdeltawab Hendawi
Mohammed Abdalla

The rapid spread of infectious diseases is a major public health problem. Recent developments in fighting these diseases have heightened the need for a contact tracing process. Contact tracing can be considered an ideal method for controlling the transmission of infectious diseases. The result of the contact tracing process is performing diagnostic tests, treatment or self-isolation for suspected cases, and then treatment for infected persons; this eventually limits the spread of disease. This paper proposes a technique named TraceAll that traces all contacts exposed to an infected patient and produces a list of these contacts to be considered potentially infected patients. Initially, it considers the infected patient as the querying user and starts to fetch the contacts exposed to him. Secondly, it obtains all the trajectories that belong to objects that moved near the querying user. Next, it investigates these trajectories, considering social distance and exposure period, to identify whether these objects have become infected or not. The experimental evaluation of the proposed technique with real data sets illustrates the effectiveness of this solution. Comparative analysis experiments confirm that TraceAll outperforms baseline methods by 40% regarding the efficiency of answering contact tracing queries.
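A toy version of the trajectory-investigation step might look as follows; the data layout, distance and exposure thresholds, and the function name are illustrative assumptions, not the TraceAll implementation.

```python
from collections import defaultdict

# Toy trajectories: object id -> list of (t, x, y) samples on a shared clock.
# "q" is the querying (infected) user; thresholds below are illustrative.
traj = {
    "q": [(t, t * 1.0, 0.0) for t in range(10)],
    "a": [(t, t * 1.0, 0.5) for t in range(10)],    # close the whole time
    "b": [(t, 50.0, 50.0) for t in range(10)],      # never nearby
    "c": [(t, t * 1.0 if t < 2 else 99.0, 0.0) for t in range(10)],  # brief
}

def trace_contacts(traj, query, dist=2.0, min_overlap=3):
    # Count timesteps each object spends within `dist` of the query user;
    # flag those whose exposure period reaches `min_overlap` steps.
    qpos = {t: (x, y) for t, x, y in traj[query]}
    exposure = defaultdict(int)
    for oid, samples in traj.items():
        if oid == query:
            continue
        for t, x, y in samples:
            if t in qpos:
                qx, qy = qpos[t]
                if ((x - qx) ** 2 + (y - qy) ** 2) ** 0.5 <= dist:
                    exposure[oid] += 1
    return sorted(oid for oid, n in exposure.items() if n >= min_overlap)

print(trace_contacts(traj, "q"))
```

Object "a" stays within range for all ten steps and is flagged, "b" is never nearby, and "c" is excluded because its two-step contact falls short of the exposure-period threshold.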

