A computationally efficient, high-dimensional multiple changepoint procedure with application to global terrorism incidence

Author(s):  
S. O. Tickle ◽  
I. A. Eckley ◽  
P. Fearnhead


2016 ◽
Vol 76 (4) ◽  
pp. 512-531 ◽  
Author(s):  
Xiaoguang Feng ◽  
Dermot Hayes

Purpose: Portfolio risk in crop insurance, arising from the systemic nature of crop yield losses, has inhibited the development of private crop insurance markets. Government subsidies or reinsurance have therefore been used to support crop insurance programs. The purpose of this paper is to investigate the possibility of converting systemic crop yield risk into “poolable” risk. Specifically, this study examines whether it is possible to remove the co-movement, as well as the tail dependence, of crop yield variables by enlarging the risk pool across different crops and countries.

Design/methodology/approach: Hierarchical Kendall copula (HKC) models are used to model potential non-linear correlations among the high-dimensional crop yield variables. A Bayesian estimation approach is applied to account for estimation risk in the copula parameters. A synthetic insurance portfolio is used to evaluate the systemic risk and the diversification effect.

Findings: The results indicate that the systemic nature of crop yield risks, both the positive correlation and the lower tail dependence, can be eliminated by combining crop insurance policies across crops and countries.

Originality/value: The study applies the HKC in the context of agricultural risks. Compared to other advanced copulas, the HKC achieves both flexibility and parsimony: its flexibility makes it suitable for precisely representing the various correlation structures of crop yield risks, while its parsimony makes it computationally efficient for modeling high-dimensional correlation structures.
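The diversification logic can be seen in a toy simulation. The sketch below is not the authors' HKC model: it uses a plain Gaussian copula with made-up parameters, simply to show that per-policy tail risk of a pooled portfolio shrinks as cross-policy correlation falls.

```python
# Toy illustration (NOT the paper's hierarchical Kendall copula): under a
# simple Gaussian copula, pooling weakly correlated policies cuts the
# per-policy 99% VaR far more than pooling strongly correlated ones.
# All distributions and parameter values here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_policies, n_sims = 10, 100_000

def per_policy_var(rho, alpha=0.99):
    """99% VaR of the average loss across the pool for a given correlation."""
    corr = np.full((n_policies, n_policies), rho) + (1 - rho) * np.eye(n_policies)
    z = rng.multivariate_normal(np.zeros(n_policies), corr, size=n_sims)
    u = stats.norm.cdf(z)                 # Gaussian copula: uniform margins
    losses = stats.beta(2, 5).ppf(u)      # map to skewed loss marginals
    return np.quantile(losses.mean(axis=1), alpha)

print("single-region pool (rho=0.8):", round(per_policy_var(0.8), 3))
print("cross-country pool (rho=0.1):", round(per_policy_var(0.1), 3))
# The low-correlation pool has a much smaller per-policy 99% VaR, i.e. the
# systemic component has been largely diversified away.
```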


Fractals ◽  
2018 ◽  
Vol 26 (04) ◽  
pp. 1850084 ◽  
Author(s):  
Fajie Wang ◽ 
Wen Chen ◽ 
Chuanzeng Zhang ◽ 
Qingsong Hua

This study proposes a radial basis function (RBF) based on the Hausdorff fractal distance and applies it to develop the Kansa method for the solution of Hausdorff derivative Poisson equations. The Kansa method is a meshless global technique that is promising for high-dimensional problems on irregular domains. It is, however, known that the shape parameter of the RBFs can have a significant influence on the accuracy and robustness of the numerical solution. Based on the leave-one-out cross-validation algorithm proposed by Rippa, this study presents a new technique for choosing the optimal shape parameter of the RBFs with the Hausdorff fractal distance. Numerical experiments show that the Kansa method based on the Hausdorff fractal distance is highly accurate and computationally efficient for Hausdorff derivative Poisson equations.
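Rippa's shortcut computes all leave-one-out residuals from a single linear solve, which is what makes shape-parameter tuning cheap. Below is a minimal sketch of that rule with the ordinary Euclidean distance and a multiquadric kernel; the paper's contribution, the Hausdorff fractal distance, is not reproduced here.

```python
# A minimal sketch of Rippa's LOOCV rule for the RBF shape parameter
# (standard Euclidean version, not the paper's Hausdorff-distance variant).
import numpy as np

def rippa_loocv_error(centers, f, eps):
    """LOO error norm for a multiquadric RBF interpolant with shape eps.

    Rippa's identity: if A c = f, the i-th LOO residual is c_i / (A^{-1})_{ii},
    so no interpolant has to be refitted n times.
    """
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.sqrt(1.0 + (eps * r) ** 2)     # multiquadric kernel matrix
    A_inv = np.linalg.inv(A)
    e = (A_inv @ f) / np.diag(A_inv)      # all LOO residuals at once
    return np.linalg.norm(e)

# Toy 1-D usage: pick eps on a small grid for f(x) = sin(2*pi*x).
x = np.linspace(0.0, 1.0, 25)[:, None]
f = np.sin(2 * np.pi * x[:, 0])
grid = np.logspace(-1, 1, 30)
best = min(grid, key=lambda e: rippa_loocv_error(x, f, e))
print("LOOCV-optimal shape parameter:", round(best, 3))
```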


Author(s):  
Songhui Zhu ◽  
Pei Yu ◽  
Stacey Jones

Normal form theory is a powerful tool in the study of nonlinear systems, in particular for complex dynamical behaviors such as stability and bifurcations. However, it has not been widely used in practice due to the lack of efficient computation methods, especially for high-dimensional engineering problems. The main difficulty in applying normal form theory is determining the critical conditions under which the dynamical system undergoes a bifurcation. In this paper, a computationally efficient method is presented for determining the critical condition of Hopf bifurcation by calculating the Jacobian matrix and applying the Hurwitz criterion. The method combines numerical and symbolic computation schemes and can be applied to high-dimensional systems. The Lorenz system and the extended Malkus-Robbins dynamo system are used to demonstrate the applicability of the method.
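To make the Jacobian-plus-Hurwitz recipe concrete, here is a small symbolic sketch (an illustration in SymPy, not the authors' implementation) that recovers the classical Hopf condition for the Lorenz system, one of the paper's two examples: for a cubic characteristic polynomial λ³ + a₁λ² + a₂λ + a₃, the Hopf boundary is a₁a₂ = a₃ with a₁, a₂ > 0.

```python
# Minimal sketch: locate the Hopf bifurcation of the Lorenz system by
# forming the Jacobian at the non-trivial equilibrium and imposing the
# Hurwitz boundary a1*a2 = a3 on the characteristic polynomial.
import sympy as sp

x, y, z, sigma, r, b, lam = sp.symbols('x y z sigma r b lambda', positive=True)

# Lorenz vector field
f = sp.Matrix([sigma * (y - x), r * x - y - x * z, x * y - b * z])

# Non-trivial equilibrium C+ = (sqrt(b(r-1)), sqrt(b(r-1)), r-1)
eq = {x: sp.sqrt(b * (r - 1)), y: sp.sqrt(b * (r - 1)), z: r - 1}

# Jacobian at the equilibrium and its characteristic polynomial
J = f.jacobian([x, y, z]).subs(eq)
cp = J.charpoly(lam)                      # lam**3 + a1*lam**2 + a2*lam + a3
a1, a2, a3 = [sp.simplify(c) for c in cp.all_coeffs()[1:]]

# Hurwitz boundary for a purely imaginary eigenvalue pair: a1*a2 = a3
r_hopf = sp.solve(sp.Eq(a1 * a2, a3), r)[0]
print(sp.simplify(r_hopf))                # equals sigma*(sigma+b+3)/(sigma-b-1)
print(r_hopf.subs({sigma: 10, b: sp.Rational(8, 3)}))   # 470/19, approx 24.74
```

The printed critical value 470/19 is the classical Hopf point of the Lorenz system for σ = 10, b = 8/3.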


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 543 ◽  
Author(s):  
Konrad Furmańczyk ◽  
Wojciech Rejchel

In this paper, we consider prediction and variable selection in misspecified binary classification models under the high-dimensional scenario. We focus on two approaches to classification that are computationally efficient but lead to model misspecification. The first is to apply penalized logistic regression to classification data that possibly do not follow the logistic model. The second method is even more radical: we simply treat the class labels of objects as if they were numbers and apply penalized linear regression. We investigate these two approaches thoroughly and provide conditions that guarantee their success in prediction and variable selection. Our results hold even if the number of predictors is much larger than the sample size. The paper concludes with experimental results.
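Both approaches are one-liners with standard tooling, which is the practical appeal. The sketch below contrasts them on synthetic data whose labels are generated by a non-logistic mechanism; the data-generating process and penalty levels are arbitrary illustrative choices, not the paper's experiments.

```python
# Minimal sketch of the two computationally efficient (but possibly
# misspecified) classifiers discussed above: L1-penalized logistic
# regression, and the Lasso applied to 0/1 labels as if they were numbers.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(1)
n, p, s = 200, 1000, 5                    # n << p: high-dimensional setting
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                            # only the first s predictors matter
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)  # NOT a logistic model

logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)        # regress the labels themselves

print("logistic selects:", np.flatnonzero(logit.coef_[0]))
print("lasso selects:   ", np.flatnonzero(lasso.coef_))
# Both typically recover the s relevant predictors despite misspecification.
```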


2020 ◽  
Author(s):  
Saber Meamardoost ◽  
Mahasweta Bhattacharya ◽  
EunJung Hwang ◽  
Takaki Komiyama ◽  
Claudia Mewes ◽  
...  

The inference of the neuronal connectome from large-scale neuronal activity recordings, such as two-photon calcium imaging, is an active area of research in computational neuroscience. In this work, we developed FARCI (Fast and Robust Connectome Inference), a MATLAB package for neuronal connectome inference from high-dimensional two-photon calcium fluorescence data. We employ partial correlations as a measure of the strength of functional association between pairs of neurons to reconstruct the neuronal connectome. Using gold-standard datasets from the Neural Connectomics Challenge (NCC), we demonstrate that FARCI provides an accurate connectome and that its performance is robust to network size, missing neurons, and noise levels. Moreover, FARCI is computationally efficient and scales well to large networks. In comparison to the best-performing algorithm in the NCC, FARCI produces more accurate networks across different network sizes and subsampling levels, while being over two orders of magnitude faster.
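The core scoring step, partial correlation between neuron pairs, follows from the precision (inverse covariance) matrix of the activity traces. The sketch below shows only that standard identity; FARCI itself is a MATLAB package with additional preprocessing (e.g. deconvolution and thresholding) not reproduced here.

```python
# Minimal sketch of partial-correlation scoring for connectome inference:
# invert the covariance of the traces and normalize the off-diagonals.
import numpy as np

def partial_correlation(activity):
    """activity: (n_timepoints, n_neurons) matrix of activity traces."""
    precision = np.linalg.pinv(np.cov(activity, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)   # standard precision-to-pcorr identity
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy usage: a weighted adjacency estimate from random traces.
rng = np.random.default_rng(2)
traces = rng.standard_normal((5000, 50))
adjacency = partial_correlation(traces)
print(adjacency.shape)                    # (50, 50) connectome estimate
```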


2021 ◽  
Author(s):  
Reetika Sarkar ◽  
Sithija Manage ◽  
Xiaoli Gao

Background: High-dimensional genomic data often exhibit strong correlations, which lead to instability and inconsistency in the estimates obtained from commonly used regularization approaches, including the Lasso, the MCP, and related methods.

Results: In this paper, we perform a comparative study of regularization approaches for variable selection under different correlation structures, and propose a two-stage procedure named rPGBS to address the issue of stable variable selection in various strong-correlation settings. The approach repeatedly runs a two-stage hierarchical procedure consisting of random pseudo-group clustering followed by bi-level variable selection.

Conclusions: Both simulation studies and high-dimensional genomic data analyses demonstrate the advantage of the proposed rPGBS method over the most commonly used regularization methods. In particular, rPGBS selects variables more stably across a variety of correlation settings than recent methods addressing variable selection under strong correlation. Moreover, rPGBS is computationally efficient across various settings.
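The repeat-and-aggregate structure is the stabilizing ingredient. The sketch below is a schematic paraphrase of that idea, not the authors' code: plain group-wise Lasso fits stand in for the paper's bi-level selector, and all tuning values are illustrative assumptions.

```python
# Schematic sketch of an rPGBS-like loop: repeatedly assign predictors to
# random pseudo-groups, select within each group, and keep variables whose
# selection frequency across repeats is high. (Paraphrase, not the paper's
# bi-level selection algorithm.)
import numpy as np
from sklearn.linear_model import Lasso

def rpgbs_like(X, y, n_groups=10, n_repeats=50, alpha=0.1, freq_cut=0.5, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    votes = np.zeros(p)
    for _ in range(n_repeats):
        groups = rng.integers(0, n_groups, size=p)   # random pseudo-grouping
        for g in range(n_groups):
            idx = np.flatnonzero(groups == g)
            if idx.size == 0:
                continue
            coef = Lasso(alpha=alpha).fit(X[:, idx], y).coef_
            votes[idx[coef != 0]] += 1               # within-group selection
    return np.flatnonzero(votes / n_repeats >= freq_cut)

# Toy usage with strongly correlated predictors.
rng = np.random.default_rng(3)
z = rng.standard_normal((150, 1))
X = 0.7 * z + 0.3 * rng.standard_normal((150, 100))  # shared latent factor
y = X[:, :3].sum(axis=1) + rng.standard_normal(150)
print("stably selected predictors:", rpgbs_like(X, y))
```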


2016 ◽  
Vol 37 (1) ◽  
Author(s):  
Gintautas Jakimauskas ◽  
Marijus Radavičius ◽  
Jurgis Sušinskas

A simple, data-driven, and computationally efficient procedure for testing the independence of high-dimensional random vectors is proposed. The procedure is based on interpreting goodness-of-fit testing as a classification problem, together with a special sequential partition procedure, elements of sequential testing, resampling, and randomization. Monte Carlo simulations are carried out to assess the performance of the procedure.
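The classification view of independence testing can be made concrete with a simple stand-in (not the authors' sequential partition procedure): train a classifier to distinguish genuine (X, Y) pairs from pairs with Y permuted; accuracy reliably above chance is evidence against independence, and the permutation supplies the randomization.

```python
# Minimal sketch of a classifier-based independence check via permutation.
# The classifier choice and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def independence_test_stat(X, Y, seed=0):
    rng = np.random.default_rng(seed)
    Y_perm = Y[rng.permutation(len(Y))]            # permutation breaks dependence
    Z = np.vstack([np.hstack([X, Y]), np.hstack([X, Y_perm])])
    labels = np.r_[np.ones(len(X)), np.zeros(len(X))]
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    return cross_val_score(clf, Z, labels, cv=5).mean()  # near 0.5 under H0

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 5))
Y_dep = X[:, :2] + 0.5 * rng.standard_normal((500, 2))   # dependent on X
Y_ind = rng.standard_normal((500, 2))                     # independent of X
print("dependent pair accuracy:  ", round(independence_test_stat(X, Y_dep), 3))
print("independent pair accuracy:", round(independence_test_stat(X, Y_ind), 3))
```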


Author(s):  
Futoshi Futami ◽  
Zhenghang Cui ◽  
Issei Sato ◽  
Masashi Sugiyama

In Bayesian inference, posterior distributions are difficult to obtain analytically for complex models such as neural networks. Variational inference usually uses a parametric distribution for the approximation, from which samples can easily be drawn. Recently, discrete approximation by particles has attracted attention because of its high expressive power. An example is Stein variational gradient descent (SVGD), which iteratively optimizes particles. Although SVGD has been shown empirically to be computationally efficient, its theoretical properties have not yet been clarified, and no finite-sample bound on the convergence rate is known. Another example is the Stein points (SP) method, which minimizes the kernelized Stein discrepancy directly. Although a finite-sample bound is assured theoretically, SP is computationally inefficient empirically, especially in high-dimensional problems. In this paper, we propose a novel method named maximum mean discrepancy minimization by the Frank-Wolfe algorithm (MMD-FW), which minimizes the MMD in a greedy way via the FW algorithm. Our method is empirically computationally efficient, and we show that its finite-sample convergence bound is of linear order in finite dimensions.
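Greedy MMD minimization by Frank-Wolfe can be sketched in a few lines in its classic special case, kernel herding (FW on the squared MMD with step size 1/(t+1)); the sketch below is that special case over a candidate pool, not the paper's exact algorithm, and all kernel and pool choices are illustrative.

```python
# Kernel-herding-style sketch of greedy MMD minimization with Frank-Wolfe
# steps: at each step, pick the candidate with the most negative gradient of
# the squared MMD (i.e. largest mu - K @ w) and take a 1/(t+1) simplex step.
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd_fw(target_sample, candidates, n_particles=20):
    """Greedily choose weighted particles matching the target's kernel mean."""
    mu = rbf_kernel(candidates, target_sample).mean(axis=1)  # mean embedding
    K = rbf_kernel(candidates, candidates)
    w = np.zeros(len(candidates))                            # particle weights
    chosen = []
    for t in range(n_particles):
        i = int(np.argmax(mu - K @ w))     # FW linear-minimization step
        chosen.append(i)
        step = 1.0 / (t + 1)
        w = (1 - step) * w
        w[i] += step
    return candidates[chosen], w

rng = np.random.default_rng(5)
target = rng.normal(0.0, 1.0, size=(2000, 2))   # sample from the target
pool = rng.uniform(-4, 4, size=(500, 2))        # candidate particle pool
particles, weights = mmd_fw(target, pool)
print(particles.shape)                          # compact weighted approximation
```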

