Scalable Bayesian inference for high-dimensional neural receptive fields

2017 ◽  
Author(s):  
Mikio C. Aoi ◽  
Jonathan W. Pillow

Abstract
We examine the problem of rapidly and efficiently estimating a neuron's linear receptive field (RF) from responses to high-dimensional stimuli. This problem poses important statistical and computational challenges. Statistical challenges arise from the need for strong regularization when using correlated stimuli in high-dimensional parameter spaces, while computational challenges arise from the extensive time and memory costs of evidence optimization and inference in high-dimensional settings. Here we focus on novel methods for scaling up automatic smoothness determination (ASD), an empirical Bayesian method for RF estimation, to high-dimensional settings. First, we show that using a zero-padded Fourier domain representation and a "coarse-to-fine" evidence-optimization strategy gives substantial improvements in speed and memory while maintaining exact numerical accuracy. We then introduce a suite of scalable approximate methods that exploit Kronecker and Toeplitz structure in the stimulus autocovariance, which can be related to the method of expected log-likelihoods [1]. Applied together, these methods reduce the cost of estimating an RF with tensor order D and d coefficients per tensor dimension from O(d^(3D)) time and O(d^(2D)) space for standard ASD to O(Dd log d) time and O(Dd) space. We show that evidence optimization for a linear RF with 160K coefficients using 5K samples of data can be carried out on a laptop in under 2 s.
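The O(Dd log d) scaling rests on a standard building block that the abstract's Toeplitz structure makes available: a symmetric Toeplitz matrix admits O(d log d) matrix-vector products by embedding it in a circulant matrix and applying the FFT. The following is a minimal sketch of that one building block (illustrative only, assuming a symmetric Toeplitz autocovariance; it is not the authors' ASD implementation):

```python
import numpy as np

def toeplitz_matvec(c, x):
    """Multiply a symmetric Toeplitz matrix (first column c) by x
    in O(d log d), via circulant embedding and the FFT."""
    d = len(c)
    # Embed the d x d Toeplitz matrix in a 2d x 2d circulant matrix:
    # first column [c_0, ..., c_{d-1}, 0, c_{d-1}, ..., c_1].
    circ = np.concatenate([c, [0.0], c[-1:0:-1]])
    # Circulant matvec = circular convolution = pointwise product in Fourier space.
    x_pad = np.concatenate([x, np.zeros(d)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x_pad)).real
    return y[:d]  # first d entries recover the Toeplitz product
```

The same idea extends dimension-by-dimension under the Kronecker structure mentioned in the abstract, which is what brings the per-dimension factor down from cubic to near-linear.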

Author(s):  
Jyun-You Chiang ◽  
Y. L. Lio ◽  
Tzong-Ru Tsai

To reach an optimal acceptance sampling decision for products whose lifetimes follow a Burr type XII distribution, sampling plans are developed with a rebate warranty policy based on truncated censored data. The smallest sample size and acceptance number are determined to minimize the expected total cost, which consists of the test cost, the experimental time cost, the cost of lot acceptance or rejection, and the warranty cost. A new method, which combines a simple empirical Bayesian method with the genetic algorithm (GA) method, named the EB-GA method, is proposed to estimate the unknown distribution parameter and hyper-parameters. The parameters of the GA are determined using an optimal Taguchi design procedure to reduce the subjectivity of parameter determination. An algorithm is presented to implement the EB-GA method, and its application is illustrated by an example. Monte Carlo simulation results show that the EB-GA method works well for parameter estimation in terms of small bias and mean square error.
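The GA component of the approach described above can be illustrated with a generic real-coded genetic algorithm. This is only a sketch of the technique, not the authors' EB-GA: the population size, mutation rate, and mutation scale below are arbitrary illustrative choices, not the Taguchi-tuned values the abstract refers to.

```python
import numpy as np

def genetic_algorithm(cost, bounds, pop_size=40, generations=100,
                      mutation_rate=0.1, rng=None):
    """Minimize `cost` over box `bounds` with a simple real-coded GA:
    truncation selection, uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        costs = np.array([cost(p) for p in pop])
        elite = pop[np.argsort(costs)[:pop_size // 2]]   # keep best half
        # Uniform crossover between random pairs of elite parents.
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        # Gaussian mutation on a random subset of genes, clipped to bounds.
        mutate = rng.random((pop_size, dim)) < mutation_rate
        children = children + mutate * rng.normal(0.0, 0.1 * (hi - lo),
                                                  (pop_size, dim))
        pop = np.clip(children, lo, hi)
    costs = np.array([cost(p) for p in pop])
    best = int(np.argmin(costs))
    return pop[best], costs[best]
```

In the EB-GA setting, `cost` would be the empirical-Bayes objective for the distribution parameter and hyper-parameters; here any smooth test function works the same way.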


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Jiuwen Cao ◽  
Zhiping Lin

Extreme learning machine (ELM) has been developed for single hidden layer feedforward neural networks (SLFNs). In the ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process; the output connections are then tuned by minimizing the cost function through a linear system. The computational burden of ELM is thus significantly reduced, as the only cost is solving a linear system. This low computational complexity has attracted a great deal of attention from the research community, especially for high-dimensional and large-data applications. This paper provides an up-to-date survey of recent developments in ELM and its applications to high-dimensional and large data. Comprehensive reviews of image processing, video processing, medical signal processing, and other popular large-data applications with ELM are presented in the paper.
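The training recipe described above — random, fixed input-to-hidden weights plus a single least-squares solve for the output weights — fits in a few lines. A minimal sketch (illustrative, not any particular ELM library; the hidden-layer size and the tanh activation are arbitrary choices):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, rng=None):
    """ELM training: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never tuned
    b = rng.normal(size=n_hidden)                 # random hidden biases, never tuned
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # the only "learning": one linear solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the hidden layer is fixed, the entire training cost is the least-squares solve, which is what makes the method attractive for the large-data settings the survey covers.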


2013 ◽  
Vol 7 (1) ◽  
pp. 53 ◽  
Author(s):  
Cihan Oguz ◽  
Teeraphan Laomettachit ◽  
Katherine C Chen ◽  
Layne T Watson ◽  
William T Baumann ◽  
...  

2018 ◽  
Vol 346 (7) ◽  
pp. 524-531 ◽  
Author(s):  
Charles Paillet ◽  
David Néron ◽  
Pierre Ladevèze

2021 ◽  
Author(s):  
Kevin J. Wischnewski ◽  
Simon B. Eickhoff ◽  
Viktor K. Jirsa ◽  
Oleksandr V. Popovych

Abstract
Simulating the resting-state brain dynamics via mathematical whole-brain models requires an optimal selection of parameters, which determine the model's capability to replicate empirical data. Since parameter optimization via a grid search (GS) becomes infeasible for high-dimensional models, we evaluate several alternative approaches to maximize the correspondence between simulated and empirical functional connectivity. A dense GS serves as a benchmark to assess the performance of four optimization schemes: the Nelder-Mead Algorithm (NMA), Particle Swarm Optimization (PSO), Covariance Matrix Adaptation Evolution Strategy (CMAES) and Bayesian Optimization (BO). To compare them, we employ an ensemble of coupled phase oscillators built upon individual empirical structural connectivity of 105 healthy subjects. We determine optimal model parameters from two- and three-dimensional parameter spaces and show that the overall fitting quality of the tested methods can compete with the GS. There are, however, marked differences in the required computational resources and stability properties, which we also investigate before proposing CMAES and BO as efficient alternatives to a high-dimensional GS. For the three-dimensional case, these methods generated similar results as the GS, but within less than 6% of the computation time. Our results contribute to an efficient validation of models for personalized simulations of brain dynamics.
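The fitting loop the abstract describes — tune model parameters to maximize the Pearson correlation between simulated and empirical functional connectivity (FC) — can be illustrated with one of the tested optimizers, SciPy's Nelder-Mead. The model below (`sim_fc`) is a made-up toy stand-in, not the coupled phase-oscillator model used in the study; only the loss structure (1 minus the FC correlation) mirrors the setup:

```python
import numpy as np
from scipy.optimize import minimize

def sim_fc(params, sc):
    """Toy stand-in for a whole-brain model: 'functional connectivity'
    as a nonlinear transform of structural connectivity (illustrative only)."""
    g, s = params  # e.g. a global coupling and a shift parameter
    return np.tanh(g * sc + s)

rng = np.random.default_rng(0)
sc = rng.random((10, 10))
sc = (sc + sc.T) / 2                      # symmetric toy structural connectivity
# "Empirical" FC: the toy model at known parameters, plus observation noise.
fc_emp = sim_fc([1.5, -0.2], sc) + 0.01 * rng.normal(size=sc.shape)

def loss(params):
    # Goodness of fit: 1 - Pearson correlation between simulated and empirical FC.
    fc = sim_fc(params, sc)
    return 1.0 - np.corrcoef(fc.ravel(), fc_emp.ravel())[0, 1]

res = minimize(loss, x0=[0.5, 0.0], method="Nelder-Mead")
```

A derivative-free method like this needs only loss evaluations, which is why the paper can compare NMA, PSO, CMAES, and BO on the same simulated-vs-empirical FC objective; the methods differ mainly in how many such evaluations they spend, which drives the reported compute savings over a dense grid search.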

