Variational Bayesian Inference
Recently Published Documents


TOTAL DOCUMENTS: 154 (FIVE YEARS: 23)

H-INDEX: 16 (FIVE YEARS: 0)

Author(s):  
Yicheng Zhou ◽  
Zhenzhou Lu ◽  
Yan Shi ◽  
Changcong Zhou ◽  
Wanying Yun

In time-variant systems, random variables, stochastic processes, and the time parameter are regarded as inputs of the time-variant computational model. This results in an even more computationally expensive model, which makes time-variant reliability analysis a challenging task. This paper addresses the problem by presenting an active learning strategy using polynomial chaos expansion (PCE) in an augmented reliability space. We first propose a new algorithm that determines a sparse representation by applying a statistical threshold to identify the significant terms of the PCE model. This adaptive decision test is integrated into the variational Bayesian method, improving its accuracy and reducing convergence time. The proposed method uses a composite criterion to identify the most significant time instants and the associated training points with which to enrich the experimental design. Through simulations, we compare the performance of the proposed method with that of other existing time-variant reliability analysis methods.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Juan Zhao ◽  
Xia Bai ◽  
Tao Shan ◽  
Ran Tao

Compressed sensing can recover sparse signals from far fewer samples than the traditional Nyquist sampling theorem requires. Block sparse signals (BSS), whose nonzero coefficients occur in clusters, arise naturally in many practical scenarios, and exploiting this block structure can improve recovery performance. In this paper, we consider recovering arbitrary BSS within a sparse Bayesian learning framework by introducing a correlated Laplacian scale mixture (LSM) prior, which models the dependence between adjacent elements of the block sparse signal; a block sparse Bayesian learning algorithm is then derived via variational Bayesian inference. Moreover, we present a fast version of the proposed recovery algorithm that avoids matrix inversion and remains robust at low SNR. Experimental results on simulated data and ISAR imaging show that the proposed algorithms can efficiently reconstruct BSS and have good anti-noise ability in noisy environments.
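The benefit of exploiting block structure can be sketched with a much simpler stand-in for the paper's LSM-based variational algorithm: iterative shrinkage (ISTA) with a group soft-threshold, which shrinks entire blocks toward zero rather than individual coefficients. The signal sizes, block length, and regularization weight below are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: block-sparse recovery via group soft-thresholding
# (ISTA). The paper's correlated-LSM variational method is more elaborate;
# this only illustrates why cluster structure helps.
rng = np.random.default_rng(1)
n, m, block = 64, 32, 4
x_true = np.zeros(n)
x_true[8:12] = rng.standard_normal(4)     # one nonzero cluster
x_true[40:44] = rng.standard_normal(4)    # another nonzero cluster

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # compressed measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / spectral_norm(A)^2
lam, x = 0.01, np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)            # gradient step on the fit
    z = z.reshape(-1, block)
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    shrink = np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)
    x = (z * shrink).ravel()                    # shrink whole blocks at once

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", rel_err)
```

With only m = 32 measurements of an n = 64 signal, entry-wise thresholding would struggle, but the group penalty concentrates energy in the two true clusters.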


2021 ◽  
Vol 150 (4) ◽  
pp. A154-A154
Author(s):  
Yongsung Park ◽  
Florian Meyer ◽  
Peter Gerstoft

Automatica ◽  
2021 ◽  
Vol 132 ◽  
pp. 109827
Author(s):  
Xiaoxu Wang ◽  
Chaofeng Li ◽  
Tiancheng Li ◽  
Yan Liang ◽  
Zhengtao Ding ◽  
...  

Author(s):  
Xiaofeng Liu ◽  
Bo Hu ◽  
Linghao Jin ◽  
Xu Han ◽  
Fangxu Xing ◽  
...  

In this work, we propose a domain generalization (DG) approach that learns on several labeled source domains and transfers knowledge to a target domain that is inaccessible during training. Given the inherent conditional and label shifts, we would expect alignment of p(x|y) and p(y). However, the widely used domain-invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. p(x), which rests on the unrealistic assumption that p(y) is invariant across domains. We therefore propose a novel variational Bayesian inference framework that enforces conditional distribution alignment w.r.t. p(x|y) via prior distribution matching in a latent space, while also accounting for the marginal label shift w.r.t. p(y) through posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to label shift and significantly improves cross-domain accuracy, achieving superior performance over conventional IFL counterparts.
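The class-conditional alignment idea can be sketched numerically. In the paper this is learned end to end inside a variational framework; the stand-in below simply compares per-class latent features from two source domains by fitting diagonal Gaussians and computing a KL penalty between them (all array shapes and values are illustrative assumptions):

```python
import numpy as np

# Hypothetical sketch: penalizing mismatch of class-conditional latent
# distributions p(z|y) across two source domains via a Gaussian KL term.
def gaussian_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0
    )

rng = np.random.default_rng(2)
d = 8
# Simulated latent features for one class y in two source domains
z_dom_a = 1.0 + 0.5 * rng.standard_normal((100, d))
z_dom_b = 1.2 + 0.6 * rng.standard_normal((100, d))

penalty = gaussian_kl(
    z_dom_a.mean(0), z_dom_a.var(0),
    z_dom_b.mean(0), z_dom_b.var(0),
)
print("conditional alignment penalty:", penalty)
```

Driving such a penalty to zero for every class forces the domains to agree on p(z|y), which is the conditional alignment the marginal-only IFL objective cannot guarantee.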

