A large deviation inequality for β-mixing time series and its applications to the functional kernel regression model

2018 ◽  
Vol 133 ◽  
pp. 50-58 ◽  
Author(s):  
Johannes T.N. Krebs


1998 ◽  
Vol 7 (1) ◽  
pp. 57-63 ◽  
Author(s):  
D. A. GRABLE

Often when analysing randomized algorithms, especially parallel or distributed algorithms, one is called upon to show that some function of many independent choices is tightly concentrated about its expected value. For example, the algorithm might colour the vertices of a given graph with two colours and one would wish to show that, with high probability, very nearly half of all edges are monochromatic. The classic result of Chernoff [3] gives such a large deviation result when the function is a sum of independent indicator random variables. The results of Hoeffding [5] and Azuma [2] give similar results for functions which can be expressed as martingales with a bounded difference property. Roughly speaking, this means that each individual choice has a bounded effect on the value of the function. McDiarmid [9] nicely summarized these results and gave a host of applications. Expressed a little differently, his main result is as follows.
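The concentration phenomenon described above is easy to observe numerically. The sketch below (illustrative parameters, not from the paper) draws a sum of n independent indicator variables many times and checks that the empirical frequency of large deviations sits below the Chernoff–Hoeffding tail bound Pr[|S − np| ≥ t] ≤ 2·exp(−2t²/n):

```python
import math
import random

# Monte Carlo sketch (hypothetical parameters): a sum S of n independent
# indicator variables concentrates around its mean np, as quantified by
# the Chernoff-Hoeffding bound  Pr[|S - np| >= t] <= 2 exp(-2 t^2 / n).
random.seed(0)
n, p, t, trials = 1000, 0.5, 50, 2000

def deviation_frequency(n, p, t, trials):
    """Empirical frequency of deviations |S - np| >= t over many trials."""
    count = 0
    for _ in range(trials):
        s = sum(random.random() < p for _ in range(n))  # S = sum of indicators
        if abs(s - n * p) >= t:
            count += 1
    return count / trials

empirical = deviation_frequency(n, p, t, trials)
bound = 2 * math.exp(-2 * t ** 2 / n)
print(empirical, "<=", bound)  # the bound dominates the observed tail
```

Here t = 50 is about 3.2 standard deviations, so the empirical tail is far below the analytic bound of roughly 0.0135.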


2011 ◽  
Vol 48 (1) ◽  
pp. 154-172 ◽  
Author(s):  
Chang-Long Yao ◽  
Ge Chen ◽  
Tian-De Guo

Denote the Palm measure of a homogeneous Poisson process H_λ with two points 0 and x by P_{0,x}. We prove that there exists a constant μ ≥ 1 such that P_{0,x}(D(0, x) / (μ ||x||_2) ∉ (1 − ε, 1 + ε) | 0, x ∈ C_∞) decreases exponentially as ||x||_2 tends to ∞, where D(0, x) is the graph distance between 0 and x in the infinite component C_∞ of the random geometric graph G(H_λ; 1). We derive a large deviation inequality for an asymptotic shape result. Our results have applications in many fields, especially in wireless sensor networks.
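The quantity D(0, x) can be illustrated with a finite toy simulation (it does not model the Palm conditioning or the infinite cluster): sample points in a box, connect pairs within unit distance, and compute the hop distance between two planted points by BFS. Since each edge has length at most 1, D(0, x) is always at least the Euclidean distance ||x||_2 when the two points are connected:

```python
import math
import random
from collections import deque

# Toy sketch (assumed parameters): graph distance D(u, v) in a random
# geometric graph G(P; 1) on points in a finite box.  The point count is a
# fixed stand-in for Poisson(intensity * side^2); the Palm measure and the
# infinite component from the abstract are not modelled here.
random.seed(1)
intensity, side = 4.0, 15.0
n_points = int(intensity * side * side)  # 900 points
pts = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n_points)]
pts.append((0.5, 0.5))    # stand-in for the Palm point "0"
pts.append((14.5, 14.5))  # stand-in for the Palm point "x"

def graph_distance(pts, radius, src, dst):
    """BFS hop count between indices src and dst; None if disconnected."""
    n = len(pts)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

d = graph_distance(pts, 1.0, len(pts) - 2, len(pts) - 1)
euclid = math.dist((0.5, 0.5), (14.5, 14.5))
print(d, euclid)  # when connected, d >= euclid because each hop covers <= 1
```

The theorem says this ratio D(0, x)/||x||_2 concentrates around a constant μ, with exponentially small probability of large deviations.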


1996 ◽  
Vol 3 (11) ◽  
Author(s):  
Devdatt P. Dubhashi ◽  
David A. Grable ◽  
Alessandro Panconesi

We give a distributed randomized algorithm to edge colour a network. Let G be a graph with n nodes and maximum degree Δ. Here we prove: if Δ = Ω(log^(1+δ) n) for some δ > 0 and λ > 0 is fixed, the algorithm almost always colours G with (1 + λ)Δ colours in time O(log n); if s > 0 is fixed, there exists a positive constant k such that if Δ = ω(log^k n), the algorithm almost always colours G with Δ + Δ / log^s n = (1 + o(1))Δ colours in time O(log n + log^s n log log n). By "almost always" we mean that the algorithm may fail, but the failure probability can be made arbitrarily close to 0. The algorithm is based on the nibble method, a probabilistic strategy introduced by Vojtěch Rödl. The analysis makes use of a powerful large deviation inequality for functions of independent random variables.
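To convey the flavour of distributed edge colouring with a slightly oversized palette, here is a minimal round-based random-proposal sketch. It is not the authors' nibble algorithm (and carries none of its analysis): each uncoloured edge proposes a random colour from a palette of (1 + λ)Δ colours, and proposals clashing with an adjacent edge are retried in the next round.

```python
import random

# Minimal sketch (not the nibble method): round-based random proposals for
# edge colouring with a palette of (1 + lam) * Delta colours.  An edge keeps
# its proposed colour only if no adjacent edge is already coloured with it
# or proposing it in the same round.  Parameters below are illustrative.
random.seed(2)

def random_edge_colouring(edges, palette_size, rounds=200):
    colour = {}
    uncoloured = set(edges)
    for _ in range(rounds):
        if not uncoloured:
            break
        proposal = {e: random.randrange(palette_size) for e in uncoloured}
        for e in list(uncoloured):
            u, v = e
            c = proposal[e]
            clash = any(
                f != e and (u in f or v in f) and
                (colour.get(f) == c or proposal.get(f) == c)
                for f in edges
            )
            if not clash:
                colour[e] = c
                uncoloured.discard(e)
    return colour, uncoloured

# Complete graph K6: Delta = 5, palette of (1 + 0.6) * 5 = 8 colours.
nodes = range(6)
edges = [(u, v) for u in nodes for v in nodes if u < v]
colour, left = random_edge_colouring(edges, palette_size=8)
print(len(left))  # edges still uncoloured after the round budget
```

Because an edge is only coloured when no neighbour holds or proposes the same colour, the partial colouring is proper at every step; the slack in the palette is what lets the process finish quickly with high probability.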


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jinxin Zhu ◽  
Jun Shao

This paper investigates a joint robust scheme in a secrecy relay network where distributed relays perform cooperative beamforming and a friendly jammer transmits a jamming signal to enhance information security. Specifically, we consider the outage-constrained secrecy rate maximization design with imperfect channel state information. Through semidefinite relaxation and a one-dimensional search, we propose a two-layer optimization method to solve the nonconvex problem. In addition, the Bernstein-type inequality and a large deviation inequality are utilized to convert the probabilistic constraint into a deterministic form. Simulation results demonstrate the performance of the proposed design.
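The idea behind converting an outage (chance) constraint via a Bernstein-type bound can be sketched numerically, with a simple real-valued stand-in rather than the paper's complex channel-error model: for a sum S of n i.i.d. bounded zero-mean errors, Bernstein's inequality Pr{S ≥ t} ≤ exp(−t² / (2nσ² + 2Mt/3)) gives a deterministic threshold t that is sufficient for Pr{S ≥ t} ≤ ρ.

```python
import math
import random

# Numerical sketch (illustrative parameters, not the paper's system model):
# replace the chance constraint Pr{S >= t} <= rho by the deterministic
# sufficient condition  exp(-t^2 / (2 n sigma^2 + 2 M t / 3)) <= rho,
# where |X_i| <= M and Var(X_i) = sigma^2 (Bernstein's inequality).
random.seed(3)
n, M, rho, trials = 200, 1.0, 0.05, 4000
sigma2 = 1.0 / 3.0  # variance of Uniform(-1, 1)

def bernstein_threshold(n, sigma2, M, rho):
    """Smallest t (by bisection) making the Bernstein bound <= rho."""
    lo, hi = 0.0, 10.0 * math.sqrt(n)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.exp(-mid ** 2 / (2 * n * sigma2 + 2 * M * mid / 3)) <= rho:
            hi = mid  # bound already satisfied; tighten from above
        else:
            lo = mid
    return hi

t = bernstein_threshold(n, sigma2, M, rho)
exceed = sum(
    sum(random.uniform(-1, 1) for _ in range(n)) >= t
    for _ in range(trials)
) / trials
print(exceed, "<=", rho)  # the deterministic surrogate is conservative
```

As in the robust beamforming literature, the surrogate is conservative: the empirical outage probability at the Bernstein threshold lands well below the target ρ.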


2021 ◽  
Vol 11 (14) ◽  
pp. 6594
Author(s):  
Yu-Chia Hsu

The interdisciplinary nature of sports and the presence of various systemic and non-systemic factors introduce challenges in predicting sports match outcomes using a single disciplinary approach. In contrast to previous studies that use sports performance metrics and statistical models, this study is the first to apply a deep learning approach from financial time series modeling to predict sports match outcomes. The proposed approach has two main components: a convolutional neural network (CNN) classifier for implicit pattern recognition and a logistic regression model for match outcome judgment. First, the raw data used in the prediction are derived from the betting market odds and actual scores of each game, which are transformed into sports candlesticks. Second, the CNN is used to classify the candlestick time series on a graphical basis. To this end, the original 1D time series are encoded into 2D matrix images using the Gramian angular field and are then fed into the CNN classifier. In this way, the winning probability of each matchup team can be derived based on historically implied behavioral patterns. Third, to further consider the differences between strong and weak teams, the CNN classifier adjusts the probability of winning the match by using the logistic regression model and then makes a final judgment regarding the match outcome. We empirically test this approach using data on 18,944 National Football League games spanning 32 years and find that using the individual historical data of each team in the CNN classifier for pattern recognition is better than using the data of all teams. The CNN in conjunction with the logistic regression judgment model outperforms the CNN in conjunction with SVM, Naïve Bayes, Adaboost, J48, and random forest, and its accuracy surpasses that of betting market prediction.
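The Gramian angular field encoding mentioned above has a closed form. In one common variant (the summation field, GASF; the abstract does not specify which variant the paper uses), the series is rescaled to [−1, 1], each value is mapped to an angle φ_i = arccos(x_i), and image entry (i, j) is cos(φ_i + φ_j) = x_i·x_j − √(1 − x_i²)·√(1 − x_j²). A minimal sketch on arbitrary illustrative data:

```python
import math

# Sketch of the Gramian angular summation field (GASF) encoding: a 1-D
# series is rescaled to [-1, 1] and turned into a 2-D matrix image with
# entry (i, j) = cos(phi_i + phi_j) = x_i x_j - sqrt(1-x_i^2) sqrt(1-x_j^2).
# The input below is arbitrary illustrative data, not betting-market odds.

def gasf(series):
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]  # rescale to [-1, 1]
    comp = [math.sqrt(max(0.0, 1 - v * v)) for v in x]  # sin(arccos(x_i))
    return [[x[i] * x[j] - comp[i] * comp[j] for j in range(len(x))]
            for i in range(len(x))]

img = gasf([3.0, 1.0, 4.0, 1.5, 5.0])
print(len(img), len(img[0]))  # a 5x5 matrix image, suitable as CNN input
```

The resulting matrix is symmetric, and its diagonal encodes each rescaled value as cos(2φ_i), which is why the 2-D image preserves the temporal structure the CNN classifies.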


Author(s):  
Yumei Liu ◽  
Ningguo Qiao ◽  
Congcong Zhao ◽  
Jiaojiao Zhuang ◽  
Guangdong Tian

Accurate vibration time series modeling can mine the internal law of data and provide valuable references for reliability assessment. To improve the prediction accuracy, this study proposes a hybrid model, called the AR–SVR–CPSO hybrid model, that combines the autoregressive (AR) and support vector regression (SVR) models, with the weights optimized by the chaotic particle swarm optimization (CPSO) algorithm. First, the autoregressive model with the difference method is employed to model the vibration time series. Second, the support vector regression model with phase space reconstruction is constructed for predicting the vibration time series once more. Finally, the predictions of the AR and SVR models are weighted and summed together, with the weights being optimized by the CPSO. In addition, the data collected from the reliability test platform of high-speed train transmission systems and the "NASA prognostics data repository" are used to validate the hybrid model. The experimental results demonstrate that the hybrid model proposed in this study outperforms the traditional AR and SVR models.
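The final combination step described above can be sketched as follows. This is not the authors' CPSO (whose details the abstract does not give); it is a minimal particle swarm over the single blending weight w, with a logistic-map ("chaotic") initialisation, applied to synthetic stand-ins for the AR and SVR prediction vectors:

```python
import random

# Minimal sketch: blend two base-model predictions as w*ar + (1-w)*svr and
# pick w by a small particle swarm with logistic-map initialisation.  The
# prediction vectors and targets are synthetic illustrative data.
random.seed(4)

def chaotic_init(n, x0=0.7):
    """Logistic map x -> 4x(1-x): a simple 'chaotic' spread over (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def mse(w, ar, svr, target):
    return sum((w * a + (1 - w) * s - t) ** 2
               for a, s, t in zip(ar, svr, target)) / len(target)

def pso_weight(ar, svr, target, n_particles=10, iters=60):
    pos = chaotic_init(n_particles)
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=lambda w: mse(w, ar, svr, target))
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.6 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))  # clamp w to [0, 1]
            if mse(pos[i], ar, svr, target) < mse(pbest[i], ar, svr, target):
                pbest[i] = pos[i]
        gbest = min(pbest, key=lambda w: mse(w, ar, svr, target))
    return gbest

target = [1.0, 2.0, 3.0, 4.0, 5.0]
ar_pred = [1.2, 2.1, 3.3, 3.8, 5.2]   # synthetic stand-in for AR output
svr_pred = [0.9, 1.8, 2.9, 4.1, 4.9]  # synthetic stand-in for SVR output
w = pso_weight(ar_pred, svr_pred, target)
print(round(w, 3))
```

The blended predictor should beat either base model alone on the fitting data, which is the rationale for the weighted hybrid in the abstract.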

