Templates vs. Stochastic Methods

Author(s):  
Benedikt Gierlichs ◽  
Kerstin Lemke-Rust ◽  
Christof Paar
Keyword(s):  

2005 ◽  
Vol 10 (1) ◽  
pp. 65-75 ◽  
Author(s):  
Z. Kala

The load-carrying capacity of a member with imperfections under axial compression is analysed in the present paper. The study is divided into two parts: (i) in the first, the input parameters are considered to be random variables (with probability distribution functions obtained from experimental results and/or tolerance standards), while (ii) in the second, the input parameters are considered to be fuzzy numbers (with membership functions). The load-carrying capacity was calculated by a geometrically nonlinear solution of a beam by means of the finite element method. In case (ii), the membership function of the load-carrying capacity was determined by applying fuzzy set theory, whereas in case (i), its probability distribution function was determined. For the stochastic solution (i), the Monte Carlo numerical simulation method was applied, whereas for the fuzzy solution (ii), the method of so-called α-cuts was applied. The design load-carrying capacity was determined according to the EC3 and EN 1990 standards. The results of the fuzzy, stochastic and deterministic analyses are compared in the concluding part of the paper.
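The abstract contrasts a Monte Carlo propagation of random imperfections with an α-cut propagation of fuzzy imperfections. Below is a minimal Python sketch of both propagation schemes; the closed-form capacity() function, the assumed normal distribution of the imperfection and the triangular membership function are illustrative placeholders, not the geometrically nonlinear finite-element model used in the paper.

```python
import numpy as np

def capacity(e0):
    """Illustrative load-carrying capacity [kN] of a compressed member as a
    decreasing function of the initial bow imperfection e0 [mm]; a stand-in
    for the geometrically nonlinear FEM solution."""
    return 1000.0 / (1.0 + 0.15 * e0)

# (i) Stochastic solution: Monte Carlo sampling of a random imperfection.
rng = np.random.default_rng(1)
e0_samples = np.abs(rng.normal(loc=2.0, scale=0.8, size=100_000))  # assumed distribution
R = capacity(e0_samples)
print("mean resistance:", R.mean(), "  0.1% fractile:", np.percentile(R, 0.1))

# (ii) Fuzzy solution: alpha-cuts of a triangular fuzzy imperfection (1, 2, 4) mm.
for alpha in np.linspace(0.0, 1.0, 5):
    lo = 1.0 + alpha * (2.0 - 1.0)   # left bound of the alpha-cut interval
    hi = 4.0 - alpha * (4.0 - 2.0)   # right bound of the alpha-cut interval
    # capacity() decreases monotonically in e0, so interval bounds map directly.
    print(f"alpha={alpha:.2f}  capacity in [{capacity(hi):.1f}, {capacity(lo):.1f}] kN")
```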


2016 ◽  
Vol 20 (3) ◽  
pp. 74-79
Author(s):  
E.A. Abidova ◽  
L.S. Hegay ◽  
A.V. Chernov ◽  
V.A. Bulava ◽  
O.Yu. Pugachyova ◽  
...  

1979 ◽  
Vol 14 (1) ◽  
pp. 89-109
Author(s):  
B. Coupal ◽  
M. de Broissia

Abstract The movement of oil slicks on open waters has been predicted using both deterministic and stochastic methods. The first method, named slick rose, consists of locating an area specifying the position of the slick during the first hours after the spill. The second method combines a deterministic approach for the simulation of the current parameters with a stochastic method simulating the wind parameters; a first-order Markov chain followed by a Monte Carlo approach enables the simulation of both phenomena. The third method presented in this paper describes a mass balance on the spilt oil, solved by the finite element method. The three methods are complementary and constitute an important component of a contingency plan.
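As an illustration of the combined deterministic/stochastic approach, the sketch below superimposes a constant current drift on wind sampled from a first-order Markov chain via Monte Carlo; the wind states, transition matrix, 3% wind-drift factor and current vector are assumed toy values, not data from the paper.

```python
import numpy as np

# Hypothetical wind states (direction [deg], speed [m/s]) and a first-order
# Markov transition matrix between them.
states = [(0.0, 5.0), (90.0, 8.0), (180.0, 4.0), (270.0, 6.0)]
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.2, 0.6, 0.1],
              [0.1, 0.1, 0.2, 0.6]])

current = np.array([0.10, 0.05])   # assumed constant current drift [m/s]
wind_factor = 0.03                 # assumed fraction of wind speed that moves the slick
dt = 3600.0                        # time step [s]
rng = np.random.default_rng(0)

def simulate_track(n_steps=24, state=0):
    """One Monte Carlo realisation of the slick-centre displacement."""
    pos = np.zeros(2)
    for _ in range(n_steps):
        state = rng.choice(len(states), p=P[state])          # Markov step
        direction, speed = states[state]
        wind = wind_factor * speed * np.array(
            [np.cos(np.radians(direction)), np.sin(np.radians(direction))])
        pos = pos + (current + wind) * dt                    # advect the slick centre
    return pos

# Many realisations outline the probable area of the slick after 24 h.
endpoints = np.array([simulate_track() for _ in range(1000)])
print("mean displacement [m]:", endpoints.mean(axis=0))
```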


1997 ◽  
Vol 36 (5) ◽  
pp. 19-26 ◽  
Author(s):  
J. L. Jacobsen ◽  
H. Madsen ◽  
P. Harremoës

The objective of the paper is to interpret data on water-level variation in a river affected by overflow from a sewer system during rain. The simplest possible hydraulic description is combined with stochastic methods for data analysis and model parameter estimation; this combination of deterministic and stochastic interpretation is called grey box modelling. As the deterministic description, the linear reservoir approximation is used: a series of linear reservoirs, in sufficient number, will approximate a plug flow reactor, and the choice of that number is an empirical expression of the longitudinal dispersion in the river. This is expected to be a sufficiently good approximation as a tool for the ultimate aim: the description of pollutant transport in the river. The grey box modelling involves a statistical tool for estimating the parameters of the deterministic model. The advantage is that the parameters have physical meaning, as opposed to many other statistically estimated, empirical parameters. The identifiability of each parameter, the uncertainty of the parameter estimation and the overall uncertainty of the simulation are determined.
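The deterministic core described here, a series of linear reservoirs approximating a plug-flow reactor, can be sketched as below; the number of reservoirs, the time constant and the rectangular inflow pulse are illustrative assumptions, whereas in the paper such parameters are estimated statistically from the water-level data.

```python
import numpy as np

def reservoir_cascade(inflow, n=3, k=1800.0, dt=60.0):
    """Route an inflow series [m3/s] through n identical linear reservoirs
    (storage S_i, outflow Q_i = S_i / k) with explicit Euler steps.
    A larger n gives less longitudinal dispersion, approaching plug flow."""
    S = np.zeros(n)                    # storages of the reservoirs
    out = np.zeros(len(inflow))
    for t, q_in in enumerate(inflow):
        q = q_in
        for i in range(n):
            S[i] += (q - S[i] / k) * dt
            q = S[i] / k               # outflow of reservoir i feeds reservoir i+1
        out[t] = q
    return out

# Rectangular sewer-overflow pulse during a rain event (illustrative).
inflow = np.zeros(600)
inflow[50:150] = 2.0
response = reservoir_cascade(inflow)
print("peak outflow:", round(response.max(), 3), "m3/s at step", int(response.argmax()))
```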


1976 ◽  
Vol 6 (4) ◽  
pp. 343-353
Author(s):  
Lorne G. Everett ◽  
Guenton C. Slawson
Keyword(s):  

Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1488
Author(s):  
Damian Trofimowicz ◽  
Tomasz P. Stefański

In this paper, novel methods for the evaluation of digital-filter stability are investigated. The methods are based on phase analysis of a complex function in the characteristic equation of a digital filter, which allows stability to be evaluated even when the characteristic equation is not based on a polynomial. The methods rely on sampling the unit circle on the complex plane and extracting the phase quadrant of the function value at each sample. By calculating the function-phase quadrants, regions in the immediate vicinity of unstable roots (i.e., zeros), called candidate regions, are determined: in these regions, both the real and imaginary parts of the complex-function values change signs. The candidate regions are then explored, and when their sizes are reduced below an assumed accuracy, filter instability is verified with the use of the discrete Cauchy argument principle. Three different unit-circle sampling algorithms are benchmarked: the global complex roots and poles finding (GRPF) algorithm, the multimodal genetic algorithm with phase analysis (MGA-WPA), and the multimodal particle swarm optimization with phase analysis (MPSO-WPA). The algorithms are compared in four benchmarks for integer- and fractional-order digital filters and systems. Each algorithm demonstrates slightly different properties: GRPF is very fast and efficient, but it requires an initial number of nodes large enough to detect all the roots; MPSO-WPA avoids missing roots thanks to the stochastic space exploration performed by subsequent swarms; MGA-WPA converges very effectively by generating a small number of individuals and by limiting the final population size. The conducted research leads to the conclusion that stochastic methods such as MGA-WPA and MPSO-WPA are more likely to detect system instability, especially when they are run multiple times. If computing time is not vitally important for the user, MPSO-WPA is the right choice, because it greatly reduces the risk of missing roots.
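A minimal sketch of the phase-quadrant idea for a polynomial characteristic equation is given below: the unit circle is sampled uniformly, the phase quadrant of the function value is extracted at each sample, quadrant jumps of two flag candidate regions (both real and imaginary parts change sign), and the accumulated quadrant changes give the discrete argument-principle count of roots inside the circle. The example polynomial and sampling density are assumptions, and uniform sampling stands in for the GRPF, MGA-WPA and MPSO-WPA algorithms benchmarked in the paper.

```python
import numpy as np

def quadrant(z):
    """Phase quadrant (0..3) of a complex value."""
    if z.imag >= 0:
        return 0 if z.real >= 0 else 1
    return 2 if z.real < 0 else 3

def zeros_inside_unit_circle(coeffs, n_samples=4096):
    """Count zeros of a polynomial (highest power first) inside the unit circle
    using the discrete argument principle: the sum of quadrant changes along
    the closed contour equals four times the winding number."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    vals = np.polyval(coeffs, np.exp(1j * theta))
    quads = np.array([quadrant(v) for v in vals])
    steps = (np.diff(np.append(quads, quads[0])) + 1) % 4 - 1   # -1, 0, +1 or 2
    if np.any(np.abs(steps) > 1):
        # Real and imaginary parts both changed sign between two samples:
        # a candidate region close to the circle; sampling must be refined there.
        print("warning: candidate region detected, refine sampling")
    return int(round(steps.sum() / 4.0))

# Illustrative characteristic polynomial z^2 - 1.5 z + 0.9: both roots lie
# inside the unit circle, so a filter with these poles is stable.
coeffs = [1.0, -1.5, 0.9]
print("stable" if zeros_inside_unit_circle(coeffs) == len(coeffs) - 1 else "unstable")
```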


2021 ◽  
Vol 15 (1) ◽  
pp. 408-433
Author(s):  
Margaux Dugardin ◽  
Werner Schindler ◽  
Sylvain Guilley

Abstract Extra-reductions occurring in Montgomery multiplications disclose side-channel information which can be exploited even in stringent contexts. In this article, we derive stochastic attacks to defeat Rivest-Shamir-Adleman (RSA) with Montgomery-ladder regular exponentiation coupled with base blinding. Namely, we leverage precharacterized multivariate probability mass functions of the extra-reductions between pairs of (multiplication, square) operations in one iteration of the RSA algorithm and those of the next one(s) to build a maximum-likelihood distinguisher. The efficiency of our attack (in terms of required traces) is more than double that of the state of the art. In addition to this result, we also apply our method to the case of regular exponentiation, base blinding, and modulus blinding. Quite surprisingly, modulus blinding does not make our attack impossible, even for large sizes of the modulus-randomizing element. At the cost of larger sample sizes, our attacks tolerate noisy measurements. Fortunately, effective countermeasures exist.
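The leakage exploited here is the conditional final subtraction (extra-reduction) of Montgomery multiplication. The sketch below shows, with toy parameters, how that one-bit flag arises in a textbook Montgomery multiplication; the precharacterized joint probability mass functions and the maximum-likelihood distinguisher of the attack itself are not reproduced.

```python
def montgomery_mul(a, b, n, r, n_prime):
    """Montgomery product a*b*r^(-1) mod n and the extra-reduction flag,
    i.e. whether the final conditional subtraction fired (the bit of
    side-channel information the attack exploits)."""
    t = a * b
    m = (t * n_prime) % r
    u = (t + m * n) // r
    return (u - n, True) if u >= n else (u, False)

# Toy parameters (illustrative, far below cryptographic sizes).
n = 101                          # odd modulus
r = 128                          # power of two with r > n
n_prime = (-pow(n, -1, r)) % r   # n * n_prime == -1 (mod r)

# Operands in the Montgomery domain: x is represented as x*r mod n.
a, b = (17 * r) % n, (23 * r) % n
prod, extra = montgomery_mul(a, b, n, r, n_prime)
print("product:", (prod * pow(r, -1, n)) % n, "== 17*23 mod 101 =", (17 * 23) % n)
print("extra-reduction occurred:", extra)
```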

