Empirical earthquake probabilities from observed recurrence intervals

1994 ◽  
Vol 84 (1) ◽  
pp. 219-221 ◽  
Author(s):  
J. C. Savage

Abstract The probability p that a given fault segment will rupture within a specified time T following the preceding rupture is evaluated empirically from a sample of observed recurrence intervals for that fault segment. All that is assumed is that the probability of rupture within the specified time interval is the same for all rupture cycles on that segment. Suppose that m of the n observed recurrence intervals correspond to cycles in which rupture occurred within the interval T following the preceding earthquake. The probability density that rupture in the current cycle will also fall within the interval T following the most recent earthquake is then given by the beta distribution P(p|m, n) = {(n + 1)!/[m!(n − m)!]} p^m (1 − p)^(n−m). The best estimate of the desired probability p is 〈p〉 = (m + 1)/(n + 2), and a measure of the breadth of the distribution is the standard deviation σ = [〈p〉(1 − 〈p〉)/(n + 3)]^(1/2). Because it is unlikely that the number n of observed recurrence intervals will be much greater than 10, the probability generally will not be defined more closely than ±0.2. Moreover, increasing n decreases the uncertainty only very slowly.
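The estimator in this abstract is simple enough to sketch directly (a minimal illustration; the sample counts in the example are hypothetical, not from the paper):

```python
import math

def rupture_probability(m, n):
    """Savage's empirical rupture probability: m of n observed recurrence
    intervals fell within time T of the preceding earthquake.

    Returns the best estimate <p> = (m + 1)/(n + 2) and the standard
    deviation sigma = sqrt(<p> (1 - <p>) / (n + 3)) of the beta
    distribution P(p|m, n)."""
    p_hat = (m + 1) / (n + 2)
    sigma = math.sqrt(p_hat * (1 - p_hat) / (n + 3))
    return p_hat, sigma

# With a typical paleoseismic sample of n = 10 intervals, 4 of which
# fell within T, the estimate is 5/12 with an uncertainty near 0.14,
# consistent with the +-0.2 limit quoted for small n:
p, s = rupture_probability(4, 10)
print(round(p, 3), round(s, 3))
```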

2020 ◽  
Vol 206 ◽  
pp. 01011
Author(s):  
Li Hong

In this paper, we take the junction of the Shanxi–Hebei–Inner Mongolia area as the study region and use the earthquake corresponding relevancy spectrum (ECRS) method to identify comprehensive precursory anomalies before moderate-to-strong earthquakes. Based on a single-parameter relevancy spectrum database with a target earthquake magnitude of Ms 4.7 and an initial earthquake magnitude of Ms 1, we carry out a multi-parameter analysis and find that the result with a time interval of 9 months and an anomaly threshold of 0.40 times the standard deviation has the best prediction efficiency. Its anomaly corresponding rate and earthquake corresponding rate are 6/10 and 9/9, respectively.
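[Placeholder removed — no insert emitted here.]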


Author(s):  
Frederic A. Holland

The beta distribution is a particularly convenient model for random variables when only the minimum, maximum and most likely values are available. It is also very useful for estimating the mean and standard deviation given this information. In this paper a simple method is proposed to estimate the beta parameters from these three values. The proposed method has advantages over the conventional approach. In the conventional approach, the four parameters of the beta distribution are determined from only three values by assuming a standard deviation that is one-sixth the range. In contrast, the new method assumes a value for one of the beta shape parameters based on an analogy with the normal distribution. This new approach allows for a very simple algebraic solution of the beta shape parameters in contrast to the simultaneous solution required by the conventional method. The results of the proposed method are very similar to the conventional method. However, the proposed method generally gives a slightly higher (more conservative) estimate of the standard deviation when the distribution is skewed. In addition, the new approach allows the standard deviation to vary as the shape or skew of the distribution varies. Both methods were applied to modeling the probability distribution of temperature.
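The conventional moment-matching fit that this abstract contrasts against can be sketched briefly (a minimal illustration assuming the common PERT mean formula; the paper's proposed method is not reproduced here):

```python
def beta_from_three_points(a, mode, b):
    """Sketch of the conventional beta fit from the minimum a, the most
    likely value `mode`, and the maximum b.  Assumes the usual PERT mean
    (a + 4*mode + b)/6 and a standard deviation of one-sixth the range,
    then matches the first two beta moments on [a, b]."""
    mean = (a + 4 * mode + b) / 6.0
    sd = (b - a) / 6.0
    # Standardize to the unit interval before moment matching.
    x = (mean - a) / (b - a)
    v = (sd / (b - a)) ** 2
    common = x * (1 - x) / v - 1
    alpha, beta = x * common, (1 - x) * common
    return mean, sd, alpha, beta

# Symmetric case: a mode midway between the bounds gives alpha == beta.
print(beta_from_three_points(0.0, 0.5, 1.0))
```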


2020 ◽  
Vol 27 (2) ◽  
pp. 8-15
Author(s):  
J.A. Oyewole ◽  
F.O. Aweda ◽  
D. Oni

There is a crucial need in Nigeria to enhance the development of wind technology in order to boost our energy supply. Adequate knowledge of the wind speed distribution is essential in the establishment of Wind Energy Conversion Systems (WECS). The Weibull Probability Density Function (PDF) with two parameters is widely accepted and commonly used for modelling, characterizing and predicting wind resource and wind power, as well as for assessing the optimum performance of WECS. It is therefore paramount to estimate the scale and shape parameters precisely for all regions or sites of interest. Here, wind data from 2000 to 2010 for four different locations (Port Harcourt, Ikeja, Kano and Jos) were analysed and the Weibull parameters were determined using three methods: the Mean Standard Deviation Method (MSDM), the Energy Pattern Factor Method (EPFM) and the Method of Moments (MOM). The MSDM gave the most accurate estimation of the wind speed, while the EPFM was the most reliable and consistent method for estimating the probability density function of the wind. Keywords: Weibull Distribution, Method of Moment, Mean Standard Deviation Method, Energy Pattern Method
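Two of the estimation methods named above can be sketched in a few lines, using the empirical formulas commonly associated with them (a minimal illustration; the sample speeds are hypothetical, not the study's data):

```python
import math

def weibull_msdm(speeds):
    """Mean Standard Deviation Method (empirical/Justus form): shape k
    from the coefficient of variation, scale c from the mean and the
    gamma function, via k = (sd/mean)**-1.086, c = mean/Gamma(1+1/k)."""
    n = len(speeds)
    mean = sum(speeds) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in speeds) / (n - 1))
    k = (sd / mean) ** -1.086
    c = mean / math.gamma(1 + 1 / k)
    return k, c

def weibull_epfm(speeds):
    """Energy Pattern Factor Method: Epf is the mean cubed speed over
    the cube of the mean speed; k = 1 + 3.69 / Epf**2."""
    n = len(speeds)
    mean = sum(speeds) / n
    epf = (sum(v ** 3 for v in speeds) / n) / mean ** 3
    k = 1 + 3.69 / epf ** 2
    c = mean / math.gamma(1 + 1 / k)
    return k, c

# Example with a short hypothetical speed sample (m/s):
print(weibull_msdm([3, 4, 5, 6, 7, 8]))
print(weibull_epfm([3, 4, 5, 6, 7, 8]))
```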


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 2000
Author(s):  
Domingo Benítez ◽  
Gustavo Montero ◽  
Eduardo Rodríguez ◽  
David Greiner ◽  
Albert Oliver ◽  
...  

A novel phenomenological epidemic model is proposed to characterize the state of infectious diseases and predict their behaviors. This model is given by a new stochastic partial differential equation that is derived from foundations of statistical physics. The analytical solution of this equation describes the spatio-temporal evolution of a Gaussian probability density function. Our proposal can be applied to several epidemic variables such as infected, deaths, or admitted-to-the-Intensive-Care-Unit (ICU) counts. To measure model performance, we quantify the error of the model fit to real time-series datasets and generate forecasts for all the phases of the COVID-19, Ebola, and Zika epidemics. All parameters and model uncertainties are numerically quantified. The new model is compared with other phenomenological models such as the Logistic Growth, Original Richards, and Generalized Richards growth models. When the models are used to describe epidemic trajectories that register infected individuals, this comparison shows that the median RMSE and the standard deviation of the residuals of the new model fit are lower than those of the best of these growth models by, on average, 19.6% and 35.7%, respectively. In three forecasting experiments for the COVID-19 outbreak, our model improves the median RMSE and the standard deviation of the residuals by, on average, 31.0% and 27.9%, respectively, relative to the best-performing growth model.
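The growth-model baseline and the RMSE comparison described above can be illustrated with a minimal sketch (hypothetical parameter values and a synthetic case-count series, not data from the paper):

```python
import math

def logistic_growth(t, K, r, t0):
    """Logistic growth curve used as a phenomenological baseline:
    cumulative count K / (1 + exp(-r * (t - t0)))."""
    return K / (1 + math.exp(-r * (t - t0)))

def rmse(observed, predicted):
    """Root-mean-square error of a model fit to a time series."""
    return math.sqrt(
        sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    )

# Score one candidate parameter set against a synthetic series of
# cumulative infected counts (illustration only):
series = [12, 30, 68, 130, 196, 240, 262]
fit = [logistic_growth(t, K=275, r=0.9, t0=3.2) for t in range(len(series))]
print(round(rmse(series, fit), 1))
```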


The construction, operation, and testing of the standard are described. The resonance employed is that due to the hyperfine splitting of caesium, having a frequency of approximately 9192 Mc/s. The transitions between the two atomic states (F, m_F) = (4, 0) and (3, 0) are detected in an atomic-beam chamber, in which the length of the transition region is 47 cm, giving a width of resonance, at half deflexion, of 350 cycles, and a standard deviation of setting to the peak of the resonance of ±1 c/s. It is shown that the geometrical parameters of the beam chamber such as slit widths, alinement of the beam, and shape of the pole-pieces of the deflecting magnets are relatively unimportant, and that other parameters, including the pressure in the beam chamber, the temperature of the oven from which the caesium atoms are evaporated, and the radio-frequency power exciting the transitions, can be varied throughout wide limits without causing changes in resonant frequency exceeding 1 part in 10^10. A unidirectional magnetic field is applied over the transition region to remove the field-dependent resonant lines of the Zeeman pattern from the central line, which depends on the field to only a second-order extent. It has been found that a satisfactory resonance is obtained with a field as low as 0.05 Oe, at which the total effect of the field on the frequency is only 1 c/s. The dependence of the frequency on the phase conditions in the two-cavity resonators carrying the exciting field is studied, and it is concluded that the phases can be made sufficiently close to enable the frequency to be defined with a precision of ±1 part in 10^10. The resonator is used as a passive instrument to calibrate the quartz clocks, usually at intervals of a few days; and it is estimated that the clocks calibrated in this way provide at all times the atomic unit of frequency and time interval with a standard deviation of ±2 parts in 10^10.
The quartz clocks are also calibrated in terms of astronomical time and the results are compared for the period from June 1955 to June 1956. For operational purposes the frequency of the resonance was taken as 9 192 631 830 c/s which was the value obtained in terms of the unit of uniform astronomical time made available by the Royal Greenwich Observatory in June 1955. The value is being determined in terms of the second of ephemeris time, which has now been adopted by the International Committee of Weights and Measures as the unit of time, but to obtain the accuracy required the comparison must be extended over a long interval in view of the difficulties associated with the astronomical measurements.
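The roughly 1 c/s field shift quoted above can be checked against the quadratic Zeeman coefficient for the caesium clock transition, approximately 427 Hz per gauss squared (a modern value assumed here, not stated in the text; 1 Oe corresponds to 1 G in vacuum):

```python
# Assumed modern value of the second-order Zeeman coefficient for the
# caesium (F, m_F) = (4, 0) -> (3, 0) transition, in Hz per gauss^2.
ZEEMAN_COEFF_HZ_PER_G2 = 427.45

def second_order_zeeman_shift(field_oersted):
    """Second-order Zeeman shift of the caesium clock line, in c/s (Hz),
    for a given applied field in oersteds."""
    return ZEEMAN_COEFF_HZ_PER_G2 * field_oersted ** 2

# At the 0.05 Oe operating field the shift is close to 1 c/s,
# consistent with the figure quoted in the abstract:
print(round(second_order_zeeman_shift(0.05), 2))
```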


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 769 ◽  
Author(s):  
Weixing Dai ◽  
Dianjing Guo

Analysis of high-dimensional data is a challenge in machine learning and data mining. Feature selection plays an important role in dealing with high-dimensional data, both for improving predictive accuracy and for better interpretation of the data. Frequently used evaluation functions for feature selection include resampling methods such as cross-validation, which show an advantage in predictive accuracy. However, these conventional methods are not only computationally expensive but also tend to be over-optimistic. We propose a novel cross-entropy based on the beta distribution for feature selection. In beta distribution-based cross-entropy (BetaDCE) for feature selection, the probability density is estimated by the beta distribution and the cross-entropy is computed as the expected value of the beta distribution, so that the generalization ability can be estimated more precisely than with conventional methods, where the probability density is learnt from data. Analysis of the generalization ability of BetaDCE revealed that it is a trade-off between bias and variance. The robustness of BetaDCE was demonstrated by experiments on three types of data. In the exclusive-or-like (XOR-like) dataset, the false discovery rate of BetaDCE was significantly smaller than that of other methods. For the leukemia dataset, the area under the curve (AUC) of BetaDCE on the test set was 0.93 with only four selected features, which indicated that BetaDCE not only detected the irrelevant and redundant features precisely, but also predicted the class labels more accurately with a smaller number of features than the original method, whose AUC was 0.83 with 50 features. In the metabonomic dataset, the overall AUC of prediction with features selected by BetaDCE was significantly larger than that obtained by the originally reported method. Therefore, BetaDCE can be used as a general and efficient framework for feature selection.
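The core idea of estimating probabilities with a beta distribution rather than raw empirical frequencies can be illustrated schematically (a hypothetical sketch of the smoothing idea only, not the authors' BetaDCE implementation):

```python
import math

def beta_smoothed_cross_entropy(successes, n, alpha=1.0, beta=1.0):
    """Schematic illustration: take the class probability as the
    posterior mean of a beta distribution, (successes + alpha) /
    (n + alpha + beta), which smooths small-sample estimates, then
    compute the binary cross-entropy at that probability.
    (Hypothetical sketch; not the paper's BetaDCE method.)"""
    p = (successes + alpha) / (n + alpha + beta)
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# With few samples the smoothed estimate avoids the zero-probability
# pathology of the raw frequency (log(0) for 0 of 5 successes):
print(round(beta_smoothed_cross_entropy(0, 5), 3))
```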


Metrologiya ◽  
2020 ◽  
pp. 15-27
Author(s):  
Aleksandr V. Lapko ◽  
Vasiliy A. Lapko

When substantiating the method of fast selection of the bandwidth of kernel probability density estimates, a constant was found that is a functional of the second density derivative. In this paper, the obtained result is generalized to derivatives of symmetric probability densities of different orders. The functional dependences of the constants under study on the antikurtosis coefficient of a random variable are established, and the regularities peculiar to them are investigated. Based on the results obtained, a method for estimating functionals of the density derivatives has been developed, which involves the following actions. From the original sample, the standard deviation of the one-dimensional random variable and the antikurtosis coefficient are estimated. Using the reconstructed functional dependences on the antikurtosis coefficient, the constants that are functionals of the probability density derivatives are estimated. With known estimates of the standard deviation of the investigated random variable and the considered constant, the value of the functional of the probability density derivative of the selected order is calculated. The obtained results are confirmed by analysis of computational experiments. It is established that the estimates of the studied functionals increase with the order of the derivative; this is explained by the complication of the integrand in the considered functionals. The proposed method provides objective results for the first three derivatives of the probability density of a random variable. These conclusions are confirmed by the results of confidence estimation of the investigated functionals.
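The growth of these functionals with derivative order can be seen in the normal-reference case, where they have a closed form (an illustrative assumption; the paper's antikurtosis-based constants are not reproduced here):

```python
import math

def normal_reference_functional(r, sigma):
    """Functional ||f^(r)||^2 = integral of (f^(r)(x))^2 dx for the r-th
    derivative of a normal density with standard deviation sigma (the
    normal reference commonly used in fast bandwidth-selection rules):
        (2r)! / (2**(2r+1) * r! * sqrt(pi) * sigma**(2r+1)).
    The value grows quickly with r, matching the behaviour reported
    above for increasing derivative order."""
    return math.factorial(2 * r) / (
        2 ** (2 * r + 1)
        * math.factorial(r)
        * math.sqrt(math.pi)
        * sigma ** (2 * r + 1)
    )

# For sigma = 1 the first three derivative functionals are
# 1/(4*sqrt(pi)), 3/(8*sqrt(pi)) and 15/(16*sqrt(pi)):
print([round(normal_reference_functional(r, 1.0), 4) for r in (1, 2, 3)])
```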

