Earthquake forecast enrichment scores

2012 ◽  
Vol 2 (1) ◽  
pp. 2 ◽  
Author(s):  
Christine Smyth ◽  
Masumi Yamada ◽  
Jim Mori

The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project aimed at testing earthquake forecast models in a fair environment. Various metrics are currently used to evaluate the submitted forecasts. However, CSEP still lacks easily understandable metrics with which to rank the overall performance of the forecast models. In this research, we modify a well-known and respected metric from another statistical field, bioinformatics, to make it suitable for evaluating earthquake forecasts, such as those submitted to the CSEP initiative. The metric, originally called a gene-set enrichment score, is based on a Kolmogorov-Smirnov statistic. Our modified metric assesses whether, over a certain time period, the forecast values at locations where earthquakes occurred are significantly elevated compared to the values at locations where earthquakes did not occur. Permutation testing allows a significance value to be placed on the score. Unlike the metrics currently employed by CSEP, the score makes no assumption about the distribution of earthquake occurrence, nor does it require an arbitrary reference forecast. We apply the modified metric to simulated data and real forecast data to show that it is a powerful and robust technique, capable of ranking competing earthquake forecasts.
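For illustration, a minimal Python sketch of a KS-style enrichment score with permutation testing, as the abstract describes; the array names and the uniform up/down step weighting are assumptions for this sketch, not the authors' exact formulation.

```python
import numpy as np

def enrichment_score(forecast, occurred):
    """KS-style enrichment score: is the forecast at cells where earthquakes
    occurred shifted toward high values relative to the remaining cells?
    forecast: 1-D array of forecast rates per spatial cell;
    occurred: boolean array of the same length (True = earthquake occurred)."""
    order = np.argsort(forecast)[::-1]            # rank cells, highest forecast first
    hits = occurred[order].astype(float)
    up = hits / hits.sum()                        # step up at earthquake cells
    down = (1.0 - hits) / (1.0 - hits).sum()      # step down at the other cells
    running = np.cumsum(up - down)
    return running[np.argmax(np.abs(running))]    # signed maximum deviation

def permutation_pvalue(forecast, occurred, n_perm=9999, seed=0):
    """Attach a significance value by permuting which cells are labeled as
    earthquake cells, keeping their number fixed."""
    rng = np.random.default_rng(seed)
    observed = enrichment_score(forecast, occurred)
    null = np.array([enrichment_score(forecast, rng.permutation(occurred))
                     for _ in range(n_perm)])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

A positive score with a small permutation p-value indicates that high-forecast cells are enriched in observed earthquakes; no reference forecast or occurrence distribution is assumed.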

2021 ◽  
Author(s):  
Jose A. Bayona ◽  
William Savran ◽  
Maximilian Werner ◽  
David A. Rhoades

Developing testable seismicity models is essential for robust seismic hazard assessments and for quantifying the predictive skills of posited hypotheses about seismogenesis. On this premise, the Regional Earthquake Likelihood Models (RELM) group designed a joint forecasting experiment, with associated models, data, and tests, to evaluate earthquake predictability in California over a five-year period. Participating RELM forecast models were based on a range of geophysical datasets, including earthquake catalogs, interseismic strain rates, and geologic fault slip rates. After five years of prospective evaluation, the RELM experiment found that the smoothed seismicity (HKJ) model by Helmstetter et al. (2007) was the most informative. The diversity of competing forecast hypotheses in RELM was suitable for combining multiple models that could provide more informative earthquake forecasts than HKJ. Thus, Rhoades et al. (2014) created multiplicative hybrid models that involve the HKJ model as a baseline and one or more conjugate models. In particular, the authors fitted two parameters for each conjugate model and an overall normalizing constant to optimize each hybrid model. Information gain scores per earthquake were then computed using a corrected Akaike Information Criterion that penalized for the number of fitted parameters. According to retrospective analyses, some hybrid models showed significant information gains over the HKJ forecast, despite the penalty. Here, we assess in a prospective setting the predictive skills of 16 hybrid and 6 original RELM forecasts, using a suite of tests of the Collaboratory for the Study of Earthquake Predictability (CSEP). The evaluation dataset contains 40 M ≥ 4.95 events recorded within the California CSEP testing region from 1 January 2011 to 31 December 2020, including the 2016 Mw 5.6, 5.6, and 5.5 Hawthorne earthquake swarm, and the Mw 6.4 foreshock and Mw 7.1 mainshock of the 2019 Ridgecrest sequence. We evaluate the consistency between the observed and expected number, spatial, likelihood, and magnitude distributions of earthquakes, and compare the performance of each forecast to that of HKJ. Our prospective test results show that none of the hybrid models is significantly more informative than the HKJ baseline forecast. These results are mainly due to the occurrence of the 2016 Hawthorne earthquake cluster and of four events from the 2019 Ridgecrest sequence in two forecast bins. These clusters of seismicity are exceptionally unlikely under all models and are insufficiently captured by the Poisson distribution that the likelihood functions of the tests assume. Therefore, we are currently examining alternative likelihood functions that reduce the sensitivity of the evaluations to clustering, and that could be used to better understand whether the discrepancies between prospective and retrospective test results for multiplicative hybrid forecasts are due to limitations of the tests or of the methods used to create the hybrid models.
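The abstract describes two computations that lend themselves to a short sketch: a multiplicative hybrid rate (baseline times conjugate models, two fitted parameters per conjugate, plus an overall normalizing constant) and a Poisson information gain per earthquake. The Python sketch below is an assumed parameterization of that description, not the exact functional form of Rhoades et al. (2014).

```python
import numpy as np

def hybrid_rates(baseline, conjugates, floors, exponents):
    """Assumed multiplicative form: baseline cell rates modulated by each
    conjugate model through a floor s_j and an exponent b_j (the two fitted
    parameters per conjugate mentioned in the abstract), then renormalized
    so the total expected number of events is preserved (the overall
    normalizing constant)."""
    rates = baseline.copy()
    for f, s, b in zip(conjugates, floors, exponents):
        rates = rates * np.maximum(s, f) ** b
    return rates * baseline.sum() / rates.sum()

def information_gain_per_eq(rates_a, rates_b, counts):
    """Poisson joint log-likelihood difference per observed earthquake over
    the forecast bins (the log(n!) terms cancel in the difference)."""
    ll_a = np.sum(counts * np.log(rates_a) - rates_a)
    ll_b = np.sum(counts * np.log(rates_b) - rates_b)
    return (ll_a - ll_b) / counts.sum()
```

The Poisson assumption in the likelihood term is exactly the sensitivity to clustered events (Hawthorne, Ridgecrest) that the abstract identifies.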


2018 ◽  
Vol 8 (9) ◽  
pp. 1674
Author(s):  
Wengang Chen ◽  
Wenzheng Xiu ◽  
Jin Shen ◽  
Wenwen Zhang ◽  
Min Xu ◽  
...  

By weighting the autocorrelation function data of each delay-time period differently, information-weighted constrained regularization inversion markedly improves the information utilization of dynamic light scattering; however, the denoising ability and the peak resolution of the information-weighted inversion algorithm under noisy conditions are still insufficient. Building on information weighting, we added a penalty term imposing flatness constraints to the objective function of the regularization inversion and inverted multiangle dynamic light scattering data, including simulated data of bimodally distributed particles (466/915 nm, 316/470 nm) and trimodally distributed particles (324/601/871 nm), and measured data of bimodally distributed particles (306/974 nm, 300/502 nm). The inversion results show that multiple-penalty-weighted regularization inversion not only improves the utilization of particle size information, but also effectively eliminates false peaks and burrs in the inverted particle size distributions, further improves peak resolution under noisy conditions, and thereby enhances the weighting effects of the information-weighted inversion.
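A rough Python sketch of the multiple-penalty idea: a weighted data-misfit term plus norm and first-difference (flatness) penalties, solved as a stacked non-negative least squares problem. Matrix and parameter names are illustrative, not the authors' notation.

```python
import numpy as np
from scipy.optimize import nnls

def weighted_flatness_inversion(A, g, w, alpha, beta):
    """Minimize ||W(Af - g)||^2 + alpha^2 ||f||^2 + beta^2 ||Df||^2, f >= 0,
    where A is the scattering kernel, g the autocorrelation data, W a
    diagonal matrix of per-delay information weights, and D a
    first-difference (flatness) operator. Stacking the penalties under the
    weighted system turns the problem into a single NNLS solve."""
    n = A.shape[1]
    W = np.diag(w)
    D = np.diff(np.eye(n), axis=0)           # first-difference rows: f[i+1] - f[i]
    A_aug = np.vstack([W @ A, alpha * np.eye(n), beta * D])
    g_aug = np.concatenate([W @ g, np.zeros(n), np.zeros(n - 1)])
    f, _ = nnls(A_aug, g_aug)                # non-negative particle size distribution
    return f
```

Raising beta flattens spurious burrs between true peaks, which is the mechanism the abstract credits for suppressing false peaks under noise.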


2017 ◽  
Vol 59 (6) ◽  
Author(s):  
Matteo Taroni ◽  
Warner Marzocchi ◽  
Pamela Roselli

The quantitative assessment of the performance of earthquake prediction and/or forecast models is essential for evaluating their applicability for risk reduction purposes. Here we assess the earthquake prediction performance of the CN model applied to the Italian territory. This model has been widely publicized in the Italian news media, but a careful assessment of its prediction performance is still lacking. In this paper we evaluate the results obtained so far by the CN algorithm applied to the Italian territory, adopting testing procedures that are widely used and under development in the Collaboratory for the Study of Earthquake Predictability (CSEP) network. Our results show that the CN prediction performance is comparable to that of a stationary Poisson model; that is, the CN predictions do not add more than what may be expected from random chance.
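For intuition, a minimal sketch of the kind of alarm-versus-chance comparison the paper draws: an uninformative strategy catches earthquakes in proportion to the fraction of time it keeps the alarm on, so a probability gain near 1 means no skill beyond a stationary Poisson baseline. This is not the CSEP test suite; all names are illustrative.

```python
def alarm_skill(event_times, alarm_windows, t_start, t_end):
    """Compare an alarm-based prediction (e.g. CN-style times of increased
    probability) against random chance. alarm_windows is a list of
    (start, end) intervals; event_times is a list of earthquake times."""
    total_time = t_end - t_start
    on_time = sum(b - a for a, b in alarm_windows)
    hits = sum(any(a <= t < b for a, b in alarm_windows) for t in event_times)
    hit_rate = hits / len(event_times)       # fraction of earthquakes inside alarms
    tau = on_time / total_time               # fraction of time spent in alarm
    return hit_rate, tau, hit_rate / tau     # probability gain vs. chance
```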


2020 ◽  
Vol 9 (7) ◽  
pp. 440
Author(s):  
Junfang Gong ◽  
Jay Lee ◽  
Shunping Zhou ◽  
Shengwen Li

Human activity events are often recorded with their geographic locations and temporal stamps, which form spatial patterns of the events during individual time periods. The temporal attributes of these events help us understand the evolution of spatial processes over time. A challenge researchers still face is that existing methods tend to treat all events the same when evaluating the spatiotemporal pattern of events that have different properties. This article suggests a method for assessing the level of spatiotemporal clustering, or spatiotemporal autocorrelation, that may exist in a set of human activity events associated with different categorical attributes. The method extends the Voronoi structure from 2D to 3D and integrates a sliding-window model as an approach to spatiotemporal tessellation of a space-time volume defined by a study area and time period. Furthermore, an index was developed to evaluate the partial spatiotemporal clustering level of one of the two event categories against the other. The proposed method was applied to simulated data and to a real-world dataset as a case study. Experimental results show that the method effectively measures the level of spatiotemporal clustering among human activity events of multiple categories. The method can be applied to the analysis of large volumes of human activity events because of its computational efficiency.
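The paper's index is built on a 3D Voronoi tessellation of the space-time volume; as a rough stand-in that conveys the same idea with less machinery, the sketch below uses k-nearest neighbors in a scaled space-time embedding, which is a plainly different (simpler) technique than the authors' tessellation-based index.

```python
import numpy as np
from scipy.spatial import cKDTree

def partial_clustering_index(xy, t, category, time_scale=1.0, k=5):
    """Embed events in a space-time volume (time rescaled to be
    commensurate with distance), then ask whether category-1 events have
    more category-1 neighbors than the global share predicts. Values > 1
    suggest partial spatiotemporal clustering of category 1 against
    category 0. time_scale and k are tuning assumptions."""
    category = np.asarray(category)
    pts = np.column_stack([xy, np.asarray(t) * time_scale])
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)        # first neighbor is the point itself
    neigh = category[idx[:, 1:]]             # labels of the k nearest neighbors
    mask = category == 1
    observed = neigh[mask].mean()            # share of cat-1 neighbors around cat-1 events
    expected = mask.mean()                   # global share of cat-1 events
    return observed / expected
```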


Author(s):  
Konstantina Charmpi ◽  
Bernard Ycart

Gene Set Enrichment Analysis (GSEA) is a basic tool for genomic data treatment. Its test statistic is based on a cumulated weight function, and its distribution under the null hypothesis is evaluated by Monte-Carlo simulation. Here, it is proposed to subtract from the cumulated weight function its asymptotic expectation, then scale it. Under the null hypothesis, the convergence in distribution of the new test statistic is proved using the theory of empirical processes. The limiting distribution needs to be computed only once and can then be used for many different gene sets, which results in large savings of computing time. The test defined in this way has been called the Weighted Kolmogorov-Smirnov (WKS) test. Using expression data from the GEO repository, tested against the MSigDB C2 collection, a comparison between the classical GSEA test and the new procedure has been conducted. Our conclusion is that, beyond its mathematical and algorithmic advantages, the WKS test could in many cases be more informative than the classical GSEA test.
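To illustrate the centering-and-scaling idea in its simplest, unweighted special case: under exchangeable gene labels, the centered cumulative hit indicator behaves like a Brownian bridge, so its supremum can be compared against the Kolmogorov distribution once and for all, with no per-gene-set Monte-Carlo. The paper's weighted statistic and its limit are more involved; this sketch only conveys the mechanism.

```python
import numpy as np

def centered_scaled_statistic(weights, in_set):
    """Rank genes by decreasing weight, cumulate the gene-set membership
    indicator, subtract its null expectation, and scale by sqrt(n p (1-p));
    the supremum of the resulting process is asymptotically Kolmogorov-
    distributed under the null (unweighted illustration only)."""
    order = np.argsort(weights)[::-1]
    hit = np.asarray(in_set)[order].astype(float)
    n, p = len(hit), hit.mean()
    running = np.cumsum(hit - p) / np.sqrt(n * p * (1 - p))
    return np.max(np.abs(running))           # compare to Kolmogorov critical values
```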


2019 ◽  
Vol 21 (1) ◽  
pp. 12-20 ◽  
Author(s):  
Didit Budi Nugroho ◽  
Bambang Susanto ◽  
Kezia Natalia Putri Prasetia ◽  
Rebecca Rorimpandey

This study proposes two new classes of GARCH(1,1) models, obtained by applying Tukey transformations to the returns and to the lagged variance. The behavior of return volatility was investigated on the basis of models with normal and Student-t distributions for the return error. The competing models were estimated using the Excel Solver and Matlab tools. The empirical analysis is based on simulated data, daily IDR/USD exchange rates, and the daily FTSE100 and TOPIX stock indices. This study recommends the use of Excel Solver for finance academics and practitioners working on volatility using GARCH(1,1) models. Our empirical findings conclude that GARCH(1,1) models under Tukey transformations should be considered in risk management decisions, since they describe the returns and volatility of financial time series and their stylized facts, including fat tails and mean reversion, more appropriately than the standard model. The Tukey-transformed returns imply a shorter volatility half-life, and thus this study suggests that investors should invest in the observed assets over a shorter time period to obtain higher returns.
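A minimal sketch of one of the two classes described (a Tukey ladder-of-powers transform applied to the lagged variance in the GARCH(1,1) recursion), estimated by Gaussian maximum likelihood with SciPy rather than the Excel Solver or Matlab tools the study used; the exact Tukey variant and estimation details in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def tukey(x, lam):
    """Tukey ladder-of-powers transform; lam = 1 is the identity, so the
    standard GARCH(1,1) is nested as a special case."""
    return np.log(x) if lam == 0 else x ** lam

def neg_loglik(params, r):
    """Gaussian GARCH(1,1) with a Tukey-transformed lagged variance:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * tukey(sigma2_{t-1}, lam)."""
    omega, alpha, beta, lam = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # initialize with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * tukey(sigma2[t - 1], lam)
        if not sigma2[t] > 0:
            return np.inf                    # reject invalid variance paths
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# illustrative fit on simulated returns (a stand-in for the IDR/USD data)
rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01
res = minimize(neg_loglik, x0=[1e-6, 0.05, 0.90, 1.0], args=(r,),
               method="Nelder-Mead")
print(res.x)                                 # omega, alpha, beta, lambda
```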

