Bayesian Estimation of the ETAS Model for Earthquake Occurrences

Author(s):  
Gordon J. Ross

ABSTRACT The epidemic-type aftershock sequence (ETAS) model is widely used in seismic forecasting. However, most studies of ETAS use point estimates for the model parameters, which ignores the inherent uncertainty that arises from estimating them from historical earthquake catalogs and results in misleadingly optimistic forecasts. In contrast, Bayesian statistics allows parameter uncertainty to be explicitly represented and fed into the forecast distribution. Despite its growing popularity in seismology, the application of Bayesian statistics to the ETAS model has been limited by the complex nature of the resulting posterior distribution, which makes it infeasible to apply to catalogs containing more than a few hundred earthquakes. To address this, we develop a new framework for estimating the ETAS model in a fully Bayesian manner, which can be efficiently scaled up to large catalogs containing thousands of earthquakes. We also provide easy-to-use software that implements our method.
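For readers unfamiliar with the model these abstracts revolve around, the temporal ETAS conditional intensity (a constant background rate plus Omori–Utsu decay terms weighted by an exponential productivity law) can be sketched in a few lines. The parameter values below are illustrative placeholders, not estimates from any of the papers listed here:

```python
import math

def etas_intensity(t, history, mu=0.2, K=0.05, alpha=1.8, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity of a purely temporal ETAS model at time t (days).

    history: list of (time, magnitude) pairs of events that occurred before t.
    mu, K, alpha, c, p, m0 are illustrative values, not fitted parameters.
    """
    rate = mu  # spontaneous (background) seismicity rate
    for ti, mi in history:
        if ti < t:
            # Omori-Utsu decay, scaled by the exponential productivity law
            rate += K * math.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate

catalog = [(0.0, 5.5), (0.3, 4.1)]   # (days, magnitude)
lam = etas_intensity(1.0, catalog)   # intensity one day after the first event
```

Point estimation plugs a single (mu, K, alpha, c, p) into this function; the Bayesian framework described above instead propagates a posterior sample of such parameter vectors into the forecast.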

2015 ◽  
Vol 57 (6) ◽  
Author(s):  
Maura Murru ◽  
Jiancang Zhuang ◽  
Rodolfo Console ◽  
Giuseppe Falcone

In this paper, we compare the forecasting performance of several statistical models used to describe the occurrence process of earthquakes, in forecasting the short-term earthquake probabilities during the 2009 L'Aquila earthquake sequence in central Italy. These models include the Proximity to Past Earthquakes (PPE) model and two versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. It is shown that both ETAS models work better than the PPE model. However, in comparing the two types of ETAS models, the one with the same fixed exponent coefficient (alpha = 2.3) for both the productivity function and the scaling factor in the spatial response function (ETAS I) performs better in forecasting the active aftershock sequence than the model with different exponent coefficients (ETAS II), when the Poisson score is adopted. ETAS II performs better when a lower magnitude threshold of 2.0 and the binomial score are used. The reason is found to be that the catalog does not have an event of similar magnitude to the L'Aquila mainshock (Mw 6.3) in the training period (April 16, 2005 to March 15, 2009), so the alpha-value is underestimated and thus the forecast seismicity is underestimated when the productivity function is extrapolated to high magnitudes. We also investigate the effect of the inclusion of small events in forecasting larger events. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitudes similar to the mainshock when forecasting seismicity during an aftershock sequence.
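The Poisson-score comparison used above can be illustrated as an information gain per event: the difference in Poisson log-likelihood between two forecasts, divided by the number of observed events. The bin rates and counts below are made up for illustration; they are not data from the L'Aquila study:

```python
import math

def poisson_log_score(forecast_rates, observed_counts):
    """Poisson log-likelihood of observed per-bin counts under forecast rates."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll

def information_gain_per_event(ll_model, ll_reference, n_events):
    """Mean log-likelihood advantage of a model over a reference forecast."""
    return (ll_model - ll_reference) / n_events

# Hypothetical space-time bin rates for two competing forecasts, plus counts
obs = [2, 0, 1]
ll_etas = poisson_log_score([1.5, 0.2, 0.8], obs)
ll_ppe = poisson_log_score([0.9, 0.5, 0.4], obs)
gain = information_gain_per_event(ll_etas, ll_ppe, sum(obs))
```

A positive gain means the first forecast assigned more probability to what actually happened, which is the sense in which the ETAS models "work better" than the PPE model above.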


2020 ◽  
Vol 91 (3) ◽  
pp. 1567-1578 ◽  
Author(s):  
Kevin R. Milner ◽  
Edward H. Field ◽  
William H. Savran ◽  
Morgan T. Page ◽  
Thomas H. Jordan

Abstract The first Uniform California Earthquake Rupture Forecast, Version 3–epidemic-type aftershock sequence (UCERF3-ETAS) aftershock simulations were running on a high-performance computing cluster within 33 min of the 4 July 2019 M 6.4 Searles Valley earthquake. UCERF3-ETAS, an extension of the third Uniform California Earthquake Rupture Forecast (UCERF3), is the first comprehensive, fault-based, epidemic-type aftershock sequence (ETAS) model. It produces ensembles of synthetic aftershock sequences both on and off explicitly modeled UCERF3 faults to answer a key question repeatedly asked during the Ridgecrest sequence: What are the chances that the earthquake that just occurred will turn out to be the foreshock of an even bigger event? As the sequence unfolded—including one such larger event, the 5 July 2019 M 7.1 Ridgecrest earthquake almost 34 hr later—we updated the model with observed aftershocks, finite-rupture estimates, sequence-specific parameters, and alternative UCERF3-ETAS variants. Although configuring and running UCERF3-ETAS at the time of the earthquake was not fully automated, considerable effort had been focused in 2018 on improving model documentation and ease of use with a public GitHub repository, command line tools, and flexible configuration files. These efforts allowed us to quickly respond and efficiently configure new simulations as the sequence evolved. Here, we discuss lessons learned during the Ridgecrest sequence, including sensitivities of fault triggering probabilities to poorly constrained finite-rupture estimates and model assumptions, as well as implications for UCERF3-ETAS operationalization.


2020 ◽  
Vol 110 (2) ◽  
pp. 874-885
Author(s):  
David Marsan ◽  
Yen Joe Tan

ABSTRACT We define a seismicity model based on (1) the epidemic-type aftershock sequence model, which accounts for earthquake clustering, and (2) a closed slip budget at long timescales. This is achieved by not permitting an earthquake to have a seismic moment greater than the current seismic moment deficit, which causes the Gutenberg–Richter law to be modulated by a smooth upper cutoff whose location can be predicted from the model parameters. We investigate the various regimes of this model, which include, in particular, a regime in which the activity does not die off even with a vanishingly small spontaneous (i.e., background) earthquake rate, and one that bears strong statistical similarities with repeating-earthquake time series. Finally, this model relates the earthquake rate to the geodetic moment rate and therefore allows one to make sense of this relationship in terms of fundamental empirical laws (the Gutenberg–Richter law, the productivity law, and the Omori law) and physical parameters (seismic coupling, tectonic loading rate).
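The moment-budget constraint described above can be sketched as rejection sampling: draw magnitudes from the Gutenberg–Richter law, but discard any event whose seismic moment exceeds the available deficit, which effectively tapers the distribution's upper tail. This is a toy illustration of the mechanism, not the authors' implementation; b, m_min, and the sampling scheme are assumptions:

```python
import math
import random

def moment(m):
    """Seismic moment (N*m) from moment magnitude via the standard relation."""
    return 10 ** (1.5 * m + 9.1)

def sample_magnitude(deficit, b=1.0, m_min=4.0, rng=random.Random(42)):
    """Draw a Gutenberg-Richter magnitude, rejecting any event whose moment
    exceeds the current seismic moment deficit (illustrative scheme only)."""
    while True:
        u = rng.random()
        m = m_min - math.log10(1 - u) / b   # inverse-CDF sample of the GR law
        if moment(m) <= deficit:            # closed slip budget: cap the moment
            return m

# With a deficit equal to the moment of an Mw 7.0 event, no draw can exceed 7.0
m = sample_magnitude(deficit=moment(7.0))
```

As the deficit shrinks, the admissible magnitude range narrows, producing the smooth upper cutoff of the Gutenberg–Richter law mentioned in the abstract.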


Author(s):  
G Petrillo ◽  
E Lippiello

Summary The Epidemic Type Aftershock Sequence (ETAS) model provides a good description of the post-seismic spatio-temporal clustering of seismicity and is also able to capture some features of the increase of seismic activity caused by foreshocks. Recent results, however, have shown that the number of foreshocks observed in instrumental catalogs is significantly larger than the one predicted by the ETAS model. Here we show that it is possible to keep an epidemic description of post-seismic activity and, at the same time, to incorporate pre-seismic temporal clustering related to foreshocks. Taking into account also the short-term incompleteness of instrumental catalogs, we present a model which achieves a very good description of southern California seismicity, on both the aftershock and the foreshock side. Our results indicate that the existence of a preparatory phase anticipating mainshocks represents the most plausible explanation for the occurrence of foreshocks.




Entropy ◽  
2019 ◽  
Vol 21 (2) ◽  
pp. 173 ◽  
Author(s):  
Eugenio Lippiello ◽  
Cataldo Godano ◽  
Lucilla de Arcangelis

An increase of seismic activity is often observed before large earthquakes. The events responsible for this increase are usually named foreshocks, and their occurrence probably represents the most reliable precursory pattern. Many statistical features of foreshocks can be interpreted in terms of the standard mainshock-to-aftershock triggering process and are recovered in the Epidemic Type Aftershock Sequence (ETAS) model. Here we present a statistical study of instrumental seismic catalogs from four different geographic regions. We focus on some common features of foreshocks in the four catalogs which cannot be reproduced by the ETAS model. In particular, we find in instrumental catalogs a significantly larger number of foreshocks than predicted by the ETAS model. We show that this foreshock excess cannot be attributed to catalog incompleteness. We therefore propose a generalized formulation of the ETAS model, the ETAFS model, which explicitly includes foreshock occurrence. Statistical features of aftershocks and foreshocks in the ETAFS model are in very good agreement with instrumental results.


2018 ◽  
Vol 66 (6) ◽  
pp. 1359-1373 ◽  
Author(s):  
Nader Davoudi ◽  
Hamid Reza Tavakoli ◽  
Mehdi Zare ◽  
Abdollah Jalilian

2019 ◽  
Vol 109 (6) ◽  
pp. 2356-2366 ◽  
Author(s):  
Ganyu Teng ◽  
Jack W. Baker

Abstract This study is an evaluation of the suitability of several declustering methods for induced seismicity and their impacts on hazard analysis of the Oklahoma–Kansas region. We considered the methods proposed by Gardner and Knopoff (1974), Reasenberg (1985), and Zaliapin and Ben-Zion (2013), and the stochastic declustering method (Zhuang et al., 2002) based on the epidemic-type aftershock sequence (ETAS) model (Ogata, 1988, 1998). The results show that the choice of declustering method has a significant impact on the declustered catalog and the resulting hazard analysis of the Oklahoma–Kansas region. The Gardner and Knopoff method, which is currently implemented in the U.S. Geological Survey one-year seismic-hazard forecast for the central and eastern United States, has unexpected features when used for this induced-seismicity catalog. It removes 80% of earthquakes and fails to reflect the changes in background rates that have occurred in the past few years. This results in a slight increase in the hazard level from 2016 to 2017, despite a decrease in seismic activity in 2017. The Gardner and Knopoff method also frequently identifies aftershocks with much stronger shaking intensities than their associated mainshocks. These features are mostly due to the window method implemented in the Gardner and Knopoff approach. Compared with the Gardner and Knopoff method, the other three methods are able to capture the changing hazard level in the region. However, the ETAS model potentially overestimates the foreshock effect and generates negligible probabilities of large earthquakes being mainshocks. The Reasenberg and the Zaliapin and Ben-Zion methods have similar performance on catalog declustering and hazard analysis. Compared with the ETAS method, these two methods are easier to implement and faster at generating the declustered catalog.
The results from this study suggest that both the Reasenberg and the Zaliapin and Ben-Zion declustering methods are suitable for declustering and hazard analysis for induced seismicity in the Oklahoma–Kansas region.
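The window method discussed above removes every event that falls inside a magnitude-dependent space-time window of a larger earthquake, which is how it can delete 80% of an induced-seismicity catalog. A minimal sketch follows; the window coefficients are the commonly quoted Gardner-Knopoff (1974) fits and should be treated as approximate:

```python
import math

def gk_windows(m):
    """Approximate Gardner-Knopoff (1974) space and time windows (km, days).
    Coefficients are the widely quoted fits, used here for illustration."""
    d = 10 ** (0.1238 * m + 0.983)
    t = 10 ** (0.032 * m + 2.7389) if m >= 6.5 else 10 ** (0.5409 * m - 0.547)
    return d, t

def decluster(events):
    """events: time-sorted list of (t_days, x_km, y_km, mag).
    Keep an event only if no earlier, larger event holds it in its window."""
    kept = []
    for i, (t, x, y, m) in enumerate(events):
        is_aftershock = False
        for t2, x2, y2, m2 in events[:i]:
            d_win, t_win = gk_windows(m2)
            if (m2 >= m and t - t2 <= t_win
                    and math.hypot(x - x2, y - y2) <= d_win):
                is_aftershock = True
                break
        if not is_aftershock:
            kept.append((t, x, y, m))
    return kept

# Toy catalog: an M 6.0, a nearby M 4.0 the next day, a distant M 5.0 later
cat = [(0.0, 0.0, 0.0, 6.0), (1.0, 5.0, 0.0, 4.0), (500.0, 300.0, 0.0, 5.0)]
mainshocks = decluster(cat)
```

Because the windows depend only on magnitude, not on local seismicity rate, this scheme cannot adapt to the rapidly changing background rates of induced seismicity, which is the failure mode the study documents.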


2020 ◽  
Author(s):  
Christian Grimm ◽  
Martin Käser ◽  
Helmut Küchenhoff

While Probabilistic Seismic Hazard Assessment is commonly based on earthquake catalogues in declustered form, ongoing seismicity in aftershock sequences is known to add significant hazard, which can also increase the damage potential to already affected structures in risk assessment. In particular, so-called earthquake doublets (multiplets), i.e. a cluster mainshock being followed or preceded by one (or more) events of similarly strong magnitude occurring within pre-defined temporal and spatial limits, can cause loss-multiplication effects for the insurance industry, which therefore has a pronounced interest in investigating how frequently earthquake doublets occur worldwide. A widely used method to analyse and simulate the triggering process of earthquake sequences is the Epidemic Type Aftershock Sequence (ETAS) model. We estimate the ETAS model parameters for some regional areas and produce synthetic catalogues, which are then analysed particularly with respect to the occurrence of earthquake doublets and compared to the observed history. In addition, different seismic subduction-type regions of the world are shown to have differing relative frequencies of earthquake doublets. Regression models are used to study whether certain mainshock and local geophysical properties, such as magnitude, dip and rake angle, depth, distance to the subduction plate interface, and velocity of nearby converging subduction plates, have explanatory power for the probability of a cluster containing an earthquake doublet.
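The doublet definition above (a mainshock accompanied by an event of similar magnitude within pre-defined limits) translates directly into a cluster test. The thresholds below (magnitude gap, time, and distance) are hypothetical placeholders, not the limits used in the study:

```python
import math

def has_doublet(cluster, dm=0.4, dt_days=365.0, dr_km=100.0):
    """Return True if the cluster's mainshock is followed or preceded by
    another event of similar magnitude within the given time/space limits.
    dm, dt_days, dr_km are hypothetical thresholds for illustration only.
    cluster: list of (t_days, x_km, y_km, mag)."""
    mainshock = max(cluster, key=lambda e: e[3])
    t0, x0, y0, m0 = mainshock
    for event in cluster:
        if event == mainshock:
            continue
        t, x, y, m = event
        if (m0 - m <= dm and abs(t - t0) <= dt_days
                and math.hypot(x - x0, y - y0) <= dr_km):
            return True
    return False

# An M 7.0 followed a month later by a nearby M 6.8 counts as a doublet
seq = [(0.0, 0.0, 0.0, 7.0), (30.0, 20.0, 0.0, 6.8), (31.0, 25.0, 0.0, 5.0)]
```

Running this test over many ETAS-simulated clusters gives the synthetic doublet frequency that the study compares against the observed history.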


2021 ◽  
Author(s):  
Christian Grimm ◽  
Sebastian Hainzl ◽  
Martin Käser ◽  
Helmut Küchenhoff

Abstract Strong earthquakes cause aftershock sequences that are clustered in time according to a power decay law and in space along their extended rupture, shaping a typically elongated pattern of aftershock locations. A widely used approach to model seismic clustering is the Epidemic Type Aftershock Sequence (ETAS) model, which suffers from three major biases. First, the conventional ETAS approach assumes isotropic spatial triggering, which stands in conflict with observations and geophysical arguments for strong earthquakes. Second, the spatial kernel has unlimited extent, allowing smaller events to exert disproportionate trigger potential over an unrealistically large area. Third, the ETAS model assumes complete event records and neglects the inevitable short-term aftershock incompleteness caused by overlapping coda waves. These three effects can substantially bias the parameter estimation and in particular lead to underestimated cluster sizes. In this article, we combine the approach of Grimm (2021), which introduced a generalized anisotropic and locally restricted spatial kernel, with the ETAS-Incomplete (ETASI) time model of Hainzl (2021) to define an ETASI space-time model with a flexible spatial kernel that resolves the above-mentioned shortcomings. We apply different model versions to a triad of forecasting experiments on the 2019 Ridgecrest sequence and evaluate the prediction quality with respect to cluster size, largest aftershock magnitude, and spatial distribution. The new model provides the potential for more realistic simulations of ongoing aftershock activity, e.g. allowing better predictions of the probability and location of a strong, damaging aftershock, which might be beneficial for short-term risk assessment and disaster response.
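The anisotropy issue above can be made concrete with a toy kernel: instead of decaying with epicentral distance, let the trigger density decay with distance to a finite rupture segment, so aftershock probability is elongated along strike. This is a simplified illustration of the idea, not the Grimm (2021) kernel; d and q are made-up parameters:

```python
import math

def aftershock_density(x, y, rupture, d=1.0, q=1.5):
    """Toy anisotropic spatial trigger density: a power-law decay with the
    distance to a finite rupture segment rather than to the epicentre.
    rupture: ((x1, y1), (x2, y2)) endpoints of the rupture trace in km.
    d (km) and q are illustrative parameters, not a fitted ETASI model."""
    (x1, y1), (x2, y2) = rupture
    # project (x, y) onto the rupture segment to get the nearest point
    vx, vy = x2 - x1, y2 - y1
    wx, wy = x - x1, y - y1
    seg2 = vx * vx + vy * vy
    s = max(0.0, min(1.0, (wx * vx + wy * vy) / seg2)) if seg2 else 0.0
    px, py = x1 + s * vx, y1 + s * vy
    r = math.hypot(x - px, y - py)
    return (r * r + d * d) ** (-q)

rup = ((0.0, 0.0), (40.0, 0.0))              # a 40 km rupture along x
near = aftershock_density(20.0, 1.0, rup)    # on-strike, close to the trace
far = aftershock_density(20.0, 30.0, rup)    # off-strike, far from the trace
```

Under an isotropic kernel centred on the epicentre at (0, 0), both test points would be roughly equally likely; tying the decay to the rupture trace concentrates aftershocks along the elongated pattern the abstract describes.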

