Maximum Earthquake Size and Seismicity Rate from an ETAS Model with Slip Budget

2020 ◽  
Vol 110 (2) ◽  
pp. 874-885
Author(s):  
David Marsan ◽  
Yen Joe Tan

ABSTRACT We define a seismicity model based on (1) the epidemic-type aftershock sequence model, which accounts for earthquake clustering, and (2) a closed slip budget at long timescales. This is achieved by not permitting an earthquake to have a seismic moment greater than the current seismic moment deficit. As a consequence, the Gutenberg–Richter law is modulated by a smooth upper cutoff whose location can be predicted from the model parameters. We investigate the various regimes of this model, which notably include a regime in which activity does not die off even with a vanishingly small spontaneous (i.e., background) earthquake rate, and one that bears strong statistical similarities to repeating-earthquake time series. Finally, the model relates the earthquake rate to the geodetic moment rate and therefore makes it possible to interpret this relationship in terms of fundamental empirical laws (the Gutenberg–Richter law, the productivity law, and the Omori law) and physical parameters (seismic coupling, tectonic loading rate).
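The moment-budget mechanism can be illustrated with a short simulation. The sketch below is a minimal, hypothetical reading of the abstract (background events only, with assumed parameter values; the full model also includes ETAS aftershock triggering): magnitudes are drawn from the Gutenberg–Richter law but resampled, or the event suppressed, whenever the implied seismic moment would exceed the accumulated moment deficit, which is what bends the magnitude distribution into a smooth upper cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, not the paper's calibrated values
b = 1.0              # Gutenberg-Richter b-value
beta = b * np.log(10)
m_min = 2.0          # minimum magnitude
mu_rate = 0.1        # background rate (events/day)
moment_rate = 1e13   # assumed tectonic moment loading rate (N*m/day)
T = 10_000.0         # catalog duration (days)

def moment(m):
    # Seismic moment from magnitude (Hanks & Kanamori, 1979), in N*m
    return 10 ** (1.5 * m + 9.1)

# Background events only; ETAS triggering is omitted for brevity
times = np.sort(rng.uniform(0, T, rng.poisson(mu_rate * T)))

deficit, last_t, mags = 0.0, 0.0, []
for t in times:
    deficit += moment_rate * (t - last_t)   # loading since last event
    last_t = t
    if moment(m_min) > deficit:
        continue                            # budget cannot afford any event
    # Draw a GR magnitude, resampling while its moment exceeds the
    # current deficit: this caps the largest allowed event
    m = m_min + rng.exponential(1 / beta)
    while moment(m) > deficit:
        m = m_min + rng.exponential(1 / beta)
    deficit -= moment(m)
    mags.append(m)
```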

2015 ◽  
Vol 57 (6) ◽  
Author(s):  
Maura Murru ◽  
Jiancang Zhuang ◽  
Rodolfo Console ◽  
Giuseppe Falcone

<div class="page" title="Page 1"><div class="layoutArea"><div class="column"><p>In this paper, we compare the forecasting performance of several statistical models, which are used to describe the occurrence process of earthquakes in forecasting the short-term earthquake probabilities during the L’Aquila earthquake sequence in central Italy in 2009. These models include the Proximity to Past Earthquakes (PPE) model and two versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. It is shown that both ETAS models work better than the PPE model. However, in comparing the two types of ETAS models, the one with the same fixed exponent coefficient (<span>alpha)</span> = 2.3 for both the productivity function and the scaling factor in the spatial response function (ETAS I), performs better in forecasting the active aftershock sequence than the model with different exponent coefficients (ETAS II), when the Poisson score is adopted. ETAS II performs better when a lower magnitude threshold of 2.0 and the binomial score are used. The reason is found to be that the catalog does not have an event of similar magnitude to the L’Aquila mainshock (M<sub>w</sub> 6.3) in the training period (April 16, 2005 to March 15, 2009), and the (<span>alpha)</span>-value is underestimated, thus the forecast seismicity is underestimated when the productivity function is extrapolated to high magnitudes. We also investigate the effect of the inclusion of small events in forecasting larger events. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitudes similar to the mainshock when forecasting seismicity during an aftershock sequence.</p></div></div></div>


Author(s):  
Hideo Aochi ◽  
Julie Maury ◽  
Thomas Le Guenan

Abstract The seismicity evolution in Oklahoma between 2010 and 2018 is analyzed systematically using an epidemic-type aftershock sequence model. To retrieve the nonstationary seismicity component, we systematically use a moving window of 200 events, each within a radius of 20 km at grid points spaced every 0.2°. Fifty-three areas in total are selected for our analysis. The evolution of the background seismicity rate μ is successfully retrieved toward its peak at the end of 2014 and during 2015, whereas the triggering parameter K is stable, slightly decreasing when the seismicity is activated. Consequently, the ratio of μ to the observed seismicity rate is not stationary. The acceleration of μ can be fit with an exponential equation relating μ to the normalized injected volume. After the peak, the attenuation phase can be fit with an exponential equation with time since peak as the independent variable. As a result, the evolution of induced seismicity can be followed statistically after it begins. The turning points, such as activation of the seismicity and timing of the peak, are difficult to identify solely from this statistical analysis and require a subsequent mechanical interpretation.
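The two empirical fits named in this abstract have simple functional forms. Below is a hedged sketch, using synthetic stand-in data for the windowed background-rate estimates (all values assumed for illustration): an exponential in normalized injected volume for the activation phase, and an exponential decay in time since the peak for the attenuation phase.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic stand-ins for the windowed ETAS outputs: normalized injected
# volume per window and the matching background-rate estimates mu
v_norm = np.linspace(0.0, 1.0, 30)
mu_est = 0.02 * np.exp(3.0 * v_norm) * rng.lognormal(0, 0.2, v_norm.size)

def growth(v, a, c):
    """Activation phase: mu = a * exp(c * V_norm), as in the abstract."""
    return a * np.exp(c * v)

def decay(t, m0, tau):
    """Attenuation phase: mu = m0 * exp(-t / tau) after the peak."""
    return m0 * np.exp(-t / tau)

(a, c), _ = curve_fit(growth, v_norm, mu_est, p0=(0.01, 1.0))
print(a, c)   # recovered growth parameters
```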


2017 ◽  
Vol 43 (4) ◽  
pp. 1994
Author(s):  
A.C. Astiopoulos ◽  
E. Papadimitriou ◽  
V. Karakostas ◽  
D. Gospodinov ◽  
G. Drakatos

The statistical properties of aftershock occurrence are among the main issues in investigating the earthquake generation process. Seismicity rate changes during a seismic sequence, detected by the application of statistical models, have proved to be precursors of strong events occurring during the seismic excitation. Applying these models provides a tool for assessing the imminent seismic hazard, often by estimating the expected occurrence rate and comparing the predicted rate with the observed one. The aim of this study is to examine the temporal distribution, and especially the occurrence-rate variations, of aftershocks for two seismic sequences, the first near Skyros island in 2001 and the second near Lefkada island in 2003, in order to detect and characterize rate changes in connection with the evolution of the seismic activity. The analysis is performed with space–time stochastic models developed on the basis of aftershock clustering studies and specific assumptions. The models applied are the Modified Omori Formula (MOF), the Epidemic Type Aftershock Sequence (ETAS) model, and the Restricted Epidemic Type Aftershock Sequence (RETAS) model. The modelling of seismicity rate changes during the evolution of these sequences is then attempted in association with, and as evidence of, static stress changes.
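The three models named here share the modified Omori kernel. As a point of reference, the sketch below writes out the MOF rate and the temporal ETAS conditional intensity that generalizes it (parameter names are generic; RETAS differs in letting only events above a magnitude threshold trigger their own aftershocks).

```python
import numpy as np

def mof_rate(t, K, c, p):
    """Modified Omori formula: aftershock rate K / (t + c)^p at time t
    after the mainshock."""
    return K / (t + c) ** p

def etas_rate(t, history_t, history_m, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity: background rate mu plus an
    Omori-type contribution from every earlier event, scaled
    exponentially by its magnitude above the threshold m0."""
    dt = t - np.asarray(history_t, float)
    m = np.asarray(history_m, float)
    past = dt > 0
    trig = K * np.exp(alpha * (m[past] - m0)) / (dt[past] + c) ** p
    return mu + trig.sum()
```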


2012 ◽  
Vol 2 (1) ◽  
pp. 8 ◽  
Author(s):  
Jiancang Zhuang

Based on the ETAS (epidemic-type aftershock sequence) model, which describes the short-term clustering of earthquake occurrence, this paper presents theories and techniques for evaluating the probability distribution of the maximum magnitude in a given space-time window, where the Gutenberg-Richter law for earthquake magnitude distribution cannot be applied directly. The distribution of the maximum magnitude in a given space-time volume is determined in the long term by the background seismicity rate and the magnitude distribution of the largest events in each earthquake cluster. The techniques introduced were applied to seismicity in the Japan region from 1926 to 2009. The regions most likely to have big earthquakes are found to lie along the Tohoku (northeastern Japan) Arc and the Kuril Arc, both with much higher probabilities than the offshore Nankai and Tokai regions.
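The long-term statement in this abstract has a compact probabilistic form. If earthquake clusters arrive as a Poisson process at the background rate and F(m) is the distribution of each cluster's largest magnitude, then the window maximum stays below m exactly when no cluster maximum exceeds m. A minimal sketch, with an assumed exponential (GR-type) form for F:

```python
import numpy as np

def p_max_below(m, rate, T, F):
    """P(maximum magnitude <= m) when clusters arrive as a Poisson
    process with the given background rate over duration T and each
    cluster's largest magnitude has CDF F:
    P = exp(-rate * T * (1 - F(m)))."""
    return np.exp(-rate * T * (1.0 - F(m)))

# Example with an assumed exponential CDF for cluster maxima
beta, m0 = np.log(10), 4.0
F = lambda m: 1.0 - np.exp(-beta * (m - m0))
print(p_max_below(7.0, rate=5.0, T=10.0, F=F))
```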


2017 ◽  
Vol 50 (3) ◽  
pp. 1283
Author(s):  
K.A. Adamaki ◽  
R.G. Roberts

We investigate temporal changes in seismic activity observed in the West Corinth Gulf and North-West Peloponnese from 2008 to 2010, during which two major earthquake sequences took place in the area (in 2008 and 2010). Our aim is to analyse Greek seismicity in order to confirm the existence or non-existence of seismic precursors prior to the strongest earthquakes. Perhaps because the area is geologically and tectonically complex, we found that the data could not be fit well by a consistent Epidemic Type Aftershock Sequence (ETAS) model, nor could we unambiguously identify foreshocks to individual mainshocks. We therefore sought patterns in aggregated foreshock catalogues. We set a magnitude threshold (M3.5) above which all earthquakes detected in the study area are treated as “mainshocks”, and combined all data preceding these into a single foreshock catalogue. This reveals an increase in seismicity rate that is not robustly observable in individual cases. The observed effect is significantly greater than that consistent with stochastic models, including ETAS, indicating genuine foreshock activity with potentially useful precursory power, provided sufficient data are available, i.e. if the magnitude of completeness is sufficiently low.
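The aggregation step can be made concrete. Below is a minimal sketch of building the stacked foreshock catalogue described here, using the M3.5 mainshock threshold from the text; the 30-day look-back window is an assumption for illustration, not a value from the study.

```python
import numpy as np

def stack_foreshocks(times, mags, m_main=3.5, window=30.0):
    """Aggregated foreshock catalogue: for every event with magnitude
    >= m_main (treated as a 'mainshock'), collect the times of all
    preceding events within `window` days, aligned so each mainshock
    sits at time zero. Histogramming the stacked relative times then
    exposes a rate increase not visible in individual sequences."""
    times = np.asarray(times, float)
    mags = np.asarray(mags, float)
    rel = []
    for t0 in times[mags >= m_main]:
        sel = (times < t0) & (times >= t0 - window)
        rel.extend(times[sel] - t0)   # negative: days before mainshock
    return np.array(rel)
```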


Author(s):  
Gordon J. Ross

ABSTRACT The epidemic-type aftershock sequence (ETAS) model is widely used in seismic forecasting. However, most studies of ETAS use point estimates for the model parameters, which ignores the inherent uncertainty that arises from estimating these from historical earthquake catalogs, resulting in misleadingly optimistic forecasts. In contrast, Bayesian statistics allows parameter uncertainty to be explicitly represented and fed into the forecast distribution. Despite its growing popularity in seismology, the application of Bayesian statistics to the ETAS model has been limited by the complex nature of the resulting posterior distribution, which makes it infeasible to apply to catalogs containing more than a few hundred earthquakes. To combat this, we develop a new framework for estimating the ETAS model in a fully Bayesian manner, which can be efficiently scaled up to large catalogs containing thousands of earthquakes. We also provide easy-to-use software that implements our method.
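To make the estimation problem concrete, the sketch below writes the temporal ETAS log-likelihood and a plain random-walk Metropolis sampler over its parameters. This is a naive O(n²) illustration of fully Bayesian ETAS under assumed priors, not the paper's scalable framework or its accompanying software.

```python
import numpy as np

def etas_loglik(theta, t, m, T, m0):
    """Temporal ETAS log-likelihood: sum of log-intensities at the event
    times minus the integrated intensity over [0, T] (requires p > 1)."""
    mu, K, alpha, c, p = theta
    if min(mu, K, c) <= 0 or p <= 1:
        return -np.inf
    t, m = np.asarray(t, float), np.asarray(m, float)
    logl = 0.0
    for i in range(len(t)):
        dt = t[i] - t[:i]
        lam = mu + np.sum(K * np.exp(alpha * (m[:i] - m0)) / (dt + c) ** p)
        logl += np.log(lam)
    # Closed-form integral of each event's Omori kernel up to time T
    integ = K * np.exp(alpha * (m - m0)) \
            * (c ** (1 - p) - (T - t + c) ** (1 - p)) / (p - 1)
    return logl - mu * T - integ.sum()

def metropolis(t, m, T, m0, theta0, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis on (mu, K, alpha, c, p); the symmetric
    log-space walk corresponds to an implicit log-uniform prior."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    ll = etas_loglik(theta, t, m, T, m0)
    chain = []
    for _ in range(n_iter):
        prop = theta * np.exp(step * rng.normal(size=theta.size))
        llp = etas_loglik(prop, t, m, T, m0)
        if np.log(rng.uniform()) < llp - ll:
            theta, ll = prop, llp
        chain.append(theta.copy())
    return np.array(chain)   # posterior samples feed the forecast
```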


Author(s):  
Sebastian Hainzl

ABSTRACT The epidemic-type aftershock sequence (ETAS) model is a powerful statistical model for explaining and forecasting the spatiotemporal evolution of seismicity. However, its parameter estimation can be strongly biased by catalog deficiencies, particularly short-term incompleteness related to missing events in phases of high seismic activity. Recent studies have shown that these short-term fluctuations of the completeness magnitude can be explained by the blindness of detection algorithms after earthquakes, which prevents the detection of subsequent smaller-magnitude events. Based on this assumption, I derive direct relations between the true and detectable seismicity rates and magnitude distributions. These relations include only one additional parameter, the so-called blind time Tb, and lead to a closed-form maximum-likelihood formulation for estimating the ETAS parameters while directly accounting for varying completeness. Tests using synthetic simulations show that the true parameters can be resolved from incomplete catalogs. Finally, I apply the new model to California’s most prominent mainshock–aftershock sequences of recent decades. The results show that the model leads to superior fits, with Tb decreasing over time, indicating improved detection algorithms. The estimated parameters differ significantly from those of the standard approach, indicating higher b-values and larger trigger potentials than previously thought.
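The flavor of the rate relation can be conveyed with a dead-time analogy. The one-liner below is a simplified stand-in, assuming each detected event blinds the network for a time Tb in the manner of a non-paralyzable detector; the paper's actual relations are magnitude-dependent and derived from the stated detection-blindness assumption, so this is illustration, not the published formula.

```python
def detected_rate(true_rate, Tb):
    """Dead-time analogy for short-term incompleteness: if every
    detected event hides subsequent smaller events for a blind time Tb,
    the observed rate saturates near 1/Tb as the true rate grows."""
    return true_rate / (1.0 + true_rate * Tb)
```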


Author(s):  
Michael Link ◽  
Zheng Qian

Abstract In recent years, procedures for updating analytical model parameters have been developed that minimize the differences between analytical and, preferably, experimental modal analysis results. Provided that the initial analysis model contains parameters capable of describing possible damage, these techniques can also be used for damage detection; in this case the parameters are updated using test data taken before and after the damage. For complex structures with hundreds of parameters, one generally has to measure the modal data at many locations and reduce the number of unknown parameters by some kind of localization technique, because the measurement information is generally not sufficient to identify parameters distributed equally all over the structure. Another way of reducing the number of parameters is presented here. The method is based on the idea of measuring only part of the structure and replacing the residual structure by dynamic boundary conditions, which describe the dynamic stiffness at the interfaces between the measured main structure and the remaining unmeasured residual structure. This approach has the advantage that testing can be concentrated on critical areas where structural modifications are expected, either due to damage or due to intended design changes. The dynamic boundary conditions are expressed in Craig-Bampton (CB) format by transforming the mass and stiffness matrices of the unmeasured residual structure to the interface degrees of freedom (DOFs) and to the modal DOFs of the residual structure fixed at the interface. The dynamic boundary stiffness concentrates all physical parameters of the residual structure in only a few parameters, which remain open for updating. In this approach, damage or modelling errors within the unmeasured residual structure are taken into account only in a global sense, whereas the measured main structure is parametrized locally, as usual, by factoring mass and stiffness submatrices that define the type and location of the physical parameters to be identified. The procedure was applied to identify the design parameters of a beam-type frame structure with bolted joints using experimental modal data.
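The reduction described here follows the standard Craig-Bampton construction. Below is a minimal numpy sketch, assuming the residual structure's full mass and stiffness matrices and the interface DOF indices are given; the reduced matrices concentrate the residual structure's physics in the few interface-plus-modal coordinates left open for updating.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce the residual structure's (K, M) to its interface DOFs plus
    a few fixed-interface modal DOFs (Craig-Bampton format). `boundary`
    holds the interface DOF indices; M must be positive definite."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)            # interior DOFs
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit interface motion
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition
    _, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]                       # keep the lowest modes
    # Assemble T mapping [interface; modal] coordinates to full DOFs
    nb = len(b)
    T = np.zeros((n, nb + n_modes))
    T[b, :nb] = np.eye(nb)
    T[np.ix_(i, np.arange(nb))] = Psi
    T[np.ix_(i, np.arange(nb, nb + n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T              # reduced K and M
```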


2020 ◽  
Vol 640 ◽  
pp. A37 ◽  
Author(s):  
A. Ignesti ◽  
G. Brunetti ◽  
M. Gitti ◽  
S. Giacintucci

Context. A large fraction of cool-core clusters are known to host diffuse, steep-spectrum radio sources, called radio mini-halos, in their cores. Mini-halos reveal the presence of relativistic particles on scales of hundreds of kiloparsecs, beyond the scales directly influenced by the central active galactic nucleus (AGN), but the nature of the mechanism that produces such a population of radio-emitting, relativistic electrons is still debated. It is also unclear to what extent the AGN plays a role in the formation of mini-halos by providing the seeds of the relativistic population. Aims. In this work we explore the connection between thermal and non-thermal components of the intra-cluster medium in a sample of radio mini-halos and we study the implications within the framework of a hadronic model for the origin of the emitting electrons. Methods. For the first time, we studied the thermal and non-thermal connection by carrying out a point-to-point comparison of the radio and the X-ray surface brightness in a sample of radio mini-halos. We extended the method generally applied to giant radio halos by considering the effects of a grid randomly generated through a Monte Carlo chain. Then we used the radio and X-ray correlation to constrain the physical parameters of a hadronic model and we compared the model predictions with current observations. Results. Contrary to what is generally reported in the literature for giant radio halos, we find that the mini-halos in our sample have super-linear scaling between radio and X-rays, which suggests a peaked distribution of relativistic electrons and magnetic field. We explore the consequences of our findings on models of mini-halos. We use the four mini-halos in the sample that have a roundish brightness distribution to constrain model parameters in the case of a hadronic origin of the mini-halos. Specifically, we focus on a model where cosmic rays are injected by the central AGN and they generate secondaries in the intra-cluster medium, and we assume that the role of turbulent re-acceleration is negligible. This simple model allows us to constrain the AGN cosmic ray luminosity in the range ∼10⁴⁴–10⁴⁶ erg s⁻¹ and the central magnetic field in the range 10–40 μG. The resulting γ-ray fluxes calculated assuming these model parameters do not violate the upper limits on γ-ray diffuse emission set by the Fermi-LAT telescope. Further studies are now required to explore the consistency of these large magnetic fields with Faraday rotation studies and to study the interplay between the secondary electrons and the intra-cluster medium turbulence.
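The point-to-point step has a simple core. Below is a hedged sketch of the scaling fit as ordinary least squares in log-log space over matched surface-brightness cells, with synthetic toy data; the paper's analysis additionally uses Monte Carlo randomized grids and more careful regression.

```python
import numpy as np

def ptp_slope(I_radio, I_xray):
    """Fit log10(I_R) = k * log10(I_X) + const over matched surface-
    brightness cells; k > 1 is the 'super-linear' scaling reported for
    these mini-halos, implying peaked electron and field distributions."""
    k, const = np.polyfit(np.log10(I_xray), np.log10(I_radio), 1)
    return k

# Toy usage with synthetic cells following k = 1.3
rng = np.random.default_rng(2)
I_x = 10 ** rng.uniform(-2, 0, 50)
I_r = I_x ** 1.3 * 10 ** rng.normal(0, 0.05, 50)
print(ptp_slope(I_r, I_x))   # approximately 1.3
```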

