Statistical Seismology
Recently Published Documents

TOTAL DOCUMENTS: 28 (five years: 4)
H-INDEX: 7 (five years: 0)

2021
Author(s):  
Robert Shcherbakov

Earthquakes trigger subsequent earthquakes. They form clusters and swarms in space and in time. This is a direct manifestation of non-Poisson behavior in the occurrence of earthquakes, where earthquake magnitudes and time intervals between successive events are not independent and are influenced by past seismicity. As a result, the distribution of the number of earthquakes is no longer strictly Poisson, and the statistics of the largest events deviate from the generalized extreme value (GEV) distribution. In statistical seismology, the occurrence of earthquakes is typically approximated by a stochastic marked point process. Among different models, the epidemic-type aftershock sequence (ETAS) model is the most successful in reproducing several key aspects of seismicity. Recent analysis suggests that the ETAS model generates sequences of events that are not Poisson. This becomes important when ETAS-based models are used for earthquake forecasting (Shcherbakov et al., Nature Comms., 2019). In this work, I consider a Bayesian framework combined with the ETAS model to constrain the magnitudes of the largest expected aftershocks during a future forecasting time interval. This includes Markov chain Monte Carlo (MCMC) sampling of the posterior distribution of the ETAS parameters and computation of the Bayesian predictive distribution for the magnitudes of the largest expected events. To validate the forecasts, the statistical tests developed by the Collaboratory for the Study of Earthquake Predictability (CSEP) are reformulated for the Bayesian framework. In addition, I define and compute the Bayesian p-value to evaluate the consistency of the forecasted extreme earthquakes during each forecasting time interval. The Bayesian p-value gives the probability that the largest forecasted earthquake can be more extreme than the observed one. The suggested approach is applied to the 2019 Ridgecrest earthquake sequence to retrospectively forecast the occurrence of the largest aftershocks (Shcherbakov, JGR, 2021). The results indicate that the Bayesian approach combined with the ETAS model outperformed the approach based on the Poisson assumption, which uses the extreme value distribution and the Omori law.
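As an illustration of the kind of predictive check described above, the sketch below estimates a Bayesian p-value by Monte Carlo: for each posterior draw of the expected event count and b-value, it simulates a catalogue over the forecast window, records the largest magnitude, and reports the fraction of simulated maxima exceeding the observed one. This is only a minimal sketch, not the author's implementation: the Poisson count draw is a placeholder for the non-Poisson count distribution implied by ETAS, and the posterior samples here are fabricated.

```python
import numpy as np

def simulate_max_magnitude(n_events, b, m_min, rng):
    """Largest magnitude among n_events drawn from a Gutenberg-Richter
    (exponential) magnitude distribution above m_min."""
    if n_events == 0:
        return -np.inf
    beta = b * np.log(10.0)
    return (m_min + rng.exponential(1.0 / beta, size=n_events)).max()

def bayesian_p_value(posterior_samples, m_obs, m_min, rng):
    """Fraction of predictive maximum magnitudes exceeding the observed
    largest event; posterior_samples is an iterable of (expected_count, b)
    pairs, e.g. from MCMC over the ETAS parameters (hypothetical input)."""
    exceed = 0
    total = 0
    for expected_count, b in posterior_samples:
        # Placeholder count model: Poisson draw. The work described above
        # argues that ETAS implies a non-Poisson count distribution, which
        # would replace this line in a faithful implementation.
        n = rng.poisson(expected_count)
        exceed += simulate_max_magnitude(n, b, m_min, rng) > m_obs
        total += 1
    return exceed / total

# Toy usage with fabricated posterior samples (illustration only).
rng = np.random.default_rng(42)
samples = [(rng.gamma(5.0, 10.0), rng.normal(1.0, 0.05)) for _ in range(5000)]
print(bayesian_p_value(samples, m_obs=6.4, m_min=3.0, rng=rng))
```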


2021
Author(s):  
Álvaro González

Statistical seismology relies on earthquake catalogs that are as homogeneous and complete as possible. However, heterogeneities in how earthquake data are compiled and reported are common and frequently go unnoticed.

The Global Centroid Moment Tensor Catalog (www.globalcmt.org) is considered the most homogeneous global database of large and moderate earthquakes since 1976, and it has been used for developing and testing global and regional forecast models.

Changes in the method used for calculating the moment tensors (along with improvements in global seismological monitoring) define four eras in the catalog (1976, 1977-1985, 1986-2003, and 2004-present). Improvements are particularly stark since 2004, when intermediate-period surface waves started to be used for calculating the centroid solutions.

Fixed centroid depths, used when the solution for a free depth did not converge, have followed diverse criteria depending on the era. Depth had to be fixed mainly for shallow earthquakes, so this issue is more common, e.g., in the shallow parts of subduction zones than in the deep ones. Until 2003, 53% of the centroids had depths calculated as a free parameter, compared with 78% since 2004.

Rake values have not been calculated homogeneously either. Until 2003, the vertical-dip-slip components of the moment tensor were assumed to be null when they could not be constrained by the inversion (for 3.3% of the earthquakes). This caused an excess of pure focal mechanisms: rakes of -90° (normal), 0° or ±180° (strike-slip), or +90° (thrust). Even disregarding such events, the rake histograms until 2003 and since 2004 are not equivalent to each other.

The magnitude of completeness (Mc) of the catalog is analyzed here separately for each era. It clearly improved over time (average Mc values being ~6.4 in 1976, ~5.7 in 1977-1985, ~5.4 in 1986-2003, and ~5.0 since 2004). Maps of Mc for different eras show significant spatial variations.
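A minimal sketch of how the magnitude of completeness could be estimated per era is given below, using the common maximum-curvature method (most populated magnitude bin plus an empirical correction). This is not necessarily the method used in the work above; the era boundaries follow the text, while the catalogue format, a list of (year, magnitude) tuples, is a hypothetical input.

```python
import numpy as np

def completeness_magnitude_maxc(magnitudes, bin_width=0.1, correction=0.2):
    """Magnitude of completeness by the maximum-curvature method: the most
    populated magnitude bin, plus the empirical correction that is often
    added to avoid underestimating Mc."""
    mags = np.asarray(magnitudes)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)] + correction

def mc_by_era(catalog):
    """Estimate Mc separately for the four eras named in the abstract.
    `catalog` is a hypothetical list of (year, magnitude) tuples."""
    eras = [("1976", 1976, 1976), ("1977-1985", 1977, 1985),
            ("1986-2003", 1986, 2003), ("2004-present", 2004, 9999)]
    results = {}
    for name, y0, y1 in eras:
        mags = [m for year, m in catalog if y0 <= year <= y1]
        if mags:
            results[name] = completeness_magnitude_maxc(mags)
    return results
```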


Author(s):  
David A. Rhoades
Annemarie Christophersen
Sebastian Hainzl

Author(s):  
Marcus Herrmann
Warner Marzocchi

Abstract Earthquake catalogs describe the distribution of earthquakes in space, time, and magnitude, which is essential information for earthquake forecasting and the assessment of seismic hazard and risk. Available high-resolution (HR) catalogs raise the expectation that their abundance of small earthquakes will help better characterize the fundamental scaling laws of statistical seismology. Here, we investigate whether the ubiquitous exponential-like scaling relation for magnitudes (Gutenberg–Richter [GR], or its tapered version) can be straightforwardly extrapolated to the magnitude–frequency distribution (MFD) of HR catalogs. For several HR catalogs, such as those of the 2019 Ridgecrest sequence, the 2009 L’Aquila sequence, the 1992 Landers sequence, and the whole of southern California, we determine whether the MFD agrees with an exponential-like distribution using a statistical goodness-of-fit test. We find that HR catalogs usually do not preserve the exponential-like MFD toward low magnitudes and depart from it. Surprisingly, HR catalogs that are based on advanced detection methods depart from an exponential-like MFD at a similar magnitude level as network-based HR catalogs. These departures are mostly due to an improper mixing of different magnitude types, spatiotemporally inhomogeneous completeness, or biased data recording or processing. Remarkably, common-practice methods to find the completeness magnitude do not recognize these departures and lead to severe bias in the b-value estimation. We conclude that extrapolating the exponential-like GR relation to lower magnitudes cannot be taken for granted, and that HR catalogs pose subtle new challenges and lurking pitfalls that may hamper their proper use. The simplest solution to preserve the exponential-like distribution toward low magnitudes may be to estimate a moment magnitude for each earthquake.
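To make the testing idea concrete, the sketch below combines the standard Aki/Utsu maximum-likelihood b-value estimator with a simple Kolmogorov–Smirnov comparison of the magnitudes above a trial completeness magnitude against the fitted exponential. This is a generic illustration rather than the authors' goodness-of-fit test; binned magnitudes and parameters estimated from the same data both make the reported p-value only approximate.

```python
import numpy as np
from scipy import stats

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c, with the
    usual half-bin correction for magnitudes reported in bins of width dm."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def gr_goodness_of_fit(magnitudes, m_c, dm=0.1):
    """Rough consistency check of the exponential (Gutenberg-Richter) form
    above m_c: a Kolmogorov-Smirnov test against the fitted exponential.
    The p-value is only approximate because the rate is estimated from the
    same data and real catalog magnitudes are binned."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    b = b_value_mle(m, m_c, dm)
    beta = b * np.log(10.0)
    ks = stats.kstest(m - m_c, "expon", args=(0.0, 1.0 / beta))
    return b, ks.statistic, ks.pvalue
```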


2020
Vol 91 (4)
pp. 2330-2342
Author(s):
Arnaud Mignan
Marco Broccardo

Abstract In the last few years, deep learning has solved seemingly intractable problems, boosting the hope to find approximate solutions to problems that now are considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. We reviewed the literature of artificial neural network (ANN) applications for earthquake prediction (77 articles, 1994–2019 period) and found two emerging trends: an increasing interest in this domain over time and a complexification of ANN models toward deep learning. Despite the relatively positive results claimed in those studies, we verified that far simpler (and traditional) models seem to offer similar predictive powers, if not better ones. Those include an exponential law for magnitude prediction and a power law (approximated by a logistic regression or one artificial neuron) for aftershock prediction in space. Because of the structured, tabulated nature of earthquake catalogs, and the limited number of features so far considered, simpler and more transparent machine-learning models than ANNs seem preferable at the present stage of research. Those baseline models follow first physical principles and are consistent with the known empirical laws of statistical seismology (e.g., the Gutenberg–Richter law), which are already known to have minimal abilities to predict large earthquakes.
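For reference, the exponential-law baseline mentioned above amounts to the Gutenberg–Richter exceedance probability. A minimal sketch, assuming an illustrative b-value of 1.0:

```python
def gr_exceedance_probability(m, m_min, b=1.0):
    """Exponential (Gutenberg-Richter) baseline: probability that an event
    above the completeness magnitude m_min reaches at least magnitude m."""
    return 10.0 ** (-b * (m - m_min))

# With a typical b-value of 1.0, an event above M3 has about a 1-in-1000
# chance of reaching at least M6.
print(gr_exceedance_probability(6.0, 3.0))
```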


2020
Author(s):
Arnaud Mignan
Marco Broccardo

In the last few years, deep learning has solved seemingly intractable problems, boosting the hope to find approximate solutions to problems that now are considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. We reviewed the literature of artificial neural network (ANN) applications for earthquake prediction (77 articles, 1994-2019 period) and found two emerging trends: an increasing interest in this domain over time, and a complexification of ANN models towards deep learning. Despite the relatively positive results claimed in those studies, we verified that far simpler (and traditional) models seem to offer similar predictive powers, if not better ones. Those include an exponential law for magnitude prediction, and a power law (approximated by a logistic regression or one artificial neuron) for aftershock prediction in space. Due to the structured, tabulated nature of earthquake catalogues, and the limited number of features so far considered, simpler and more transparent machine learning models than ANNs seem preferable at the present stage of research. Those baseline models follow first physical principles and are consistent with the known empirical laws of Statistical Seismology (e.g. the Gutenberg-Richter law), which are already known to have minimal abilities to predict large earthquakes.
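The "one artificial neuron" baseline for aftershock prediction in space is essentially a single logistic unit applied to one feature per grid cell. The sketch below illustrates that idea with a fabricated covariate and fabricated labels; it does not reproduce the feature set or data used in the reviewed studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One logistic unit ("one artificial neuron") mapping a single scalar
# feature per grid cell to the probability of at least one aftershock.
# Both the stress-like feature and the labels below are fabricated for
# illustration; they are not the data used in the reviewed studies.
rng = np.random.default_rng(0)
n_cells = 2000
feature = rng.normal(size=n_cells)                       # fabricated covariate
p_true = 1.0 / (1.0 + np.exp(-(2.0 * feature - 0.5)))    # fabricated ground truth
labels = rng.random(n_cells) < p_true                    # cell contains aftershock?

model = LogisticRegression()
model.fit(feature.reshape(-1, 1), labels)
print(model.predict_proba(np.array([[-1.0], [0.0], [1.0]])))
```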


2019
Vol 91 (1)
pp. 153-173
Author(s):
Andrew J. Michael
Sara K. McBride
Jeanne L. Hardebeck
Michael Barall
Eric Martinez
...  

Abstract The U.S. Geological Survey (USGS) has developed a national capability for aftershock forecasting after significant earthquakes. Use of this capability began in August 2018, and the 30 November 2018 Mw 7.1 Anchorage, Alaska, earthquake provided the first opportunity to apply this capability to a damaging earthquake in an urban area of the United States of America and observe how the forecast was discussed in the media. During this sequence, the forecasts were issued by a seismologist using interactive software that implements the Reasenberg and Jones (1989) model as updated in Page et al. (2016). The forecasts are communicated with a tiered template that provides basic information first before providing a more detailed numerical forecast, and they are posted on the mainshock’s event page on the USGS earthquake program website. Experience from the Anchorage sequence showed that the process worked well, and the first forecast was issued only 54 min after the mainshock occurred. Updates over the coming days, weeks, and months adapted the forecast model from the initial generic parameters for the seismotectonic region to Bayesian and sequence‐specific models. Media reports accurately conveyed the forecast, demonstrating that the forecast template was successful, except for a few reports that incorrectly merged the probability of one or more events in a given time–magnitude window with the likely range of the number of events. Changes to the template have been made to prevent that confusion in the future. We also released a special report on the possible duration of the sequence to assist in the federal disaster declaration and assistance process. Both our standard forecasts and this special report would benefit from more rapid determination of a sequence‐specific decay rate.
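For context, the Reasenberg and Jones (1989) model referred to above gives the rate of aftershocks of magnitude at least M at time t after a mainshock of magnitude Mm as lambda(t, M) = 10^(a + b(Mm - M)) (t + c)^(-p). The sketch below is a minimal illustration of a forecast probability under this model, assuming a Poisson process and using widely quoted generic California parameter values rather than the Bayesian or sequence-specific values used in the forecasts described above.

```python
import numpy as np
from scipy import integrate

def rj_rate(t, m_min, m_mainshock, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones (1989) rate (events per day) of aftershocks with
    magnitude >= m_min at time t days after a mainshock of magnitude
    m_mainshock. Defaults are often-quoted generic California values; real
    forecasts update such generic parameters as the sequence unfolds."""
    return 10.0 ** (a + b * (m_mainshock - m_min)) * (t + c) ** (-p)

def probability_of_aftershock(t1, t2, m_min, m_mainshock):
    """Probability of one or more aftershocks of magnitude >= m_min in the
    window [t1, t2] days, assuming a Poisson process with the above rate."""
    n_expected, _ = integrate.quad(rj_rate, t1, t2, args=(m_min, m_mainshock))
    return 1.0 - np.exp(-n_expected)

# e.g. chance of at least one M >= 5 aftershock in the first week after an
# M 7.1 mainshock, under the generic parameters
print(probability_of_aftershock(0.0, 7.0, 5.0, 7.1))
```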



Author(s):  
Filippos Vallianatos
Georgios Michas
Giorgos Papadakis
