Variability of ETAS Parameters in Global Subduction Zones and Applications to Mainshock–Aftershock Hazard Assessment

2020
Vol 110 (1)
pp. 191-212
Author(s):
Lizhong Zhang
Maximilian J. Werner
Katsuichiro Goda

ABSTRACT Megathrust earthquake sequences can impact buildings and infrastructure due not only to the mainshock but also to the triggered aftershocks along the subduction interface and in the overriding crust. To give realistic ranges of aftershock simulations in regions with limited data and to provide time-dependent seismic hazard information immediately after a future giant earthquake, we assess the variability of the epidemic-type aftershock sequence (ETAS) model parameters in subduction zones that have experienced M≥7.5 earthquakes, comparing estimates from long time windows with those from individual sequences. Our results show that the ETAS parameters are more robust if estimated from a long catalog than from individual sequences, given that individual sequences contain fewer data, including missing early aftershocks. Considering known biases of the parameters (due to model formulation, the isotropic spatial aftershock distribution, and finite-size effects of catalogs), we conclude that the variability of the ETAS parameters that we observe from robust estimates is not significant, either across different subduction-zone regions or as a function of maximum observed magnitude. We also find that the ETAS parameters do not change when multiple M 8.0–9.0 events are included in a region, mainly because an M 9.0 sequence dominates the number of events in the catalog. Based on the ETAS parameter estimates in the long time window, we propose a set of ETAS parameters for future M 9.0 sequences for aftershock hazard assessment (K0=0.04±0.02, α=2.3, c=0.03±0.01, p=1.21±0.08, γ=1.61±0.29, d=23.48±18.17, and q=1.68±0.55). Synthetic catalogs created with the suggested ETAS parameters show good agreement with three observed M≥8.8 sequences since 1965 (the 2004 M 9.1 Aceh–Andaman earthquake, the 2010 M 8.8 Maule earthquake, and the 2011 M 9.0 Tohoku earthquake).
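As a rough illustration of how the proposed parameter set translates into aftershock rates, the sketch below evaluates only the temporal (modified Omori) part of the ETAS intensity for an M 9.0 mainshock. The cutoff magnitude Mc and the normalization convention for K0 are assumptions made for illustration, not taken from the paper; ETAS formulations differ, so absolute rates are indicative only.

```python
import numpy as np

# Sketch only: temporal (modified Omori) part of the ETAS intensity for an
# M 9.0 mainshock, using the parameter set proposed in the abstract.
# Mc and the K0 normalization convention are assumptions for illustration.
K0, alpha, c, p = 0.04, 2.3, 0.03, 1.21   # proposed temporal parameters
M_main, Mc = 9.0, 5.0                     # mainshock magnitude; Mc assumed

def omori_rate(t_days):
    """Aftershock rate lambda(t) = K0 * exp(alpha * (M_main - Mc)) * (t + c)^-p."""
    return K0 * np.exp(alpha * (M_main - Mc)) * (t_days + c) ** (-p)

for t in [0.1, 1.0, 10.0, 100.0]:         # days after the mainshock
    print(f"t = {t:6.1f} d   rate ~ {omori_rate(t):9.2f} events/day (M >= {Mc})")
```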

2008
Vol 10 (2)
pp. 153-162
Author(s):
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, whose mode is the best-fit parameter set. Parameter stability is investigated by adding new data stepwise to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that several tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensating behavior for temporal violations of specific model assumptions.
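A minimal sketch of the calibration machinery described here, using random-walk Metropolis on a toy one-parameter current model; the study's surf-zone model, data, and priors are not reproduced, and all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the calibration problem: sample the posterior of a single
# friction-type parameter theta in a model v = theta * f (f = synthetic
# forcing) from synthetic "observed" currents, via random-walk Metropolis.
f = rng.uniform(0.2, 1.0, 50)                  # synthetic forcing
v_obs = 0.8 * f + rng.normal(0, 0.05, 50)      # synthetic observations

def log_post(theta, sigma=0.05):
    resid = v_obs - theta * f
    return -0.5 * np.sum((resid / sigma) ** 2)  # flat prior assumed

theta, chain = 0.5, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

post = np.array(chain[1000:])                  # discard burn-in
print(f"posterior mean = {post.mean():.3f}, sd = {post.std():.3f}")
```

Stability and consistency checks then amount to rerunning such a chain on growing or disjoint data windows and comparing the resulting posteriors.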


1991
Vol 18 (2)
pp. 320-327
Author(s):
Murray A. Fitch
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to produce good one-day- and two-day-ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
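A bare-bones scalar Kalman filter illustrating the predict/update cycle behind such flow forecasts. The transition coefficient, noise variances, and observations below are invented, and neither the Sturgeon River model nor the MISP algorithm is reproduced.

```python
import numpy as np

# One-dimensional Kalman filter sketch: the state x is the river flow,
# propagated by a hypothetical linear persistence model and corrected by
# noisy gauge observations z.
a, q_var, r_var = 0.95, 4.0, 9.0   # assumed transition, process/obs noise
x, P = 100.0, 25.0                 # initial flow estimate and variance

def kf_step(x, P, z):
    # predict one step ahead with the linear model
    x_pred, P_pred = a * x, a * P * a + q_var
    # update with the new observation
    K = P_pred / (P_pred + r_var)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

for z in [102.0, 110.0, 108.0, 115.0]:
    x, P = kf_step(x, P, z)
    print(f"obs = {z:6.1f}   filtered flow = {x:6.1f}   var = {P:5.2f}")
```

Multi-day forecasts correspond to iterating the predict step without updates, which is where a purely linear model loses skill.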


2011
Vol 64 (S1)
pp. S3-S18
Author(s):
Yuanxi Yang
Jinlong Li
Junyi Xu
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial for increasing the number of observable satellites, alleviating the effects of systematic errors, and improving the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models, as well as the estimation principle for PNT, may differ. Therefore, the commonly used definition of "dilution of precision" (DOP), based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges, which reflects the geometric strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which the a priori information of the model parameters is taken into account; this is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. The results show that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information on the model parameters will also reduce the DOP.
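For reference, the classical LS-based DOP that the first generalised DOP reduces to can be computed from the design matrix alone. The satellite geometry in this sketch is made up for illustration.

```python
import numpy as np

# Classical (least-squares) DOP: rows of G are negated unit line-of-sight
# vectors from receiver to satellite, augmented with 1 for the clock term.
# GDOP = sqrt(trace((G^T G)^-1)). Satellite directions below are invented.
los = np.array([
    [ 0.0,  0.0, 1.0],
    [ 0.7,  0.0, 0.714],
    [-0.7,  0.0, 0.714],
    [ 0.0,  0.7, 0.714],
    [ 0.0, -0.7, 0.714],
])
los /= np.linalg.norm(los, axis=1, keepdims=True)
G = np.hstack([-los, np.ones((len(los), 1))])

Q = np.linalg.inv(G.T @ G)           # cofactor matrix of the LS solution
gdop = np.sqrt(np.trace(Q))          # geometric DOP
pdop = np.sqrt(np.trace(Q[:3, :3]))  # position-only DOP
print(f"GDOP = {gdop:.2f}, PDOP = {pdop:.2f}")
```

The generalised DOPs of the paper replace the implicit LS influence function in this computation with robust or Bayesian alternatives, and enlarge G with inter-system bias parameters.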


Author(s):
Arnaud Dufays
Elysee Aristide Houndetoungan
Alain Coën

Abstract Change-point (CP) processes offer a flexible approach to modelling long time series. We propose a method to uncover which model parameters truly vary when a CP is detected. Given a set of breakpoints, we use a penalized likelihood approach to select the best set of parameters that changes over time, and we prove that the penalty function leads to a consistent selection of the true model. Estimation is carried out via the deterministic annealing expectation-maximization algorithm. Our method accounts for model selection uncertainty and associates a probability with each possible time-varying parameter specification. Monte Carlo simulations show that the method works well for many time series models, including heteroskedastic processes. For a sample of fourteen hedge fund (HF) strategies, using an asset-based style pricing model, we shed light on the promising ability of our method to detect the time-varying dynamics of risk exposures and to forecast HF returns.
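A simplified, BIC-style illustration of the selection problem: given one known breakpoint, score each hypothesis about which Gaussian parameters (mean, variance, both, or neither) actually change. The paper's penalty function and deterministic-annealing EM estimation are more sophisticated than this sketch, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a shift in the mean only, at a known breakpoint.
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
cp = 200

def gauss_ll(resid, sd):
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (resid / sd) ** 2)

def penalized_score(mean_varies, var_varies):
    segs = [y[:cp], y[cp:]]
    mus = [s.mean() for s in segs] if mean_varies else [y.mean()] * 2
    resid = [s - m for s, m in zip(segs, mus)]
    sds = ([np.sqrt(np.mean(r**2)) for r in resid] if var_varies
           else [np.sqrt(np.mean(np.concatenate(resid) ** 2))] * 2)
    ll = sum(gauss_ll(r, sd) for r, sd in zip(resid, sds))
    k = (2 if mean_varies else 1) + (2 if var_varies else 1)  # parameter count
    return ll - 0.5 * k * np.log(y.size)                      # BIC-type penalty

for spec in [(False, False), (True, False), (False, True), (True, True)]:
    print(f"mean varies={spec[0]!s:5}  var varies={spec[1]!s:5}  "
          f"score={penalized_score(*spec):8.1f}")
```

The mean-only specification should win here: letting the variance change too buys almost no likelihood but pays an extra penalty, which is exactly the consistency property the paper proves for its penalty.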


1981
Vol 240 (5)
pp. R259-R265
Author(s):
J. J. DiStefano

Design of optimal blood sampling protocols for kinetic experiments is discussed and evaluated, with the aid of several examples, including an endocrine system case study. The criterion of optimality is maximum accuracy of kinetic model parameter estimates. A simple example illustrates why a sequential experiment approach is required: optimal designs depend on the true model parameter values, knowledge of which is usually a primary objective of the experiment, as well as on the structure of the model and the measurement error (e.g., assay) variance. The methodology is evaluated from the results of a series of experiments designed to quantify the dynamics of distribution and metabolism of three iodothyronines: T3, T4, and reverse-T3. This analysis indicates that 1) the sequential optimal experiment approach can be effective and efficient in the laboratory, 2) it works in the presence of reasonably controlled biological variation, producing sufficiently robust sampling protocols, and 3) optimal designs can be highly efficient in practice, requiring for maximum accuracy a number of blood samples equal to the number of independently adjustable model parameters, no more and no fewer.
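A toy D-optimal design calculation in the spirit of this approach: for an assumed mono-exponential kinetic model with two adjustable parameters, choose the pair of sampling times maximizing the determinant of the Fisher information matrix. The model, parameter values, and assay variance are invented; note how the optimum depends on the assumed true values, which is what motivates the sequential approach.

```python
import numpy as np
from itertools import combinations

# Hypothetical tracer model y = A * exp(-k * t); Fisher information under
# constant assay variance is F = J^T J / sigma^2, J the sensitivity matrix.
A, k, sigma = 10.0, 0.5, 0.2            # assumed "true" parameter values
candidates = np.arange(0.5, 12.5, 0.5)  # candidate sampling times (h)

def fim(times):
    t = np.asarray(times)
    J = np.column_stack([np.exp(-k * t),            # dy/dA
                         -A * t * np.exp(-k * t)])  # dy/dk
    return J.T @ J / sigma**2

best = max(combinations(candidates, 2), key=lambda ts: np.linalg.det(fim(ts)))
print("D-optimal 2-point schedule (h):", best)
```

Two parameters, two samples: the sketch reproduces in miniature the abstract's point that maximum accuracy needs only as many samples as independently adjustable parameters.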


2014
Vol 11 (1)
pp. 1253-1300
Author(s):
Z. He
F. Tian
H. C. Hu
H. V. Gupta
H. P. Hu

Abstract. Hydrological modeling depends on single- or multiple-objective strategies for parameter calibration using long time sequences of observed streamflow. Here, we demonstrate a diagnostic approach to the calibration of a hydrological model of an alpine area in which we partition the hydrograph based on the dominant runoff generation mechanism (groundwater baseflow, glacier melt, snowmelt, and direct runoff). The partitioning reflects the spatiotemporal variability in snowpack, glaciers, and temperature. Model parameters are grouped by runoff generation mechanism, and each group is calibrated separately via a stepwise approach. This strategy helps to reduce the problem of equifinality and, hence, model uncertainty. We demonstrate the method for the Tailan River basin (1324 km2) in the Tianshan Mountains of China with the help of a semi-distributed hydrological model (THREW).
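A schematic of the stepwise, mechanism-partitioned calibration idea, with synthetic data and a trivial stand-in "model" per mechanism; the actual THREW model and its partitioning rules are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: each day is labelled by its dominant runoff mechanism,
# and each mechanism's single hypothetical parameter is calibrated only
# against its own part of the hydrograph, one group at a time.
labels = rng.choice(["baseflow", "snowmelt"], size=100, p=[0.6, 0.4])
q_obs = np.where(labels == "baseflow",
                 rng.normal(5.0, 0.5, 100), rng.normal(20.0, 3.0, 100))

def calibrate(mechanism, grid):
    """Grid-search a parameter against the sub-hydrograph of one mechanism."""
    obs = q_obs[labels == mechanism]
    sse = [np.sum((obs - g) ** 2) for g in grid]
    return grid[int(np.argmin(sse))]

theta_base = calibrate("baseflow", np.linspace(1.0, 10.0, 91))    # step 1
theta_melt = calibrate("snowmelt", np.linspace(10.0, 30.0, 201))  # step 2
q_sim = np.where(labels == "baseflow", theta_base, theta_melt)
rmse = np.sqrt(np.mean((q_obs - q_sim) ** 2))
print(f"theta_base = {theta_base:.2f}, theta_melt = {theta_melt:.2f}, "
      f"RMSE = {rmse:.2f}")
```

Calibrating each group against only "its" signal is what shrinks the feasible parameter space and mitigates equifinality.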


2015
Vol 57 (6)
Author(s):
Maura Murru
Jiancang Zhuang
Rodolfo Console
Giuseppe Falcone

In this paper, we compare the forecasting performance of several statistical models used to describe the occurrence of earthquakes, in forecasting short-term earthquake probabilities during the 2009 L'Aquila earthquake sequence in central Italy. These models include the Proximity to Past Earthquakes (PPE) model and two versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. Both ETAS models work better than the PPE model. However, comparing the two ETAS variants, the one with the same fixed exponent coefficient α = 2.3 for both the productivity function and the scaling factor in the spatial response function (ETAS I) performs better in forecasting the active aftershock sequence than the model with different exponent coefficients (ETAS II) when the Poisson score is adopted. ETAS II performs better when a lower magnitude threshold of 2.0 and the binomial score are used. The reason is that the catalog does not contain an event of magnitude similar to the L'Aquila mainshock (Mw 6.3) in the training period (April 16, 2005 to March 15, 2009), so the α-value is underestimated, and thus the forecast seismicity is underestimated when the productivity function is extrapolated to high magnitudes. We also investigate the effect of the inclusion of small events in forecasting larger events. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitudes similar to the mainshock when forecasting seismicity during an aftershock sequence.
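The Poisson-score comparison can be sketched as a binned log-likelihood ratio per observed earthquake; the forecast rates and observed counts below are invented placeholders, not the L'Aquila forecasts.

```python
import numpy as np
from scipy.special import gammaln

# Each model supplies expected counts per space-time bin; the information
# gain of model A over model B is their Poisson log-likelihood difference
# divided by the number of observed events.
obs    = np.array([0,   2,   1,   0,   5,   3  ])   # observed events per bin
rate_A = np.array([0.1, 1.8, 0.9, 0.2, 4.5, 2.8])   # model A forecast
rate_B = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])   # model B forecast

def poisson_ll(rate, n):
    return np.sum(n * np.log(rate) - rate - gammaln(n + 1))

ig = (poisson_ll(rate_A, obs) - poisson_ll(rate_B, obs)) / obs.sum()
print(f"information gain of A over B: {ig:.3f} per earthquake")
```

A positive gain means model A concentrates probability where earthquakes actually occurred; the binomial score used in the paper instead evaluates occurrence versus non-occurrence per bin.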


2021
Vol 11 (1)
Author(s):
Yang Liu
Penghao Wang
Melissa L. Thomas
Dan Zheng
Simon J. McKirdy

Abstract Invasive species can lead to community-level damage to the invaded ecosystem and extinction of native species. Most surveillance systems for the detection of invasive species are developed based on expert assessment, and therefore inherently carry a level of uncertainty. In this research, info-gap decision theory (IGDT) is applied to model and manage such uncertainty. Surveillance of the Asian House Gecko, Hemidactylus frenatus Duméril and Bibron, 1836, on Barrow Island is used as a case study. Our research provides a novel method for applying IGDT to determine the population threshold (K) so that the decision is robust to the deep uncertainty present in model parameters. We further robust-optimize, rather than simply minimize, surveillance costs. We demonstrate that increasing the population threshold for detection increases both robustness to errors in the model parameter estimates and opportuneness for surveillance costs lower than the accepted maximum budget. This paper provides guidance for decision makers to balance robustness against required surveillance expenditure. IGDT offers a novel method to model and manage the uncertainty prevalent in biodiversity conservation practice and modelling. The method outlined here can be used to design robust surveillance systems for invasive species in a wider context, and to better tackle uncertainty in the protection of biodiversity and native species in a cost-effective manner.
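A stylized info-gap calculation reproducing the qualitative trade-off reported here: robustness to errors in a nominal detection rate grows as the detection threshold K is raised. The detection model, budget, and costs are all invented; the Barrow Island analysis is far more detailed.

```python
import numpy as np

# Info-gap sketch: surveillance cost to detect a population of size K depends
# on an uncertain per-trap detection rate r. Robustness h_hat of a design is
# the largest fractional error in the nominal rate r_tilde for which the
# worst-case cost still meets the budget. All numbers are hypothetical.
r_tilde, budget, cost_per_trap = 0.05, 50_000.0, 100.0

def traps_needed(K, r):
    # traps so that P(miss) = (1 - r)^(traps * K) <= 0.05  (toy model)
    return np.ceil(np.log(0.05) / (K * np.log(1 - r)))

def robustness(K, h_grid=np.linspace(0.0, 0.99, 500)):
    ok = [h for h in h_grid
          if cost_per_trap * traps_needed(K, r_tilde * (1 - h)) <= budget]
    return max(ok) if ok else 0.0

for K in [1, 2, 5, 10]:
    print(f"threshold K = {K:2d}   robustness h_hat = {robustness(K):.2f}")
```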


2020
Vol 17 (173)
pp. 20200886
Author(s):
L. Mihaela Paun
Mitchel J. Colebank
Mette S. Olufsen
Nicholas A. Hill
Dirk Husmeier

This study uses Bayesian inference to quantify the uncertainty of model parameters and haemodynamic predictions in a one-dimensional pulmonary circulation model, based on an integration of mouse haemodynamic and micro-computed tomography imaging data. We emphasize an often neglected though important source of uncertainty: discrepancy between the model and reality (model-form error) and misspecification of the measurement noise model, jointly called "model mismatch". We demonstrate that minimizing the mean squared error between the measured and the predicted data (the conventional method) in the presence of model mismatch leads to biased and overly confident parameter estimates and haemodynamic predictions. We show that our proposed method, which allows for model mismatch represented with Gaussian processes, corrects the bias. Additionally, we compare a linear and a nonlinear wall model, as well as models with different vessel stiffness relations. We use formal model selection analysis based on the Watanabe–Akaike information criterion to select the model that best predicts the pulmonary haemodynamics. Results show that the nonlinear pressure–area relationship with stiffness dependent on the unstressed radius best predicts the data measured in a control mouse.
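A toy demonstration of the model-mismatch point: fit a linear model to a smoothly nonlinear "reality", then fit an RBF Gaussian process to the residuals to expose the structured discrepancy that plain least squares silently absorbs into a biased parameter. The quadratic truth and the kernel hyperparameters are assumptions; this is a stand-in for the paper's 1-D haemodynamics setting, not its method.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.8 * x**2 + rng.normal(0, 0.05, x.size)  # "reality" has an x^2 term

theta = (x @ y) / (x @ x)          # least-squares slope, biased by the x^2 term
resid = y - theta * x

# RBF GP posterior mean on the residuals (assumed signal sd 0.3, length 0.2,
# noise sd 0.05): structure here signals model-form error, not pure noise.
sq = (x[:, None] - x[None, :]) ** 2
K = 0.3**2 * np.exp(-sq / (2 * 0.2**2))
post_mean = K @ np.linalg.solve(K + 0.05**2 * np.eye(x.size), resid)

explained = 1 - np.var(resid - post_mean) / np.var(resid)
print(f"LS slope = {theta:.2f} (true linear part 2.0)")
print(f"GP explains {100 * explained:.0f}% of residual variance "
      "-> structured mismatch")
```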

