Event-scale power law recession analysis: Quantifying methodological uncertainty

2016 ◽  
Author(s):  
David N. Dralle ◽  
Nathaniel J. Karst ◽  
Kyriakos Charalampous ◽  
Sally E. Thompson

Abstract. The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power-law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power-law recession model. We show that: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices.

2017 ◽  
Vol 21 (1) ◽  
pp. 65-81 ◽  
Author(s):  
David N. Dralle ◽  
Nathaniel J. Karst ◽  
Kyriakos Charalampous ◽  
Andrew Veenstra ◽  
Sally E. Thompson

Abstract. The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power-law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power-law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures that the study catchments span a wide range of recession behaviors and wetness states, which is ideal for a sensitivity analysis.
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. On the basis of these results, we recommend a combination of four key methodological decisions that maximizes the quality of fitted recession curves and minimizes bias in the related populations of fitted recession parameters.
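The model at the center of this analysis is the power-law recession relationship, commonly written as -dQ/dt = a·Q^b. As a minimal, hypothetical sketch of one event-scale fitting choice (ordinary least squares in log-log space, applied here to a synthetic event generated from the model itself rather than to the gauged records; this is not the authors' code, and every value is illustrative):

```python
import numpy as np

# Toy fit of the power-law recession model -dQ/dt = a * Q^b for a single
# event. All values (a, b, the event length) are illustrative.

def fit_recession(q, dt=1.0):
    """Estimate (a, b) from one recession time series q via log-log OLS."""
    dq_dt = -np.diff(q) / dt              # discharge is receding, so -dq/dt > 0
    q_mid = 0.5 * (q[:-1] + q[1:])        # midpoint discharge for each step
    # log(-dQ/dt) = log(a) + b * log(Q): a straight line in log-log space
    b, log_a = np.polyfit(np.log(q_mid), np.log(dq_dt), 1)
    return np.exp(log_a), b

# Synthetic event from the analytical solution of dQ/dt = -a Q^b (b != 1):
a_true, b_true, q0 = 0.1, 1.5, 10.0
t = np.arange(0.0, 20.0)
q = (q0 ** (1 - b_true) + a_true * (b_true - 1) * t) ** (1.0 / (1 - b_true))

a_hat, b_hat = fit_recession(q)           # recovers roughly (0.1, 1.5)
```

Every choice fixed here (the derivative approximation, the use of midpoint discharge, and the log-space objective) is precisely the kind of methodological decision whose influence the paper quantifies.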


2002 ◽  
Vol 6 (5) ◽  
pp. 883-898 ◽  
Author(s):  
K. Engeland ◽  
L. Gottschalk

Abstract. This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the autoregressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties.
Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov chain Monte Carlo analysis
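The MCMC machinery described above can be illustrated with a toy random-walk Metropolis sampler. Here a one-parameter linear model stands in for Ecomag, and the simple Gaussian error likelihood (no AR(1) term) is assumed; all names and settings are illustrative, not the authors':

```python
import numpy as np

# Toy random-walk Metropolis sampler for the posterior of one parameter.
# The model y = theta * x + noise stands in for a rainfall-runoff model;
# the likelihood is the "simple" Gaussian error model (no AR(1) term).

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
theta_true, sigma = 2.0, 0.1
y = theta_true * x + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    # Flat prior, so the log-posterior is the Gaussian log-likelihood
    # of the simulation errors (up to a constant).
    resid = y - theta * x
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                       # accept; otherwise keep current
    chain.append(theta)
post = np.array(chain[1000:])              # discard burn-in
```

The posterior sample concentrates near the true parameter; the study's full likelihood additionally estimates the statistical error parameters jointly with the hydrological ones, which this sketch omits.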


2011 ◽  
Vol 2011 ◽  
pp. 1-6 ◽  
Author(s):  
Ibrahim Suliman Hanaish ◽  
Kamarulzaman Ibrahim ◽  
Abdul Aziz Jemain

Three versions of the Bartlett Lewis rectangular pulse rainfall model, namely the Original Bartlett Lewis (OBL), Modified Bartlett Lewis (MBL), and 2N-cell-type Bartlett Lewis (BL2n) models, are considered. These models are fitted to hourly rainfall data from 1970 to 2008 obtained from the Petaling Jaya rain gauge station, located in Peninsular Malaysia. The generalized method of moments is used to estimate the model parameters. Under this method, minimization of two different objective functions, which involve different weight functions (one weight inversely proportional to the variance and the other inversely proportional to the mean squared), is carried out using the Nelder-Mead optimization technique. For the purpose of comparing the performance of the three models, the results for the months of July and November are used for illustration. Performance is assessed based on the goodness of fit of the models. In addition, the sensitivity of the parameter estimates to the choice of objective function is also investigated. It is found that BL2n slightly outperforms OBL. However, the best model is the Modified Bartlett Lewis (MBL), particularly when the objective function involves the weight inversely proportional to the variance.
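A hedged sketch of the estimation machinery described above: a weighted method-of-moments objective minimized with Nelder-Mead. A one-parameter exponential depth model stands in for the Bartlett Lewis cluster models, whose moment expressions are far more involved; `scipy` is assumed to be available, and the weight choice mirrors the "inversely proportional to the mean squared" option only loosely:

```python
import numpy as np
from scipy.optimize import minimize

# Toy generalized method of moments: match the sample mean and variance
# of "observed" depths to the model's moments, with weights inversely
# proportional to the squared sample moments, minimized by Nelder-Mead.
# An exponential depth model replaces the Bartlett Lewis models here.

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=5000)
sample_moments = np.array([data.mean(), data.var()])

def model_moments(scale):
    # Exponential(scale): mean = scale, variance = scale**2
    return np.array([scale, scale ** 2])

def objective(params):
    scale = params[0]
    if scale <= 0.0:
        return np.inf                     # keep the search in the valid region
    diff = sample_moments - model_moments(scale)
    weights = 1.0 / sample_moments ** 2   # one of the two weight choices (toy)
    return float(np.sum(weights * diff ** 2))

res = minimize(objective, x0=[1.0], method="Nelder-Mead")
scale_hat = res.x[0]                      # close to the true scale of 2.0
```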


2018 ◽  
Vol 11 (8) ◽  
pp. 3313-3325 ◽  
Author(s):  
Alex G. Libardoni ◽  
Chris E. Forest ◽  
Andrei P. Sokolov ◽  
Erwan Monier

Abstract. For over 20 years, the Massachusetts Institute of Technology Earth System Model (MESM) has been used extensively for climate change research. The model is under continuous development with components being added and updated. To provide transparency in the model development, we perform a baseline evaluation by comparing model behavior and properties in the newest version to the previous model version. In particular, changes resulting from updates to the land surface model component and the input forcings used in historical simulations of climate change are investigated. We run an 1800-member ensemble of MESM historical climate simulations where the model parameters that set climate sensitivity, the rate of ocean heat uptake, and the net anthropogenic aerosol forcing are systematically varied. By comparing model output to observed patterns of surface temperature changes and the linear trend in the increase in ocean heat content, we derive probability distributions for the three model parameters. Furthermore, we run a 372-member ensemble of transient climate simulations where all model forcings are fixed and carbon dioxide concentrations are increased at the rate of 1 % year−1. From these runs, we derive response surfaces for transient climate response and thermosteric sea level rise as a function of climate sensitivity and ocean heat uptake. We show that the probability distributions shift towards higher climate sensitivities and weaker aerosol forcing when using the new model and that the climate response surfaces are relatively unchanged between model versions. Because the response surfaces are independent of the changes to the model forcings and similar between model versions with different land surface models, we suggest that the change in land surface model has limited impact on the temperature evolution in the model. Thus, we attribute the shifts in parameter estimates to the updated model forcings.


2011 ◽  
Vol 29 ◽  
pp. 51-59 ◽  
Author(s):  
L. Zhao ◽  
Q. Duan ◽  
J. Schaake ◽  
A. Ye ◽  
J. Xia

Abstract. This paper evaluates the performance of a statistical post-processor for imperfect hydrologic model forecasts. Assuming that the meteorological forecasts are well-calibrated, we employ a "General Linear Model (GLM)" to post-process simulations produced by a hydrologic model. For a particular forecast date, the observations and simulations from an "analysis window" and hydrologic model forecasts for a "forecast window", the GLM Post-Processor (GLMPP) is used to produce an ensemble of predictions of the streamflow observations that will occur during the "forecast window". The objectives of the GLMPP are to: (1) preserve any skill in the original hydrologic ensemble forecast; (2) correct systematic model biases; (3) retain the equal-likelihood assumption for the ensemble; (4) preserve temporal scale dependency relationships in streamflow hydrographs and the uncertainty in the predictions; and, (5) produce reliable ensemble predictions. Observed and simulated daily streamflow data from the Second Workshop on Model Parameter Estimation Experiment (MOPEX) are used to test how well these objectives are met when the GLMPP is applied to ensemble hydrologic forecasts driven by well calibrated meteorological forecasts. A 39-year hydrologic dataset from the French Broad basin is split into calibration and verification periods. The results show that the GLMPP built using data from the calibration period removes the mean bias when applied to hydrologic model simulations from both the calibration and verification periods. Probability distributions of the post-processed model simulations are shown to be closer to the climatological probability distributions of observed streamflow than the distributions of the unadjusted simulated flows. A number of experiments with different GLMPP configurations were also conducted to examine the effects of different configurations for forecast and analysis window lengths on the robustness of the results.
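The bias-correction idea behind the GLMPP can be reduced to its simplest univariate analogue: regress observed flow on simulated flow over a calibration period, then apply the fitted relation to later simulations. The real post-processor operates on joint analysis- and forecast-window vectors (typically after a normalizing transform), which this sketch omits; all data here are synthetic:

```python
import numpy as np

# Univariate analogue of GLM post-processing: fit obs = slope*sim + intercept
# on a calibration period, then correct simulations in a verification period.
# The simulated series carries a deliberate bias for illustration.

rng = np.random.default_rng(2)
obs = rng.gamma(shape=2.0, scale=5.0, size=400)            # "observed" flows
sim = 0.7 * obs + 6.0 + rng.normal(0.0, 1.0, obs.size)     # biased "model"

n_cal = 300                                # calibration / verification split
slope, intercept = np.polyfit(sim[:n_cal], obs[:n_cal], 1)
corrected = slope * sim[n_cal:] + intercept

raw_bias = float(np.mean(sim[n_cal:] - obs[n_cal:]))       # large positive
post_bias = float(np.mean(corrected - obs[n_cal:]))        # near zero
```

As in the study, the relation learned in the calibration period removes the mean bias in the verification period; the full GLMPP additionally preserves ensemble spread and temporal scale dependency, which no univariate regression can capture.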


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 502
Author(s):  
Hohyun Jung ◽  
Frederick Kin Hing Phoa

The degree distribution has attracted considerable attention from network scientists over the last few decades as a means of understanding the topological structure of networks. It is widely acknowledged that many real networks have power-law degree distributions. However, deviations from this behavior often appear in the range of small degrees. Worse, the conventional use of the continuous power-law distribution often yields inaccurate inference, since degrees are discrete-valued. To remedy these obstacles, we propose a finite mixture model of truncated zeta distributions for a broad range of degrees, one that departs from power-law behavior at small degrees while maintaining the scale-free behavior of the tail. A maximum likelihood algorithm, together with a model selection method, is presented to estimate the model parameters and the number of mixture components. The validity of the suggested algorithm is evidenced by Monte Carlo simulations. We apply our method to five disciplines of scientific collaboration networks, with notable interpretations. The proposed model outperforms the alternatives in terms of goodness of fit.
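The building block of the proposed mixture is the truncated zeta distribution, p(k) ∝ k^(-s) on a finite support. A minimal sketch of that pmf and a grid-search maximum likelihood fit for the exponent (the full method adds mixture weights, component-count selection, and a proper iterative algorithm, all omitted here; support size, sample size, and grid are illustrative):

```python
import numpy as np

# Truncated zeta pmf p(k) proportional to k^(-s) on k = 1..k_max, plus a
# grid-search MLE for the exponent s.

def trunc_zeta_pmf(k, s, k_max):
    support = np.arange(1, k_max + 1)
    return k ** (-s) / np.sum(support ** (-s))

def fit_exponent(degrees, k_max, grid=None):
    grid = np.linspace(1.1, 4.0, 300) if grid is None else grid
    # Maximize the log-likelihood over a one-dimensional grid of exponents.
    lls = [np.sum(np.log(trunc_zeta_pmf(degrees, s, k_max))) for s in grid]
    return grid[int(np.argmax(lls))]

# Draw synthetic "degrees" from the model and recover the exponent.
rng = np.random.default_rng(3)
k_max, s_true = 100, 2.2
support = np.arange(1, k_max + 1)
pmf = support ** (-s_true) / np.sum(support ** (-s_true))
degrees = rng.choice(support, size=3000, p=pmf)
s_hat = fit_exponent(degrees, k_max)       # close to s_true
```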


2021 ◽  
Author(s):  
Y. Curtis Wang ◽  
Nirvik Sinha ◽  
Johann Rudi ◽  
James Velasco ◽  
Gideon Idumah ◽  
...  

Experimental data-based parameter search for Hodgkin-Huxley-style (HH) neuron models is a major challenge for neuroscientists and neuroengineers. Current search strategies are often computationally expensive, are slow to converge, have difficulty handling nonlinearities or multimodalities in the objective function, or require good initial parameter guesses. Most important, many existing approaches lack quantification of uncertainties in parameter estimates even though such uncertainties are of immense biological significance. We propose a novel method for parameter inference and uncertainty quantification in a Bayesian framework using the Markov chain Monte Carlo (MCMC) approach. This approach incorporates prior knowledge about model parameters (as probability distributions) and aims to map the prior to a posterior distribution of parameters informed by both the model and the data. Furthermore, using the adaptive parallel tempering strategy for MCMC, we tackle the highly nonlinear, noisy, and multimodal loss function, which depends on the HH neuron model. We tested the robustness of our approach using the voltage trace data generated from a 9-parameter HH model using five levels of injected currents (0.0, 0.1, 0.2, 0.3, and 0.4 nA). Each test consisted of running the ground truth with its respective currents to estimate the model parameters. To simulate the condition for fitting a frequency-current (F-I) curve, we also introduced an aggregate objective that runs MCMC against all five levels simultaneously. We found that MCMC was able to produce many solutions with acceptable loss values (e.g., for 0.0 nA, 889 solutions were within 0.5% of the best solution and 1,595 solutions within 1% of the best solution). Thus, an adaptive parallel tempering MCMC search provides a "landscape" of the possible parameter sets with acceptable loss values in a tractable manner. 
Our approach is able to obtain an intelligently sampled global view of the solution distributions within a search range in a single computation. Additionally, the advantage of uncertainty quantification allows for exploration of further solution spaces, which can serve to better inform future experiments.
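The parallel tempering idea can be shown on a deliberately bimodal one-dimensional "loss landscape": chains at several temperatures run plain Metropolis updates, and occasional swap moves between neighbouring temperatures let the cold chain escape local modes, the same mechanism used against the multimodal HH loss. All temperatures, step sizes, and the toy target are illustrative:

```python
import numpy as np

# Parallel tempering on a bimodal target: Gaussians at -3 and +3. The cold
# chain (T = 1) would be trapped in one mode by plain Metropolis; swaps with
# hotter chains let it visit both.

rng = np.random.default_rng(5)

def log_target(x):
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]               # temperature ladder
x = [0.0] * len(temps)                     # one walker per temperature
cold = []
for _ in range(20000):
    # Metropolis update within each tempered chain (target^(1/T)).
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < (log_target(prop) - log_target(x[i])) / T:
            x[i] = prop
    # Propose swapping a random pair of neighbouring temperature levels.
    j = rng.integers(0, len(temps) - 1)
    delta = (1.0 / temps[j] - 1.0 / temps[j + 1]) * (
        log_target(x[j + 1]) - log_target(x[j]))
    if np.log(rng.uniform()) < delta:
        x[j], x[j + 1] = x[j + 1], x[j]
    cold.append(x[0])
post = np.array(cold[2000:])               # cold-chain samples, both modes
```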


2021 ◽  
Author(s):  
Srinivasa Murthy D ◽  
Aruna Jyothy S ◽  
Mallikarjuna P

Abstract The study aims at the probabilistic analysis of annual maximum daily streamflows at the gauging sites of Godavari upper, Godavari middle, Pranahitha, Indravathi and Godavari lower sub-basins. The daily streamflow data at Chass, Ashwi and Pachegaon of Godavari upper, Manjalegaon, Dhalegaon, Zari, GR Bridge, Purna and Yelli of Godavari middle, Gandlapet, Mancherial, Somanpally and Perur of Pranahitha, Pathagudem, Chindnar, Sonarpal, Jagdalpur and Nowrangpur of Indravathi, and, Sardaput, Injaram, Konta, Koida and Polavaram of Godavari lower sub-basins for the period varying between 1965–2011, collected from Central Water Commission (CWC), India were used in the analysis. Statistics of annual maximum daily streamflow series during the study period at the gauging sites of sub-basins indicated moderately varied and positively skewed streamflows, and flows with sharp peaks at the upstream gauging sites. Probabilistic analysis of streamflows showed that lognormal or gamma distribution with conventional moments fitted the maximum daily streamflow data at the gauging sites of Godavari sub-basins. Among 2-parameter distributions with L-moments, GPA2 followed by GAM2/LN2 fitted annual maximum daily streamflow data at most of the gauging sites. At the downstream-most gauging sites of Pranahitha, Indravathi and Godavari lower sub-basins, the data followed W2 probability distribution. Among 3-parameter distributions with L-moments, GPA3 at seven gauging sites, W3 and P3 at five gauging sites each, GLOG at four gauging sites and GEV at two gauging sites fitted the data. Based on the performance evaluation, 2-parameter distributions using L-moments at the upstream, 3-parameter distributions at the middle and probability distributions using conventional moments at the downstream gauging sites performed better in the Godavari upper and middle sub-basins.
Probability distributions based on conventional moments / 3-parameter distributions using L-moments fitted the annual maximum daily streamflow data at the gauging sites in the Pranahitha, Indravathi and Godavari lower sub-basins satisfactorily.
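The L-moments referred to above are routinely estimated from probability-weighted moments of the ordered sample. A minimal sketch of that computation (for an exponential distribution with mean m, theory gives lambda1 = m, lambda2 = m/2 and lambda3 = m/6, which makes a convenient self-check; the sample here is synthetic, not the CWC records):

```python
import numpy as np

# Sample L-moments via probability-weighted moments b0, b1, b2 of the
# ordered sample, the quantities behind 2- and 3-parameter L-moment fits.

def l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)                # ranks of the ordered sample
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    lam1 = b0                              # location
    lam2 = 2.0 * b1 - b0                   # scale
    lam3 = 6.0 * b2 - 6.0 * b1 + b0        # (unscaled) skewness
    return lam1, lam2, lam3

rng = np.random.default_rng(4)
sample = rng.exponential(scale=10.0, size=20000)
lam1, lam2, lam3 = l_moments(sample)       # roughly (10, 5, 10/6)
```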


2018 ◽  
Vol 140 (7) ◽  
Author(s):  
Ahmed Ramadan ◽  
Connor Boss ◽  
Jongeun Choi ◽  
N. Peter Reeves ◽  
Jacek Cholewicki ◽  
...  

Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
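The FIM-based selection can be illustrated on a toy three-parameter model, y(t) = a·exp(-b·t) + c: build the Fisher information matrix from numerical output sensitivities, then rank parameters by their Cramer-Rao variance bounds. The biomechanical head-tracking model in the study is far larger, but the selection logic is the same; model, parameter values, and noise level here are all hypothetical:

```python
import numpy as np

# Fisher-information-based parameter ranking for a toy model
# y(t) = a * exp(-b * t) + c with additive Gaussian measurement noise.

def model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + c

def jacobian(theta, t, eps=1e-6):
    """Central-difference sensitivities of the output to each parameter."""
    J = np.empty((t.size, len(theta)))
    for j in range(len(theta)):
        d = np.zeros(len(theta))
        d[j] = eps
        J[:, j] = (model(theta + d, t) - model(theta - d, t)) / (2.0 * eps)
    return J

t = np.linspace(0.0, 5.0, 100)
theta = np.array([2.0, 1.0, 0.5])
sigma = 0.05                               # measurement noise std
J = jacobian(theta, t)
fim = J.T @ J / sigma ** 2                 # Fisher information matrix
crlb = np.diag(np.linalg.inv(fim))         # Cramer-Rao variance bounds
ranking = np.argsort(crlb)                 # most identifiable first
```

Parameters with large Cramer-Rao bounds are the ones the study would fix to preliminary estimates; the sensitive (small-bound) parameters are the ones worth estimating from data.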

