Spatial and Temporal Transferability of a Distributed Energy-Balance Glacier Melt Model

2011 ◽  
Vol 24 (5) ◽  
pp. 1480-1498 ◽  
Author(s):  
Andrew H. MacDougall ◽  
Gwenn E. Flowers

Abstract. Modeling melt from glaciers is crucial to assessing regional hydrology and eustatic sea level rise. The transferability of such models in space and time has been widely assumed but rarely tested. To investigate melt model transferability, a distributed energy-balance melt model (DEBM) is applied to two small glaciers of opposing aspects that are 10 km apart in the Donjek Range of the St. Elias Mountains, Yukon Territory, Canada. An analysis is conducted in four stages to assess the transferability of the DEBM in space and time: 1) locally derived model parameter values and meteorological forcing variables are used to assess model skill; 2) model parameter values are transferred between glacier sites and between years of study; 3) measured meteorological forcing variables are transferred between glaciers using locally derived parameter values; 4) both model parameter values and measured meteorological forcing variables are transferred from one glacier site to the other, treating the second glacier site as an extension of the first. The model parameters are transferable in time to within <10% uncertainty in the calculated surface ablation over most or all of a melt season. Transferring model parameters or meteorological forcing variables in space creates large errors in modeled ablation. If select quantities (ice albedo, initial snow depth, and summer snowfall) are retained at their locally measured values, model transferability can be improved to achieve ≤15% uncertainty in the calculated surface ablation.
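The core melt step of an energy-balance model reduces to converting a positive net surface energy flux into water-equivalent melt. A minimal sketch of that conversion (illustrative only, not the DEBM itself; constants and function names are my own):

```python
LATENT_HEAT_FUSION = 3.34e5  # J kg^-1, latent heat of fusion of ice
WATER_DENSITY = 1000.0       # kg m^-3

def melt_rate_m_we(q_net_w_m2, dt_s):
    """Melt (m water equivalent) produced by a net surface energy flux
    (W m^-2) acting over dt_s seconds; no melt when the balance is negative."""
    if q_net_w_m2 <= 0.0:
        return 0.0
    return q_net_w_m2 * dt_s / (WATER_DENSITY * LATENT_HEAT_FUSION)

# Example: a sustained 200 W m^-2 surplus over one day melts roughly 5 cm w.e.
daily_melt = melt_rate_m_we(200.0, 86400.0)
```

In a distributed model this calculation is repeated per grid cell, with the net flux assembled from radiative and turbulent components.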

2010 ◽  
Vol 4 (4) ◽  
pp. 2143-2167 ◽  
Author(s):  
A. H. MacDougall ◽  
B. A. Wheler ◽  
G. E. Flowers

Abstract. Transferability of glacier melt models is necessary for reliable projections of melt over large glacierized regions and over long time-scales. The transferability of such models has been examined for individual model types, but inter-comparison has been hindered by the diversity of validation statistics used to quantify transferability. We apply four common types of melt models – the classical degree-day model, an enhanced temperature-index model, a simplified energy-balance model and a full energy-balance model – to two glaciers in the same small mountain range. The transferability of each model is examined in space and over two melt seasons. We find that the full energy balance model is consistently the most transferable, with deviations in estimated glacier-wide surface ablation of ≤ 35% when the model is forced with parameters derived from the other glacier and/or melt season. The other three models have deviations in glacier-wide surface ablation of ≥ 100% under the same forcings. In addition, we find that there is no simple relationship between model complexity and model transferability.
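The simplest of the four model types compared, the classical degree-day model, relates melt linearly to positive air temperatures. A minimal sketch, with an illustrative degree-day factor (not a value calibrated in the paper):

```python
def degree_day_melt(temps_c, ddf_mm=4.0):
    """Classical degree-day melt (mm w.e.): sum of positive daily mean
    temperatures (degC) times a degree-day factor (mm w.e. degC^-1 d^-1).
    The default DDF is illustrative only."""
    return ddf_mm * sum(max(t, 0.0) for t in temps_c)

# One week of daily mean temperatures; only positive days contribute.
week = [3.0, 5.5, -1.0, 0.0, 2.0, 6.0, 4.5]
melt_mm = degree_day_melt(week)  # 4.0 mm/degC/day * 21.0 degC-days
```

Its low data demand is the model's appeal; the abstract's finding is that this simplicity does not buy transferability.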


Model parameters for a group of commonly used total hip metal arthroplasty (THA) metals are optimized and studied numerically. Building on previous ceramic-THA optimization software contributions, an improved multiobjective programming method/algorithm is implemented for THA wear modeling. This computational nonlinear multifunctional optimization is performed for a number of THA metals with different hardnesses and in vitro experimental erosion rates. The new software was created for two systems, Matlab and GNU Octave. Numerical results prove acceptable for in vitro simulations. These findings are verified with 2D graphical optimization and 3D interior optimization methods, which yield low residual norms. The model solutions largely match the in vitro standards reported in the literature for experimental simulations. Multifunctional optimization yields acceptable model-parameter values with low residual norms. Useful mathematical results are obtained for wear prediction, model refinement, and simulation methodology. The wear magnitudes obtained for in vitro determinations with these model-parameter data constitute the advance of the method. In consequence, the erosion predictions for laboratory experimental testing in THA add an efficacious, usable improvement to the literature. The results are additionally extrapolated to Medical Physics applications and metal-THA bioengineering designs.


1998 ◽  
Vol 14 (3) ◽  
pp. 276-291 ◽  
Author(s):  
James C. Martin ◽  
Douglas L. Milliken ◽  
John E. Cobb ◽  
Kevin L. McFadden ◽  
Andrew R. Coggan

This investigation sought to determine whether cycling power could be accurately modeled. A mathematical model of cycling power was derived, and values for each model parameter were determined. A bicycle-mounted power measurement system was validated by comparison with a laboratory ergometer. Power was measured during road cycling, and the measured values were compared with the values predicted by the model. The measured values for power were highly correlated (R² = 0.97) with, and were not different from, the modeled values. The standard error between the modeled and measured power (2.7 W) was very small. The model was also used to estimate the effects of changes in several model parameters on cycling velocity. Over the range of parameter values evaluated, velocity varied linearly (R² > 0.99). The results demonstrated that cycling power can be accurately predicted by a mathematical model.
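A road-cycling power model of the general kind validated here balances aerodynamic drag, rolling resistance, and climbing against drivetrain losses. A hedged sketch (the parameter defaults are illustrative, not the paper's fitted values):

```python
import math

RHO_AIR = 1.2  # kg m^-3, air density at low altitude
G = 9.81       # m s^-2

def cycling_power(v, cda=0.25, crr=0.004, mass=85.0, grade=0.0,
                  drivetrain_eff=0.976):
    """Steady-state road cycling power (W): aerodynamic drag + rolling
    resistance + climbing, divided by drivetrain efficiency.
    v is ground speed in m s^-1 (no wind assumed); grade is rise/run."""
    p_aero = 0.5 * RHO_AIR * cda * v ** 3          # drag area CdA in m^2
    p_roll = crr * mass * G * math.cos(math.atan(grade)) * v
    p_climb = mass * G * math.sin(math.atan(grade)) * v
    return (p_aero + p_roll + p_climb) / drivetrain_eff
```

With these defaults, riding at 10 m/s on flat ground costs roughly 190 W, and the cubic aerodynamic term dominates at speed.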


2009 ◽  
Vol 55 (190) ◽  
pp. 258-274 ◽  
Author(s):  
Marco Carenzo ◽  
Francesca Pellicciotti ◽  
Stefan Rimkus ◽  
Paolo Burlando

Abstract. We investigate the transferability of an enhanced temperature-index melt model that was developed and tested on Haut Glacier d’Arolla, Switzerland, in the 2001 season. The model’s empirical parameters (temperature factor, TF, and shortwave radiation factor, SRF) are recalibrated for: (1) other locations on Haut Glacier d’Arolla; (2) subperiods of distinct meteorological conditions; (3) different years on Haut Glacier d’Arolla; and (4) other glaciers in different years. The model parameters are optimized against simulations of an energy-balance model validated against ablation observations. Results are compared with those obtained with the original parameters. The model works very well when applied to other sites, seasons and glaciers, with the exception of overcast conditions. Differences are due to underestimation of high melt rates. The parameter values are associated with the prevailing energy-balance conditions, showing that high SRF values are obtained on clear-sky days, whereas higher TF values are typical of locations where glacier winds prevail and turbulent fluxes are high. We also provide a range of parameters clearly associated with the site’s location and its meteorological characteristics that could help to assign parameter values to sites where few data are available.
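An enhanced temperature-index formulation of this type combines a temperature factor with a shortwave radiation factor acting on absorbed solar radiation, above a temperature threshold. A minimal sketch (parameter values are illustrative, not the recalibrated values from the paper):

```python
def eti_melt(temp_c, swin_w_m2, albedo, tf=0.04, srf=0.0094, t_threshold=1.0):
    """Enhanced temperature-index melt rate (mm w.e. h^-1):
    M = TF * T + SRF * (1 - albedo) * G when T exceeds a threshold, else 0.
    TF, SRF and the threshold are illustrative defaults."""
    if temp_c <= t_threshold:
        return 0.0
    return tf * temp_c + srf * (1.0 - albedo) * swin_w_m2
```

The albedo term is what lets the model separate clear-sky conditions (high SRF contribution) from the temperature-driven melt that dominates under glacier winds.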


Standards ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 53-66
Author(s):  
Francisco Casesnoves

Total hip metal arthroplasty (THA) constitutes an important proportion of standard clinical hip implant usage in Medical Physics and Biomedical Engineering. A computational nonlinear optimization is performed with two commonly used metal materials in Metal-on-Metal (MoM) THA, namely Cast Co-Cr Alloy and Titanium. The principal result is the numerical determination of the dimensionless constant parameter K of the model. Results from a new algorithm, more powerful than those of previous contributions, show significant improvements. Numerical standard figures for dual optimization give acceptable model-parameter values with low residuals. These results are also demonstrated with 2D and 3D graphical/interior optimization. According to the findings and calculations, the standard optimized metal-model parameters are mathematically proven and verified. Mathematical consequences are obtained for model improvements and in vitro simulation methodology. The wear magnitude for in vitro determinations with these model-parameter data constitutes the innovation of the method. In consequence, the erosion predictions for laboratory experimental testing in THA add valuable information to the literature. Applications lead to medical physics improvements for material/metal-THA designs.
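A dimensionless wear constant K of this kind appears in the Archard-type wear law, V = K·F·s/H, widely used in THA wear modeling; I assume that form here. Under that assumption, K can be fitted to in vitro measurements by least squares (data and function names below are hypothetical):

```python
def fit_archard_k(loads_n, distances_m, hardness_pa, volumes_m3):
    """Least-squares estimate of the dimensionless Archard wear coefficient K
    from paired (load F, sliding distance s, wear volume V) measurements,
    assuming V = K * F * s / H with material hardness H.
    This is an illustrative fit, not the paper's multiobjective algorithm."""
    xs = [f * s / hardness_pa for f, s in zip(loads_n, distances_m)]
    num = sum(x * v for x, v in zip(xs, volumes_m3))
    den = sum(x * x for x in xs)
    return num / den
```

Given consistent units (N, m, Pa, m³), K comes out dimensionless; the residual norm of the fit plays the role of the "low residuals" quoted in the abstract.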


2021 ◽  
Author(s):  
Baki Harish ◽  
Sandeep Chinta ◽  
Chakravarthy Balaji ◽  
Balaji Srinivasan

The Indian subcontinent is prone to tropical cyclones that originate in the Indian Ocean and cause widespread destruction to life and property. Accurate prediction of cyclone track, landfall, wind, and precipitation is critical in minimizing damage. The Weather Research and Forecasting (WRF) model is widely used to predict tropical cyclones. The accuracy of the model prediction depends on initial conditions, physics schemes, and model parameters. The parameter values are selected empirically by scheme developers using trial and error, implying that the parameter values are sensitive to climatological conditions and regions. The WRF model has several hundred tunable parameters, and calibrating all of them is practically impossible since it would require thousands of simulations. Therefore, sensitivity analysis is critical to screen out the parameters that significantly impact the meteorological variables. The Sobol’ sensitivity analysis method is used to identify the sensitive WRF model parameters. As this method requires a considerable number of samples to evaluate the sensitivity adequately, machine learning algorithms are used to construct surrogate models trained on a limited number of samples; these can then generate the vast number of required pseudo-samples. Five machine learning algorithms, namely Gaussian Process Regression (GPR), Support Vector Machine, Regression Tree, Random Forest, and K-Nearest Neighbor, are considered in this study. Ten-fold cross-validation is used to evaluate the surrogate models constructed with the five algorithms and to identify the most robust surrogate model among them. The samples generated from this surrogate model are then used by the Sobol’ method to evaluate the WRF model parameter sensitivity.
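First-order Sobol' indices can be estimated with the Saltelli sampling scheme, which is what a cheap surrogate ultimately gets asked to feed. A pure-Python sketch on a toy model (a stand-in for the WRF surrogate; all names are my own):

```python
import random

def sobol_first_order(model, n_params, n_samples=20000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (Saltelli scheme)
    for a model whose inputs are uniform on [0, 1]. In practice `model`
    would be a trained surrogate, so the many evaluations are cheap."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    y_a = [model(x) for x in A]
    y_b = [model(x) for x in B]
    mean = sum(y_a) / n_samples
    var = sum((y - mean) ** 2 for y in y_a) / n_samples
    indices = []
    for i in range(n_params):
        # Matrix A with its i-th column taken from B
        y_abi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        v_i = sum(yb * (yabi - ya)
                  for yb, yabi, ya in zip(y_b, y_abi, y_a)) / n_samples
        indices.append(v_i / var)
    return indices
```

For a model that depends only on its first input, S1 should come out near 1 and the remaining indices near 0; insensitive parameters are then screened out before calibration.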


2013 ◽  
Vol 16 (2) ◽  
pp. 392-406 ◽  
Author(s):  
Gift Dumedah ◽  
Paulin Coulibaly

Data assimilation has allowed hydrologists to account for imperfections in observations and uncertainties in model estimates. Typically, updated members are determined as a compromised merger between observations and model predictions. The merging procedure is conducted in decision space before model parameters are updated to reflect the assimilation. However, given the dynamics between states and model parameters, there is limited guarantee that when updated parameters are applied in measurement models, the resulting estimate will be the same as the updated estimate. To account for these challenges, this study uses evolutionary data assimilation (EDA) to estimate streamflow in gauged and ungauged watersheds. EDA assimilates daily streamflow into a Sacramento soil moisture accounting model to determine updated members for eight watersheds in southern Ontario, Canada. The updated members are combined to estimate streamflow in ungauged watersheds, and the results show high estimation accuracy for both gauged and ungauged watersheds. An evaluation of the commonalities in model parameter values across and between gauged and ungauged watersheds underscores the critical contribution of consistent model parameter values. The findings show a high degree of commonality in model parameter values, such that members of a given gauged/ungauged watershed can be estimated using members from another watershed.
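The "compromised merger between observations and model predictions" can be illustrated by the generic variance-weighted update used in sequential assimilation (a scalar Kalman-style sketch for intuition only, not the EDA algorithm itself):

```python
def merge_estimates(model_pred, model_var, obs, obs_var):
    """Variance-weighted merger of a model prediction and an observation:
    the update moves toward whichever source is more certain, and the
    merged variance is always smaller than the prior model variance."""
    gain = model_var / (model_var + obs_var)
    updated = model_pred + gain * (obs - model_pred)
    updated_var = (1.0 - gain) * model_var
    return updated, updated_var

# Example: streamflow forecast of 12 m^3/s (variance 4) merged with an
# observation of 10 m^3/s (variance 1) lands closer to the observation.
q, q_var = merge_estimates(12.0, 4.0, 10.0, 1.0)
```

The paper's point is that this merger happens in decision space: the parameters that would reproduce the merged estimate are not guaranteed to be the ones actually carried forward, which is the gap EDA addresses.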


2017 ◽  
Vol 21 (11) ◽  
pp. 5663-5679 ◽  
Author(s):  
Björn Guse ◽  
Matthias Pfannerstill ◽  
Abror Gafurov ◽  
Jens Kiesel ◽  
Christian Lehr ◽  
...  

Abstract. In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity of the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on Latin hypercube sampling. Ten performance criteria, including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r) as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter.
In this study, a high bijective connective strength between model parameters and performance criteria is found for low- and mid-flow conditions. Moreover, the RT analyses emphasise the benefit of an individual analysis of the three components of KGE and of the FDC segments. Furthermore, the RT analyses highlight under which conditions these performance criteria provide insights into precise parameter identification. Our results show that separate performance criteria are required to identify dominant parameters for low- and mid-flow conditions, whilst the number of required performance criteria for high flows increases with increasing process complexity in the catchment. Overall, the analysis of the connective strength between model parameters and performance criteria using RTs contributes to a more realistic handling of parameters and performance criteria in hydrological modelling.
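The performance criteria named above (NSE, and KGE with its three components) are standard in hydrology; for reference, a plain-Python sketch of how they are computed from observed and simulated discharge series:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of squared model error
    to the variance of the observations (1 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling-Gupta efficiency and its three components:
    r (correlation), alpha (variability ratio), beta (bias ratio)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / n
    r = cov / (so * ss)
    alpha = ss / so
    beta = ms / mo
    k = 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
    return k, r, alpha, beta
```

Analysing alpha, beta and r separately, as the study recommends, shows whether a parameter change affects timing, variability or bias rather than collapsing everything into one score.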


Dependability ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 9-17 ◽  
Author(s):  
A. V. Antonov ◽  
V. A. Chepurko ◽  
A. N. Cherniaev

Aim. Common cause failures (CCFs) are dependent failures of groups of certain elements that occur simultaneously or within a short period of time (i.e. almost simultaneously) due to a single common cause (e.g. a sudden change of climatic operating conditions, flooding of premises, etc.). A dependent failure is a multiple failure of several elements of a system, whose probability cannot be expressed as a simple product of the probabilities of unconditional failures of individual elements. CCF probability calculations use a number of common models, i.e. the Greek letter, alpha-factor and beta-factor models and their variants. The beta-factor model is the simplest in terms of simulation of dependent failures and further dependability calculations. Other models, when used in simulation, involve combinatorial enumeration of dependent events in a group of n events, which becomes labour-intensive if the number n is high. For the selected structure diagrams of dependability, the paper analyzes the calculation method of system failure probability with CCF taken into account for the beta-factor model. The aim of the paper is to thoroughly analyze the beta-factor method for three structure diagrams of dependability, research the effects of the model parameters on the final result, and find the limitations of beta-factor model applicability. Methods. The calculations were performed using numerical methods of solution of equations and analytical methods of function study. Conclusions. The paper features an in-depth study of the method of undependability calculation for three structure diagrams that accounts for CCF and uses the beta-factor model. In the first example, for the selected structure diagram of n parallel elements with identical dependability, it is analytically shown that accounting for CCF does not necessarily cause increased undependability.
In the second example, of a primary junction of n elements with identical dependability, it is shown that, depending on the parameter values, accounting for CCF can cause either increased or decreased undependability. A number of beta-factor model parameter value sets were identified that cause unacceptable values of system failure probability. These sets correspond to relatively high model parameter values and are hardly attainable in practice in the engineering of real systems with highly dependable components. In the third example, the conventional bridge diagram with two groups of CCFs is considered. The complex, ambivalent effect of the beta-factor model parameters on the probability of failure is shown. As in the second example, limitations of the applicability of the beta-factor model are identified.
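Under the beta-factor model, a fraction β of each element's failure probability is attributed to a common cause that fails the whole group at once. For n identical elements in parallel, a common approximation is (an illustrative sketch; as the paper shows, the effect of the parameters is not always this one-sided):

```python
def parallel_system_unreliability(q, beta, n):
    """Failure probability of n identical redundant (parallel) elements under
    the beta-factor model: the system fails if all elements fail
    independently (each with probability (1 - beta) * q) or if a
    common-cause failure (probability beta * q) occurs."""
    independent = ((1.0 - beta) * q) ** n
    common_cause = beta * q
    # Union of the two failure events, assumed independent of each other
    return independent + common_cause - independent * common_cause
```

With q = 0.01 and n = 2, raising β from 0 to 0.1 moves the system failure probability from 1e-4 to about 1.1e-3: in this parameterization the common-cause term dominates the redundancy benefit, which is exactly the kind of effect the paper examines analytically.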


2018 ◽  
Vol 20 (1) ◽  
pp. 33
Author(s):  
A. Mirzayeva ◽  
N.A. Slavinskaya ◽  
M. Abbasi ◽  
J.H. Starcke ◽  
W. Li ◽  
...  

A module of the PrIMe automated data-centric infrastructure, Bound-to-Bound Data Collaboration (B2BDC), was used for the analysis of systematic uncertainty and data consistency of the H2/CO reaction model (73/17). To this end, a dataset of 167 experimental targets (ignition delay time and laminar flame speed) and 55 active model parameters (pre-exponent factors in the Arrhenius form of the reaction rate coefficients) was constructed. Consistency analysis of the experimental data from the composed dataset revealed disagreement between models and data. Two consistency measures were applied to identify the quality of experimental targets (Quantities of Interest, QoI): a scalar consistency measure, which quantifies the tightening index of the constraints while still ensuring the existence of a set of model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds; and a newly developed method of computing the vector consistency measure (VCM), which determines the minimal bound changes for QoIs initially identified as inconsistent, each bound by its own extent, while still ensuring the existence of such a parameter set. The consistency analysis suggested that elimination of 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. The feasible parameter set was then constructed by decreasing the uncertainties of several reaction rate coefficients. This dataset was then subjected to model optimization and analysis within the B2BDC framework. Four methods of parameter optimization were applied, including those unique to the B2BDC framework. The optimized models showed improved agreement with experimental values, as compared to the initially assembled model. Moreover, predictions for experiments not included in the initial dataset were investigated.
The results demonstrate benefits of applying the B2BDC methodology for development of predictive kinetic models.

