Effects of wildfire on catchment runoff response: a modelling approach to detect changes in snow-dominated forested catchments

2010 ◽  
Vol 41 (5) ◽  
pp. 378-390 ◽  
Author(s):  
Jan Seibert ◽  
Jeffrey J. McDonnell ◽  
Richard D. Woodsmith

Wildfire is an important disturbance affecting hydrological processes through alteration of vegetation cover and soil characteristics. The effects of fire on hydrological systems at the catchment scale are not well known, largely because site-specific data from both before and after wildfire are rare. In this study a modelling approach was employed for change detection analyses of one such dataset to quantify the effects of wildfire on catchment hydrology. Using data from the Entiat Experimental Forest (Washington State, US), a conceptual runoff model was applied to pre- and post-fire periods, and changes were analyzed in three ways: reconstruction of runoff series, comparison of model parameters, and comparison of simulations using parameter sets calibrated to the two different periods. On average, observed post-fire peak flows were 120% higher than those modelled based on pre-fire conditions. For the post-fire period, parameter values for the snow routine indicated deeper snow packs and earlier, more rapid snowmelt. The net effect of the changes in all parameters was a large increase in post-fire peak flows. Overall, the analyses show that change detection modelling provides a viable alternative to the paired-watershed approach for analyzing wildfire disturbance effects on runoff dynamics and supports discussion of changes in hydrological processes.
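The 120% figure above corresponds to comparing observed post-fire peaks against peaks simulated with the model calibrated to pre-fire conditions. A minimal sketch of that comparison; the discharge values are hypothetical illustrations, not the study's data:

```python
import numpy as np

def peak_flow_change(observed_peaks, simulated_peaks):
    """Mean relative change (%) of observed post-fire peak flows versus
    peaks simulated with a model calibrated to pre-fire data."""
    obs = np.asarray(observed_peaks, dtype=float)
    sim = np.asarray(simulated_peaks, dtype=float)
    return float(np.mean((obs - sim) / sim) * 100.0)

# Hypothetical peak discharges (m^3/s), for illustration only
observed = [2.2, 3.3, 4.4]
simulated = [1.0, 1.5, 2.0]
print(peak_flow_change(observed, simulated))  # approximately 120 (%)
```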

2013 ◽  
Vol 10 (12) ◽  
pp. 15375-15408 ◽  
Author(s):  
O. Munyaneza ◽  
A. Mukubwa ◽  
S. Maskey ◽  
J. Wenninger ◽  
S. Uhlenbrook

Abstract. In the last couple of years, several hydrological research projects were undertaken in the Migina catchment (243.2 km2), a tributary of the Kagera river in Southern Rwanda. These projects aimed to understand the hydrological processes of the catchment using analytical and experimental approaches and to build a pilot case whose experience can be extended to other catchments in Rwanda. In the present study, we developed a hydrological model of the catchment, which can be used to inform water resources planning and decision making. The semi-distributed hydrological model HEC-HMS (version 3.5) was used with its soil moisture accounting, unit hydrograph, linear reservoir (for base flow) and Muskingum-Cunge (river routing) methods. We used rainfall data from 12 stations and streamflow data from 5 stations, collected as part of this study over a period of two years (May 2009 to June 2011). The catchment was divided into five sub-catchments, each represented by one of the five streamflow gauges. The model parameters were calibrated separately for each sub-catchment using the observed streamflow data. Calibration results were found acceptable at four stations, with a Nash–Sutcliffe model efficiency of 0.65 for daily runoff at the catchment outlet. Due to the lack of sufficient and reliable data for longer periods, a model validation (split sample test) was not undertaken. However, we compared our model results, in terms of runoff components, with results from a tracer-based hydrograph separation from a previous study. The model performed well in simulating the total flow volume, peak flow and timing, as well as the portions of direct runoff and base flow. We observed considerable disparities in the parameters (e.g. groundwater storage) and runoff components across the five sub-catchments, which provided insights into the different hydrological processes at the sub-catchment scale.
We conclude that such disparities justify the need to consider catchment subdivisions if such parameters and components of the water cycle are to form the basis for decision making in water resources planning in the Migina catchment.
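The Nash–Sutcliffe efficiency used to judge the calibration compares squared model errors against the variance of the observations. A minimal sketch of the standard formula (the 0.65 reported above comes from the study's daily runoff, not from this toy example):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / sum of squared deviations
    of the observations from their mean. 1.0 is a perfect fit; 0.0 means
    the model is no better than predicting the observed mean."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
```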


2011 ◽  
Vol 8 (3) ◽  
pp. 4583-4640 ◽  
Author(s):  
G. Carrillo ◽  
P. A. Troch ◽  
M. Sivapalan ◽  
T. Wagener ◽  
C. Harman ◽  
...  

Abstract. Catchment classification is an efficient method to synthesize our understanding of how climate variability and catchment characteristics interact to define hydrological response. One way to accomplish catchment classification is to empirically relate climate and catchment characteristics to hydrologic behavior and to quantify the skill of predicting hydrologic response based on the combination of climate and catchment characteristics. Since there are important subsurface properties that cannot be readily measured, the skill of classification reflects (the lack of) the amount of cross-correlation between observable landscape features and unobservable subsurface features. The resulting empirical approach is also strongly controlled by the dataset used, and therefore lacks the power to generalize beyond the heterogeneity of characteristics found in the dataset. An alternative approach, that can partially alleviate the above-mentioned issue of observability, uses our current level of hydrological understanding, expressed in the form of a process-based model, to interrogate how climate and catchment characteristics interact to produce the observed hydrologic response. In this paper we present a general method of hydrologic analysis by means of a process-based model to support a bottom-up catchment classification system complementary to top-down classification methods. The model uses topographic, geomorphologic, soil and vegetation information at the catchment scale and conditions parameter values using readily available data on precipitation, temperature and streamflow. It is applicable to a wide range of catchments in different climate settings. We have developed a step-by-step procedure to analyze the observed hydrologic response and to assign parameter values related to specific components of the model. We applied this procedure to 12 catchments across a climate gradient east of the Rocky Mountains, USA. 
We show that the model is capable of reproducing the observed hydrologic behavior measured through hydrologic signatures chosen at different temporal scales. Next, we analyze the dominant time scales of catchment response and their dimensionless ratios with respect to climate and observable landscape features in an attempt to explain hydrologic partitioning. We find that only a limited number of model parameters can be related to observable landscape features. However, several climate-model time scales, and the associated dimensionless numbers, show scaling relationships with respect to the investigated hydrological signatures (runoff coefficient, baseflow index, and slope of the flow duration curve). Moreover, our analysis revealed systematic co-variation of climate, vegetation and soil related time scales along the climate gradient. If such co-variation can be shown to be robust across many catchments along different climate gradients, it opens up perspectives for model parameterization in ungauged catchments as well as for prediction of hydrologic response in a rapidly changing environment.
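Two of the signatures named above can be computed directly from daily series. A sketch using one common set of definitions (variants exist in the literature; the baseflow index additionally requires a baseflow separation filter and is omitted here):

```python
import numpy as np

def flow_signatures(q, p):
    """Runoff coefficient and slope of the flow duration curve between the
    33% and 66% exceedance percentiles (one common definition among several).
    q: daily streamflow, p: daily precipitation, same units and length."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    runoff_coefficient = q.sum() / p.sum()
    # exceedance percentile E corresponds to np.percentile at 100 - E
    q33, q66 = np.percentile(q, [100 - 33, 100 - 66])
    fdc_slope = (np.log(q33) - np.log(q66)) / (0.66 - 0.33)
    return runoff_coefficient, fdc_slope
```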


2021 ◽  
Vol 11 (7) ◽  
pp. 2898
Author(s):  
Humberto C. Godinez ◽  
Esteban Rougier

Simulation of fracture initiation, propagation, and arrest is a problem of interest for many applications in the scientific community. A number of numerical methods are used for this purpose, and among the most widely accepted is the combined finite-discrete element method (FDEM). To model fracture with FDEM, material behavior is described by specifying a combination of elastic properties, strengths (in the normal and tangential directions), and energy dissipated in failure modes I and II; the failure modes are modeled by incorporating a parameterized softening curve that defines a post-peak stress-displacement relationship unique to each material. In this work, we implement a data assimilation method to estimate key model parameter values, with the objective of improving the calibration process for FDEM fracture simulations. Specifically, we implement the ensemble Kalman filter assimilation method in the Hybrid Optimization Software Suite (HOSS), an FDEM-based code developed for the simulation of fracture and fragmentation behavior. We present a set of assimilation experiments to match the numerical results obtained for a Split Hopkinson Pressure Bar (SHPB) model with experimental observations for granite, achieved by calibrating a subset of model parameters. The results show a steady convergence of the assimilated parameter values towards the observed time/stress curves from the SHPB observations. In particular, both the tensile and shear strengths converge faster than the other parameters considered.
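The core of an ensemble Kalman filter parameter-estimation step is a covariance-weighted nudge of each ensemble member toward perturbed observations. A generic stochastic-EnKF sketch in NumPy, not HOSS's actual implementation; the array shapes and scalar observation-noise model are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(params, predictions, obs, obs_err):
    """One stochastic ensemble Kalman filter update of a parameter ensemble.
    params: (n_ens, n_par) parameter ensemble;
    predictions: (n_ens, n_obs) corresponding model outputs;
    obs: (n_obs,) observed values; obs_err: observation noise std dev."""
    n_ens = params.shape[0]
    P = params - params.mean(axis=0)            # parameter anomalies
    Y = predictions - predictions.mean(axis=0)  # prediction anomalies
    C_py = P.T @ Y / (n_ens - 1)                # param-prediction covariance
    C_yy = Y.T @ Y / (n_ens - 1) + obs_err**2 * np.eye(len(obs))
    K = C_py @ np.linalg.inv(C_yy)              # Kalman gain
    # each member is nudged toward an independently perturbed observation
    perturbed = obs + obs_err * rng.standard_normal((n_ens, len(obs)))
    return params + (perturbed - predictions) @ K.T
```

Repeating this update as new observations arrive drives the ensemble mean toward parameter values consistent with the data, which mirrors the steady convergence described in the abstract.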


Author(s):  
Ali Amasha

Abstract Background Flash floods remain one of the major natural meteorological disasters threatening local communities, causing loss of life and destroying infrastructure. The severity and magnitude of a disaster are reflected in the size of its impacts. Most conventional research models of flood vulnerability focus on hydro-meteorological and morphometric measurements; disaster response, however, requires a quick estimate of flood losses and an assessment of severity based on reliable information. An automated zonal change detection model was applied to western El-Arish City, downstream of the Wadi El-Arish basin, using two high-resolution satellite images dated 2009 and 2011 coupled with a LU/LC GIS layer. The model enabled estimation of the severity of a past flood incident in 2010. Results The model calculated the total changes between the before and after satellite images using pixel-by-pixel comparison. The estimated direct damage covered nearly 32,951 m2 of the mapped LU/LC classes (e.g., 11,407 m2, or 3.17%, of the cultivated lands; 6031 m2, or 7.22%, of the built-up areas; and 4040 m2, or 3.62%, of the paved road network). The estimated cost of losses, in 2010 economic prices for the three selected LU/LC classes, is nearly 25 million USD for cultivated fruit and olive trees, ~ 4 million USD for built-up areas, and ~ 1 million USD for the paved road network. Conclusion Conventional disaster damage and loss estimation requires much detailed data and is time-consuming and costly. The applied model accelerates disaster risk mapping and provides informative support for loss estimation. Decision-makers and professionals can therefore apply this model for quick disaster risk management and recovery.
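Pixel-by-pixel change detection between two co-registered images reduces to thresholding an absolute difference and summing pixel areas. A minimal sketch with toy 3x3 "images" and an arbitrary threshold; the study's model additionally intersects the detected changes with LU/LC classes to attribute damage:

```python
import numpy as np

def changed_area(before, after, pixel_area, threshold):
    """Total area of pixels whose absolute change between two co-registered
    images exceeds the threshold. pixel_area is the ground area of one pixel
    (e.g. in m^2), so the result is in the same units."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return float(np.count_nonzero(diff > threshold) * pixel_area)

# Toy 3x3 'images'; pixel_area = 0.25 m^2 (0.5 m resolution), threshold = 10
before = np.full((3, 3), 100)
after = before.copy()
after[0, 1] = 150   # e.g. deposited sediment
after[2, 0] = 80    # e.g. damaged surface
print(changed_area(before, after, 0.25, 10))  # 0.5
```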


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Guillaume Ropp ◽  
Vincent Lesur ◽  
Julien Baerenzung ◽  
Matthias Holschneider

Abstract We describe a new, original approach to the modelling of the Earth’s magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. The method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. The sources that contribute most significantly to the field measured at the surface of the Earth are modelled, and their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is that it yields reliable error bars for all model parameters. These errors, however, are only as reliable as the descriptions of the different sources and the realism of the prior information used. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
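The sequential step of such an approach is a standard Kalman filter predict/update cycle. A generic linear sketch, not the paper's core-field code; the state and observation dimensions, dynamics F, and noise covariances Q and R are all placeholders:

```python
import numpy as np

def kalman_step(m, P, F, Q, y, H, R):
    """One predict/update cycle of a linear Kalman filter.
    m, P: state mean and covariance; F, Q: state dynamics and process noise;
    y, H, R: observation, observation operator, observation noise covariance."""
    # predict: propagate mean and covariance through the dynamics
    m = F @ m
    P = F @ P @ F.T + Q
    # update: weigh the observation against the predicted state
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    m = m + K @ (y - H @ m)
    P = (np.eye(len(m)) - K @ H) @ P
    return m, P
```

Run over a time series of observations, this yields both a model estimate and its covariance at each epoch, which is where the error bars mentioned in the abstract come from.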


2010 ◽  
Vol 46 (11) ◽  
Author(s):  
Nicolas Zégre ◽  
Arne E. Skaugset ◽  
Nicholas A. Som ◽  
Jeffrey J. McDonnell ◽  
Lisa M. Ganio

2018 ◽  
Vol 51 (4) ◽  
pp. 1059-1068 ◽  
Author(s):  
Pascal Parois ◽  
James Arnold ◽  
Richard Cooper

Crystallographic restraints are widely used during refinement of small-molecule and macromolecular crystal structures. They can be especially useful for introducing additional observations and information into structure refinements against low-quality or low-resolution data (e.g. data obtained at high pressure) or to retain physically meaningful parameter values in disordered or unstable refinements. However, despite the fact that the anisotropic displacement parameters (ADPs) often constitute more than half of the total model parameters determined in a structure analysis, there are relatively few useful restraints for them, examples being Hirshfeld rigid-bond restraints, direct equivalence of parameters and SHELXL RIGU-type restraints. Conversely, geometric parameters can be subject to a multitude of restraints (e.g. absolute or relative distance, angle, planarity, chiral volume, and geometric similarity). This article presents a series of new ADP restraints implemented in CRYSTALS [Parois, Cooper & Thompson (2015), Chem. Cent. J. 9, 30] to give more control over ADPs by restraining, in a variety of ways, the directions and magnitudes of the principal axes of the ellipsoids in locally defined coordinate systems. The use of these new ADP restraints results in more realistic models, as well as a better user experience, through restraints that are more efficient and faster to set up. The use of these restraints is recommended to preserve physically meaningful relationships between displacement parameters in a structural model for rigid bodies, rotationally disordered groups and low-completeness data.
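In least-squares refinement, a restraint is conventionally introduced as an extra weighted observation row alongside the diffraction-derived equations, which is how it adds information to an otherwise under-determined problem. A generic sketch of that mechanism, not CRYSTALS' actual implementation; the matrix names are illustrative:

```python
import numpy as np

def restrained_lsq(A, b, restraint_rows, restraint_targets, weight):
    """Linear least squares with restraints appended as extra weighted
    observation rows: minimize |Ax - b|^2 + weight^2 * |Rx - t|^2."""
    A_aug = np.vstack([A, weight * np.asarray(restraint_rows, dtype=float)])
    b_aug = np.concatenate([b, weight * np.asarray(restraint_targets, dtype=float)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Under-determined system x0 + x1 = 2; the restraint x0 = x1 (row [1, -1],
# target 0) makes the solution unique: x = [1, 1]
x = restrained_lsq(np.array([[1.0, 1.0]]), np.array([2.0]),
                   [[1.0, -1.0]], [0.0], 1.0)
```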


2013 ◽  
Vol 17 (2) ◽  
pp. 817-828 ◽  
Author(s):  
M. Stoelzle ◽  
K. Stahl ◽  
M. Weiler

Abstract. Streamflow recession has been investigated by a variety of methods, often involving the fit of a model to empirical recession plots to parameterize a non-linear storage–outflow relationship based on the dQ/dt−Q method. Such recession analysis methods (RAMs) are used to estimate hydraulic conductivity, storage capacity, or aquifer thickness and to model streamflow recession curves for regionalization and prediction at the catchment scale. Numerous RAMs have been published, but little is known about how comparably the resulting recession models distinguish characteristic catchment behavior. In this study we combined three established recession extraction methods with three different methods of fitting the parameters of the power-law storage–outflow model, to compare the range of recession characteristics that result from the application of these different RAMs. The resulting recession characteristics, including recession time and corresponding storage depletion, were evaluated for 20 meso-scale catchments in Germany. We found plausible ranges for model parameterization; however, calculated recession characteristics varied over two orders of magnitude. While the recession characteristics of the 20 catchments derived with the different methods correlate strongly, particularly for the RAMs that use the same extraction method, not all rank the catchments consistently, and the differences among some of the methods are larger than among the catchments. To elucidate this variability we discuss the ambiguous roles of recession extraction procedures, the parameterization of the storage–outflow model, and the limitations of the presented recession plots. The results suggest strong limitations to the comparability of recession characteristics derived with different methods, not only in the model parameters but also in the relative characterization of different catchments.
A multiple-methods approach to investigating streamflow recession characteristics should be considered for applications whenever possible.
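A typical RAM of the kind compared here fits -dQ/dt = aQ^b to extracted recession segments. A sketch using one of many possible parameter-fitting choices, ordinary least squares in log-log space, applied to a single receding segment (real RAMs differ in both the extraction and the fitting step, which is exactly the variability the study examines):

```python
import numpy as np

def fit_recession(q):
    """Fit the power-law storage-outflow relation -dQ/dt = a * Q^b to a
    streamflow series by ordinary least squares in log-log space,
    keeping only strictly receding time steps."""
    q = np.asarray(q, dtype=float)
    dqdt = np.diff(q)              # discharge change per time step
    qm = 0.5 * (q[:-1] + q[1:])    # midpoint discharge for each step
    mask = dqdt < 0                # keep strictly receding steps only
    x = np.log(qm[mask])
    y = np.log(-dqdt[mask])
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b
```

For a synthetic linear-reservoir recession Q(t) = Q0 exp(-kt), the fit recovers b close to 1 and a close to k, as expected.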


2019 ◽  
Author(s):  
Arsenii Dokuchaev ◽  
Svyatoslav Khamzin ◽  
Olga Solovyova

Abstract Ageing is the dominant risk factor for cardiovascular diseases. A great body of experimental data has been gathered on cellular remodelling in the ageing myocardium of animals, but very few experimental data are available on age-related changes in the human cardiomyocyte. We used our combined electromechanical model of the human cardiomyocyte and the population modelling approach to investigate the variability in the response of cardiomyocytes to age-related changes in the model parameters. To generate the model population, we varied nine model parameters and excluded model samples with biomarkers falling outside the physiological ranges. We evaluated the response to age-related changes in four electrophysiological model parameters reported in the literature: reductions in the density of the transient outward K+ current and in the maximal velocity of SERCA, and increases in the density of the Na+/Ca2+ exchange current and the L-type Ca2+ current. The sensitivity of the action potential biomarkers to individual parameter variations was assessed. Each parameter modulation caused an increase in APD, while the sensitivity of the model to changes in GCaL and Vmax_up was much higher than to changes in Gto and KNaCa. Then 60 age-related sets of the four parameters were randomly generated, and each set was applied to every model in the control population. We calculated the frequency of model samples with repolarisation anomalies (RA) and the shortening of the electro-mechanical window in the ageing model populations as an arrhythmogenic ageing score. The linear dependence of the score on the deviation of the parameters showed a high coefficient of determination, with the most significant impact due to the age-related change in the CaL current. The population-based approach allowed us to classify models with low and high risk of age-related RA and to predict risks based on the control biomarkers.
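The population-of-models step described above amounts to rejecting parameter sets whose simulated biomarkers leave physiological ranges. A schematic sketch; the biomarker function and range values here are placeholders, not the study's electromechanical model:

```python
def filter_population(param_sets, biomarker_fn, ranges):
    """Keep only parameter sets whose simulated biomarkers fall inside
    physiological ranges. biomarker_fn maps a parameter vector to a dict
    of biomarker values; ranges maps biomarker name -> (lo, hi)."""
    accepted = []
    for p in param_sets:
        b = biomarker_fn(p)
        if all(lo <= b[name] <= hi for name, (lo, hi) in ranges.items()):
            accepted.append(p)
    return accepted

# Placeholder biomarker: pretend APD (ms) scales with the first parameter
apd_model = lambda p: {"APD": p[0] * 100.0}
kept = filter_population([[1.0], [3.0], [5.0]], apd_model,
                         {"APD": (200.0, 400.0)})
```

The same accepted population can then be re-simulated under each age-related parameter set to score repolarisation anomalies, as the study does.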


Geophysics ◽  
2021 ◽  
pp. 1-73
Author(s):  
Bastien Dupuy ◽  
Anouar Romdhane ◽  
Pierre-Louis Nordmann ◽  
Peder Eliasson ◽  
Joonsang Park

Risk assessment of CO2 storage requires the use of geophysical monitoring techniques to quantify changes in selected reservoir properties such as CO2 saturation, pore pressure and porosity. Conformance monitoring and associated decision-making rest upon the quantified properties derived from geophysical data, with uncertainty assessment. A general framework combining seismic and controlled source electromagnetic (CSEM) inversions with rock physics inversion is proposed, with fully Bayesian formulations for proper quantification of uncertainty. The Bayesian rock physics inversion rests upon two stages. First, a search stage explores the model space and derives models with an associated probability density function (PDF). Second, an appraisal or importance sampling stage serves as a "correction" step to ensure that the full model space is explored and that the estimated posterior PDF can be used to derive quantities such as marginal probability densities. Both steps are based on the neighbourhood algorithm. The approach does not require any linearization of the rock physics model or assumptions about the distribution of the model parameters. After describing the CO2 storage context, the available data from the Sleipner field before and after CO2 injection (baseline and monitor), and the rock physics models, we perform an extended sensitivity study. We show that prior information is crucial, especially in the monitor case. We demonstrate that joint inversion of seismic and CSEM data is also key to quantifying CO2 saturations properly. We finally apply the full inversion strategy to real data from Sleipner. We obtain rock frame moduli, porosity, saturation and patchiness exponent distributions, with associated uncertainties, along a 1D profile before and after injection. The results are consistent with geological knowledge and reservoir simulations, i.e., the CO2 saturations are larger beneath the caprock, confirming upward migration of CO2 by buoyancy.
The estimates of the patchiness exponent have a larger uncertainty, suggesting semi-patchy mixing behaviour.
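Stripped of the neighbourhood-algorithm machinery, the Bayesian core of such an inversion weights candidate models by prior times a Gaussian data likelihood and normalizes. A deliberately simplified grid sketch; the paper's two-stage search/appraisal scheme and actual rock physics forward model are not reproduced here:

```python
import numpy as np

def posterior_on_grid(models, predict, data, sigma, prior):
    """Normalized Bayesian posterior over a grid of candidate models,
    with a Gaussian data-misfit likelihood of std dev sigma.
    predict maps a model to its predicted data vector."""
    post = np.array([
        prior[i] * np.exp(-0.5 * np.sum((predict(m) - data) ** 2) / sigma ** 2)
        for i, m in enumerate(models)
    ])
    return post / post.sum()   # normalize to a PDF over the grid
```

From such a posterior, marginal densities and uncertainty intervals for each property follow directly, which is the kind of output (saturation and patchiness-exponent distributions with error bars) described above.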

