Linear Optimal Runoff Aggregate (LORA): A global gridded synthesis runoff product

2018 ◽  
Author(s):  
Sanaa Hobeichi ◽  
Gab Abramowitz ◽  
Jason Evans ◽  
Hylke E. Beck

Abstract. No synthesized global gridded runoff product, derived from multiple sources, is available, despite such a product being useful to meet the needs of many global water initiatives. We apply an optimal weighting approach to merge runoff estimates from hydrological models constrained with observational streamflow records. The weighting method is based on the ability of the models to match observed streamflow data while accounting for error covariance between the participating products. To address the lack of observed streamflow for many regions, a dissimilarity method was applied to transfer the weights of the participating products to the ungauged basins from the closest gauged basins using dissimilarity between basins in physiographic and climatic characteristics as a proxy for distance. We perform out-of-sample tests to examine the success of the dissimilarity approach and we confirm that the weighted product performs better than its 11 constituent products in a range of metrics. Our resulting synthesized global gridded runoff product is available at monthly time scales, and includes time-variant uncertainty, for the period 1980–2012 on a 0.5° grid. The synthesized global gridded runoff product broadly agrees with published runoff estimates at many river basins, and represents the seasonal runoff cycle well for most of the globe. The new product, called Linear Optimal Runoff Aggregate (LORA), is a valuable synthesis of existing runoff products and will be freely available for download at https://geonetwork.nci.org.au/.

2019 ◽  
Vol 23 (2) ◽  
pp. 851-870 ◽  
Author(s):  
Sanaa Hobeichi ◽  
Gab Abramowitz ◽  
Jason Evans ◽  
Hylke E. Beck

Abstract. No synthesized global gridded runoff product, derived from multiple sources, is available, despite such a product being useful for meeting the needs of many global water initiatives. We apply an optimal weighting approach to merge runoff estimates from hydrological models constrained with observational streamflow records. The weighting method is based on the ability of the models to match observed streamflow data while accounting for error covariance between the participating products. To address the lack of observed streamflow for many regions, a dissimilarity method was applied to transfer the weights of the participating products to the ungauged basins from the closest gauged basins using dissimilarity between basins in physiographic and climatic characteristics as a proxy for distance. We perform out-of-sample tests to examine the success of the dissimilarity approach, and we confirm that the weighted product performs better than its 11 constituent products in a range of metrics. Our resulting synthesized global gridded runoff product is available at monthly timescales, and includes time-variant uncertainty, for the period 1980–2012 on a 0.5° grid. The synthesized global gridded runoff product broadly agrees with published runoff estimates at many river basins, and represents the seasonal runoff cycle well for most of the globe. The new product, called Linear Optimal Runoff Aggregate (LORA), is a valuable synthesis of existing runoff products and will be freely available for download at https://geonetwork.nci.org.au/geonetwork/srv/eng/catalog.search#/metadata/f9617_9854_8096_5291 (last access: 31 January 2019).
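The optimal weighting step described in the abstract can be sketched as follows. For a weighted mean whose weights sum to one, minimising the error variance while accounting for error covariance gives the classical solution w = S⁻¹1 / (1ᵀ S⁻¹1), where S is the error covariance matrix of the participating products. This is a minimal sketch under that assumption (the bias-correction and weight-transfer steps of the actual method are omitted, and all names are illustrative):

```python
import numpy as np

def optimal_weights(errors):
    """Weights minimising the variance of the weighted mean error,
    subject to the weights summing to 1: w = S^-1 1 / (1^T S^-1 1),
    where S is the error covariance matrix between the products.

    errors: array of shape (n_times, n_products) holding
    product-minus-observation errors at gauged locations.
    """
    cov = np.cov(errors, rowvar=False)    # error covariance between products
    ones = np.ones(cov.shape[0])
    s_inv_1 = np.linalg.solve(cov, ones)  # S^-1 1 without an explicit inverse
    return s_inv_1 / (ones @ s_inv_1)

# Illustrative use: three synthetic products with different error levels.
rng = np.random.default_rng(0)
truth = rng.normal(size=500)
products = np.stack(
    [truth + rng.normal(scale=s, size=500) for s in (0.5, 1.0, 2.0)], axis=1
)
w = optimal_weights(products - truth[:, None])
merged = products @ w                     # the weighted (merged) estimate
```

Because any single product corresponds to a feasible weight vector, the merged estimate's error variance can never exceed that of the best individual product in-sample, which is the sense in which the combination is "analytically optimal".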


2018 ◽  
Vol 22 (2) ◽  
pp. 1317-1336 ◽  
Author(s):  
Sanaa Hobeichi ◽  
Gab Abramowitz ◽  
Jason Evans ◽  
Anna Ukkola

Abstract. Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000–2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.


2017 ◽  
Author(s):  
Sanaa Hobeichi ◽  
Gabriel Abramowitz ◽  
Jason Evans ◽  
Anna Ukkola

Abstract. Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, as well as being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000–2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data, and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information at the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in three common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.


2005 ◽  
Vol 80 (4) ◽  
pp. 1163-1192 ◽  
Author(s):  
Ranjani Krishnan ◽  
Joan L. Luft ◽  
Michael D. Shields

Performance-measure weights for incentive compensation are often determined subjectively. Determining these weights is a cognitively difficult task, and archival research shows that observed performance-measure weights are only partially consistent with the predictions of agency theory. Ittner et al. (2003) have concluded that psychology theory can help to explain such inconsistencies. In an experimental setting based on Feltham and Xie (1994), we use psychology theories of reasoning to predict distinctive patterns of similarity and difference between optimal and actual subjective performance-measure weights. The following predictions are supported. First, in contrast to a number of prior studies, most individuals' decisions are significantly influenced by the performance measures' error variance (precision) and error covariance. Second, directional errors in the use of these measurement attributes are relatively frequent, resulting in a mean underreaction to an accounting change that alters performance measurement error. Third, individuals seem insufficiently aware that a change in the accounting for one measure has spillover effects on the optimal weighting of the other measure in a two-measure incentive system. In consequence, they make performance-measure weighting decisions that are likely to result in misallocations of agent effort.


2018 ◽  
Vol 22 (8) ◽  
pp. 4593-4604 ◽  
Author(s):  
Yongqiang Zhang ◽  
David Post

Abstract. Gap-filling streamflow data is a critical step for most hydrological studies, such as analyses of streamflow trends, floods, and droughts, and the estimation and prediction of hydrological response variables. However, most hydrological studies lack a quantitative evaluation of the accuracy of the gap-filled data. Here we show that when the missing data rate is less than 10 %, gap-filled streamflow data obtained using calibrated hydrological models perform almost the same as the benchmark data (less than 1 % missing) when estimating annual trends for 217 unregulated catchments widely spread across Australia. Furthermore, the relative streamflow trend bias caused by the gap filling is not very large even in very dry catchments, where hydrological model calibration is normally poor. Our results clearly demonstrate that gap filling using hydrological modelling has little impact on the estimation of annual streamflow and its trends.
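The comparison at the heart of this abstract can be illustrated with a small sketch: fit an annual trend to both the benchmark and the gap-filled series, then express the difference relative to the benchmark trend. The linear-trend choice and all names here are ours, not necessarily the authors' exact method:

```python
import numpy as np

def annual_trend(years, flows):
    """Least-squares linear trend of annual streamflow (flow units per year)."""
    slope, _intercept = np.polyfit(years, flows, 1)
    return slope

def relative_trend_bias(years, benchmark, gap_filled):
    """Relative bias in the annual trend introduced by gap filling."""
    t_bench = annual_trend(years, benchmark)
    return (annual_trend(years, gap_filled) - t_bench) / abs(t_bench)

# Illustrative use: a series rising by 2 units per year over 1980-2012.
years = np.arange(1980, 2013)
benchmark = 100.0 + 2.0 * (years - 1980)
```

A gap-filled series that tracks the benchmark closely yields a relative trend bias near zero, which is the quantity the study evaluates across its 217 catchments.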


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring in a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from the forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model needs in order to reproduce the expected DRPs accurately must be agreed on in advance.
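The ANOVA step above attributes ensemble variance to the individual factors of a factorial experiment. A simplified main-effects-only sketch (the study's actual ANOVA design may differ; all names here are illustrative): for each factor, the between-level sum of squares of skill scores, as a fraction of the total sum of squares, measures how much of the spread that factor explains.

```python
import numpy as np

def main_effect_fractions(results):
    """Fraction of total ensemble variance explained by each factor's
    main effect, for a full factorial ensemble.

    results: n-dimensional array with one axis per factor (e.g. forcing
    dataset x process map x parameterisation), holding a skill score for
    each modelling-chain combination.
    """
    grand = results.mean()
    total_ss = ((results - grand) ** 2).sum()
    fractions = []
    for axis in range(results.ndim):
        other = tuple(a for a in range(results.ndim) if a != axis)
        level_means = results.mean(axis=other)       # mean over all other factors
        n_per_level = results.size // results.shape[axis]
        ss = (n_per_level * (level_means - grand) ** 2).sum()
        fractions.append(ss / total_ss)
    return np.array(fractions)
```

For example, an ensemble whose scores vary only between forcing datasets would assign a fraction of 1.0 to the forcing axis and 0.0 to the rest; interaction terms, which a full ANOVA would also quantify, are ignored in this sketch.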


2020 ◽  
Vol 29 (2) ◽  
pp. 322-326
Author(s):  
Shanhe Wang ◽  
Yu Hua ◽  
Yu Xiang ◽  
Changjiang Huang ◽  
Yuanyuan Gao ◽  
...  

2020 ◽  
Vol 117 (20) ◽  
pp. 10762-10768
Author(s):  
Yang Yang ◽  
Wu Youyou ◽  
Brian Uzzi

Replicability tests of scientific papers show that the majority of papers fail replication. Moreover, failed papers circulate through the literature as quickly as replicating papers. This dynamic weakens the literature, raises research costs, and demonstrates the need for new approaches for estimating a study’s replicability. Here, we trained an artificial intelligence model to estimate a paper’s replicability using ground truth data on studies that had passed or failed manual replication tests, and then tested the model’s generalizability on an extensive set of out-of-sample studies. The model predicts replicability better than the base rate of reviewers and comparably to prediction markets, the best present-day method for predicting replicability. In out-of-sample tests on manually replicated papers from diverse disciplines and methods, the model had strong accuracy levels of 0.65 to 0.78. Exploring the reasons behind the model’s predictions, we found no evidence for bias based on topics, journals, disciplines, base rates of failure, persuasion words, or novelty words like “remarkable” or “unexpected.” We did find that the model’s accuracy is higher when trained on a paper’s text rather than its reported statistics, and that n-grams, higher-order word combinations that humans have difficulty processing, correlate with replication. We discuss how combining human and machine intelligence can raise confidence in research, provide research self-assessment techniques, and create methods that are scalable and efficient enough to review the ever-growing numbers of publications—a task that entails extensive human resources to accomplish with prediction markets and manual replication alone.
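For readers unfamiliar with the term, a word n-gram is a contiguous run of n words from a text. A minimal sketch of extracting them (function name ours, not from the paper):

```python
def word_ngrams(text, n):
    """All contiguous word n-grams of a text, lowercased."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Bigrams of a short phrase:
bigrams = word_ngrams("The results were remarkable", 2)
# -> ["the results", "results were", "were remarkable"]
```

Counts of such n-grams, suitably weighted, are the kind of text features a replicability classifier can be trained on, which is what lets the model exploit word combinations that human reviewers have difficulty tracking.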


2019 ◽  
Vol 11 (1) ◽  
pp. 111
Author(s):  
Calvin W. H. Cheong ◽  
Sockalingam Ramasamy

Bank failures are costly to customers and the wider market. Prevention is always better than cure, but in light of recent economic downturns, it has become increasingly difficult for regulators to allocate more resources towards in-depth monitoring of banking practices. In this paper, we construct a tool that is able to predict bank failures ahead of time with reasonable accuracy. Through a logistic regression on a matched sample of 536 failed and non-failed US banks, we determine the financial indicators that most accurately predict bank failure. From the regression, we construct a Bank Health Index that assesses a bank’s propensity to failure. In-sample and out-of-sample tests show that our model is about 90% accurate two years prior to failure, and 95% accurate the year before failure. The accuracy and efficiency of the model and index provide a more effective tool for assessing a bank’s propensity to failure while requiring far fewer resources. With these methods, regulators will be able to take preventive measures at least one year before failure, saving the economy millions if not billions in the process.
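The modelling pipeline described above can be sketched in a few lines: fit a logistic regression of failure on financial indicators, then convert the predicted failure probability into a health index. This is a toy sketch with synthetic, separable data; the indicators, sample, and index construction in the paper itself differ:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit logistic regression P(failure) = sigmoid(X @ w + b) by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted failure probability
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of the log-loss
        b -= lr * (p - y).mean()
    return w, b

def health_index(X, w, b):
    """Toy Bank Health Index: 100 * (1 - predicted failure probability)."""
    p_fail = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return 100.0 * (1.0 - p_fail)

# Synthetic matched sample: two standardized indicators (e.g. capital ratio,
# non-performing-loan ratio); y = 1 marks a failed bank.
X = np.array([[1.0, -1.0], [0.8, -0.5], [-1.0, 1.0], [-0.7, 0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
index = health_index(X, w, b)
```

After fitting, healthy banks score a higher index than failed ones, and a threshold on the index plays the role of the early-warning trigger described in the abstract.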

