Identification of behavioural model input data sets for WWTP uncertainty analysis

2019 ◽  
Vol 81 (8) ◽  
pp. 1558-1568 ◽  
Author(s):  
E. Lindblom ◽  
U. Jeppsson ◽  
G. Sin

Abstract Uncertainty analysis is important for wastewater treatment plant (WWTP) model applications. An important aspect of uncertainty analysis is the identification and proper quantification of the sources of uncertainty. In this contribution, a methodology to identify an ensemble of behavioural model representations (combinations of input data, model structure and parameter values) is presented and evaluated. The outcome is a multivariate conditional distribution of input data that is used to generate samples of likely inputs (e.g. Monte Carlo input samples) for WWTP model uncertainty analysis. The article presents an approach to verify the uncertainty distributions of input data (which are otherwise often simply assumed) using historical observations and actual plant data.
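The core sampling step can be sketched in a few lines; this is a minimal illustration under assumed numbers (synthetic "plant data", a bivariate normal standing in for the conditional input distribution, and a trivial effluent-load model), not the paper's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for historical plant observations:
# influent flow (m3/d) and COD (mg/L), negatively correlated.
historical = rng.multivariate_normal(
    mean=[20000.0, 400.0],
    cov=[[4.0e6, -3.0e4], [-3.0e4, 2.5e3]],
    size=500,
)

# Fit a multivariate normal to the observations (a stand-in for the
# behavioural conditional distribution identified in the paper).
mu = historical.mean(axis=0)
cov = np.cov(historical, rowvar=False)

# Draw Monte Carlo input samples and propagate them through a toy model
# (effluent load proxy = flow * COD); the spread quantifies input uncertainty.
samples = rng.multivariate_normal(mu, cov, size=1000)
load = samples[:, 0] * samples[:, 1] / 1000.0  # kg COD per day

print(load.mean(), load.std())
```

Conditioning the fitted distribution on actual plant data, as the paper does, replaces the naive fit above but leaves the sampling and propagation steps unchanged.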

2018 ◽  
Vol 14 (04) ◽  
pp. 43
Author(s):  
Zhang Xueya ◽  
Jianwei Zhang

A new method for big data analysis, the multi-granularity generalized functions data model (MGGF), is put forward. The method adopts a dynamic adaptive multi-granularity clustering technique, transforms the grid-like "hard partitioning" of the input data space used by the generalized functions data model (GFDM) into a multi-granularity partitioning, and identifies multi-granularity pattern classes in the input data space. The MGGF model is established by defining the type of mapping relationship between the multi-granularity pattern classes and the decision-making category, f_type: Ci → y, and the concept of the degree of fulfillment, DoF(x), of the input data with respect to the classification rules of the various pattern classes. Experimental results on different data sets show that, compared with the GFDM method, the proposed method has better data summarization ability, stronger noisy-data processing ability and higher search efficiency.
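The DoF-based classification idea can be illustrated with a small sketch; the Gaussian membership form, the class centres/spreads and the label mapping below are assumptions chosen for illustration, not the MGGF definitions:

```python
import numpy as np

# Hypothetical pattern classes from a multi-granularity clustering step:
# each class is summarised by a centre and a per-dimension spread.
centres = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
spreads = np.array([[1.0, 1.0], [1.5, 1.0], [1.0, 0.8]])
labels = ["A", "B", "B"]  # mapping f_type: Ci -> y (illustrative)

def degree_of_fulfillment(x):
    """Membership of x in each pattern class (product of Gaussian factors;
    an assumed form for DoF(x), not the paper's exact definition)."""
    z = (x - centres) / spreads
    return np.exp(-0.5 * (z ** 2).sum(axis=1))

def classify(x):
    """Decision category of the pattern class with the highest DoF."""
    return labels[int(np.argmax(degree_of_fulfillment(x)))]

print(classify(np.array([0.2, -0.1])))  # near the first centre -> "A"
```

An input near a class centre receives a DoF close to 1 for that class and near 0 elsewhere, so the argmax decision degrades gracefully for inputs between granules.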


2004 ◽  
Vol 61 (6) ◽  
pp. 1032-1047 ◽  
Author(s):  
Catherine GJ Michielsens ◽  
Murdoch K McAllister

Stock–recruit functions are important in fisheries stock assessment, but there is often uncertainty about the appropriate stock–recruit model and its parameter values. Combining different stock–recruit data sets of related species through Bayesian hierarchical analysis can decrease these uncertainties and help to characterize appropriate stock–recruit forms and ranges of plausible parameter values. Two different stock–recruit functions (Beverton–Holt and Ricker) have been parameterized in terms of steepness, a parameter that is comparable between populations. In the hierarchical analysis, the prior probability distribution for the cross-population variation in steepness is determined through a concise model structure. Model uncertainty is accounted for by calculating Bayes posteriors for the alternative model forms. The methodology is applied to Atlantic salmon (Salmo salar) stock–recruit data to predict the steepness of the stock–recruit function for Baltic salmon, for which no stock–recruit data exist.
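The steepness parameterisations of the two functions are standard and can be written out directly; `h`, `R0` and `S0` below denote steepness, unfished recruitment and unfished spawner abundance, and the numerical values are illustrative:

```python
# Stock-recruit functions in steepness form: h is the fraction of unfished
# recruitment R0 produced when spawners drop to 20% of the unfished level S0.
# This parameterisation makes h comparable across populations, which is what
# enables the hierarchical borrowing of strength across data sets.

def beverton_holt(S, h, R0, S0):
    """Beverton-Holt recruitment in steepness form (0.2 < h < 1)."""
    return 4.0 * h * R0 * S / (S0 * (1.0 - h) + (5.0 * h - 1.0) * S)

def ricker(S, h, R0, S0):
    """Ricker recruitment in steepness form (h may exceed 1 for Ricker)."""
    return (S / S0) * R0 * (5.0 * h) ** (1.25 * (1.0 - S / S0))

# Both forms return R0 at S = S0 and h * R0 at S = 0.2 * S0 by construction.
h, R0, S0 = 0.7, 1000.0, 5000.0
print(beverton_holt(0.2 * S0, h, R0, S0))  # -> 700.0
print(ricker(0.2 * S0, h, R0, S0))         # -> 700.0
```

Because both curves pass through the same two anchor points by construction, a hierarchical prior on h transfers directly between the two model forms.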


1999 ◽  
Vol 39 (4) ◽  
pp. 55-60 ◽  
Author(s):  
J. Alex ◽  
R. Tschepetzki ◽  
U. Jumar ◽  
F. Obenaus ◽  
K.-H. Rosenwinkel

Activated sludge models are widely used for the planning and optimisation of wastewater treatment plants, and online applications to support the operation of complex treatment plants are under development. A proper model is crucial for all of these applications. Parameter calibration has been the focus of several papers and applications, but an essential precondition for this task, an appropriately defined model structure, is often given much less attention. Different model structures for a large-scale treatment plant with circulation flow are discussed in this paper, and a more systematic method to derive a suitable model structure is applied to this case, using the results of a numerical hydraulic model. The importance of these efforts is demonstrated by the high sensitivity of the simulation results to the selection of the model structure and the hydraulic conditions. Finally, it is shown that model calibration was possible by adjusting the hydraulic behaviour alone, without any changes to the biological parameters.
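A tanks-in-series chain with recirculation is a common way to encode such a hydraulic model structure; the sketch below (with assumed flows and volumes, not the plant of the paper) shows how strongly the simulated tracer response depends on the chosen number of tanks:

```python
import numpy as np

def tracer_response(n_tanks, q_in=1000.0, q_recirc=4000.0, v_total=5000.0,
                    dt=0.005, t_end=10.0):
    """Outlet concentration over time for a unit-mass tracer pulse routed
    through n_tanks equal CSTRs with recirculation (explicit Euler)."""
    v = v_total / n_tanks             # volume of each tank
    flow = q_in + q_recirc            # flow circulating through the chain
    c = np.zeros(n_tanks)
    c[0] = 1.0 / v                    # unit tracer mass injected into tank 1
    out = []
    for _ in range(int(t_end / dt)):
        inflow = np.roll(c, 1) * flow
        inflow[0] = c[-1] * q_recirc  # only the recirculated part returns
        c = c + dt * (inflow - c * flow) / v
        out.append(c[-1])
    return np.array(out)

# Peak timing and height of the outlet response change markedly with the
# assumed number of tanks, i.e. the simulation is highly sensitive to the
# chosen hydraulic model structure.
r2, r8 = tracer_response(2), tracer_response(8)
print(r2.argmax(), r8.argmax())
```

Fitting such a response to measured tracer or numerical-hydraulic-model data fixes the structure before any biological parameters are touched, which is the ordering the paper argues for.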


2016 ◽  
Vol 3 (1) ◽  
Author(s):  
LAL SINGH ◽  
PARMEET SINGH ◽  
RAIHANA HABIB KANTH ◽  
PURUSHOTAM SINGH ◽  
SABIA AKHTER ◽  
...  

WOFOST version 7.1.3 is a computer model that simulates the growth and production of annual field crops. All run options are operated through a graphical user interface named WOFOST Control Center version 1.8 (WCC). WCC facilitates selecting the production level and the input data sets on crop, soil, weather, crop calendar, hydrological field conditions and soil fertility parameters, as well as the output options. The files with crop, soil and weather data are explained, as well as the run files and the output files. A general overview is given of the development and applications of the model, and its underlying concepts are discussed briefly.


2007 ◽  
Vol 56 (6) ◽  
pp. 75-83 ◽  
Author(s):  
X. Flores ◽  
J. Comas ◽  
I.R. Roda ◽  
L. Jiménez ◽  
K.V. Gernaey

The main objective of this paper is to present the application of selected multivariate statistical techniques to the analysis of plant-wide wastewater treatment plant (WWTP) control strategies. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulating several control strategies on the plant-wide IWA Benchmark Simulation Model No. 2 (BSM2). These techniques make it possible to i) determine natural groups or clusters of control strategies with similar behaviour, ii) find and interpret hidden, complex and causal relationships in the data set, and iii) identify the important discriminant variables within the groups found by the cluster analysis. The study illustrates the usefulness of multivariate statistical techniques for the analysis and interpretation of complex multicriteria data sets and enables an improved use of the information for effective evaluation of control strategies.
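The PCA and clustering steps can be sketched on a toy evaluation matrix; the criteria, the two synthetic strategy families and the sign-based cluster split below are illustrative stand-ins for the CA/PCA/FA analysis of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evaluation matrix: rows = control strategies, columns =
# evaluation criteria (e.g. effluent quality, aeration energy, pumping cost).
X = np.vstack([
    rng.normal([10.0, 5.0, 2.0], 0.3, size=(5, 3)),  # strategy family 1
    rng.normal([7.0, 9.0, 4.0], 0.3, size=(5, 3)),   # strategy family 2
])

# PCA via SVD of the standardised matrix: the leading components reveal the
# dominant (possibly hidden) relations between the evaluation criteria.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                        # strategy coordinates in PC space
explained = s ** 2 / (s ** 2).sum()   # variance explained per component

# Minimal cluster step: split strategies by the sign of the first PC score,
# standing in for a proper cluster analysis.
clusters = (scores[:, 0] > 0).astype(int)
print(explained.round(2), clusters)
```

With well-separated families the first component carries most of the variance and the sign split recovers the groups; on real BSM2 output a proper CA algorithm would replace the sign rule.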


2020 ◽  
Vol 16 (3) ◽  
pp. 1061-1074 ◽  
Author(s):  
Jörg Franke ◽  
Veronika Valler ◽  
Stefan Brönnimann ◽  
Raphael Neukom ◽  
Fernando Jaume-Santero

Abstract. Differences between paleoclimatic reconstructions are caused by two factors: the method and the input data. While many studies compare methods, this study focuses on the consequences of the choice of input data in a state-of-the-art Kalman-filter paleoclimate data assimilation approach. We evaluate reconstruction quality in the 20th century based on three collections of tree-ring records: (1) 54 of the best temperature-sensitive tree-ring chronologies chosen by experts; (2) 415 temperature-sensitive tree-ring records chosen less strictly by regional working groups and statistical screening; (3) 2287 tree-ring series that are not screened for climate sensitivity. The three data sets cover the range from small sample size, small spatial coverage and strict screening for temperature sensitivity to large sample size and spatial coverage but no screening. Additionally, we explore a combination of these data sets plus screening methods to improve the reconstruction quality. A large, unscreened collection generally leads to poor reconstruction skill. A small expert selection of extratropical Northern Hemisphere records allows for a skillful high-latitude temperature reconstruction but cannot be expected to provide information for other regions and variables. We achieve the best reconstruction skill across all variables and regions by combining all available input data but rejecting records with insignificant climatic information (p value of the regression model >0.05) and removing duplicate records. It is important to use a tree-ring proxy system model that includes both major growth limitations, temperature and moisture.
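The screening rule (reject records with p > 0.05 against the target and drop duplicates) can be sketched as follows; the series are synthetic and the fixed critical t value is an approximation for df = 98, so this is an illustration rather than the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: an instrumental temperature series and a pool of
# tree-ring records, some temperature-sensitive, some pure noise, plus one
# exact duplicate. All names and numbers are illustrative.
n = 100
temp = rng.normal(size=n)
sensitive = [0.7 * temp + rng.normal(scale=0.7, size=n) for _ in range(3)]
noise = [rng.normal(size=n) for _ in range(3)]
records = sensitive + noise + [sensitive[0].copy()]  # last one: duplicate

def significant(x, y, t_crit=1.984):
    """Two-sided t-test on the regression slope; t_crit approximates the
    p = 0.05 threshold at df = 98."""
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((len(x) - 2) / (1.0 - r ** 2))
    return abs(t) > t_crit

# Screening as described in the abstract: drop records with insignificant
# climatic information (p > 0.05) and remove duplicate records.
kept = []
for rec in records:
    if not significant(rec, temp):
        continue  # climatically insignificant
    if any(np.allclose(rec, k) for k in kept):
        continue  # duplicate record
    kept.append(rec)

print(len(kept))
```

The temperature-sensitive series pass easily (|t| around 10 at this correlation), the duplicate is always removed, and pure-noise series only slip through at the nominal 5% rate.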


2014 ◽  
Vol 11 (4) ◽  
pp. 597-608
Author(s):  
Dragan Antic ◽  
Miroslav Milovanovic ◽  
Stanisa Peric ◽  
Sasa Nikolic ◽  
Marko Milojkovic

The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. The parameters within each category were reduced using the principal component analysis method. Component interdependencies are established and the relations between them are formed; these newly formed relations are used to create the input vectors of the neural network. A multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented, and it is concluded that the input data preparation method is an effective way to preprocess neural network data.
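The per-category PCA reduction can be sketched as follows; the six categories, the 12 indicators per category and the 80% variance target are assumed numbers, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup mirroring the abstract: six categories of economic
# indicators observed monthly. PCA reduces each category to a few leading
# components, and the concatenated components form the network's inputs.
n_obs = 120
categories = {f"category_{i}": rng.normal(size=(n_obs, 12)) for i in range(6)}

def reduce_category(X, var_target=0.8):
    """Return the principal-component scores explaining var_target of the
    variance of the standardised category matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, var_target) + 1)
    return U[:, :k] * s[:k]  # scores of the k leading components

# One row per observation; fewer columns than the 72 raw parameters.
inputs = np.hstack([reduce_category(X) for X in categories.values()])
print(inputs.shape)
```

Each row of `inputs` is one training vector for the feed-forward network; the reduction also decorrelates the indicators within a category, which tends to stabilise batch training.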


2015 ◽  
Vol 12 (8) ◽  
pp. 7437-7467 ◽  
Author(s):  
J. E. Reynolds ◽  
S. Halldin ◽  
C. Y. Xu ◽  
J. Seibert ◽  
A. Kauffeldt

Abstract. Concentration times in small and medium-sized watersheds (~100–1000 km2) are commonly less than 24 h. Flood-forecasting models then require data at sub-daily time scales, but time series of input and runoff data of sufficient length are often only available at the daily time scale, especially in developing countries. This has led to a search for time-scale relationships to infer parameter values at the time scales where they are needed from the time scales where they are available. In this study, time-scale dependencies in the HBV-light conceptual hydrological model were assessed within the generalized likelihood uncertainty estimation (GLUE) approach. It was hypothesised that such dependencies result from the numerical method or time-stepping scheme used in the model rather than from a real dependence on the time scale of the data. Inferred parameter values showed a clear dependence on time scale when the explicit Euler method was used with the same time steps as the time scale of the input data (1–24 h). However, the dependence almost fully disappeared when the explicit Euler method was used with internal 1 h time steps, irrespective of the time scale of the input data. In other words, when an adequate time-stepping scheme was implemented, parameter sets inferred at one time scale (e.g., daily) could be used directly for runoff simulations at other time scales (e.g., 3 or 6 h) without any time scaling, with only a small (if any) decrease in model performance in terms of Nash–Sutcliffe and volume-error efficiencies. The overall results indicate that as soon as sub-daily driving data can be secured, flood forecasting in watersheds with sub-daily concentration times is possible with model-parameter values inferred from long time series of daily data, as long as an appropriate numerical method is used.
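The time-stepping point can be reproduced with a one-parameter linear reservoir; this is a generic sketch (not HBV-light): the same recession parameter k behaves differently under one explicit Euler step per day versus 24 internal 1 h sub-steps:

```python
# Linear reservoir dS/dt = P - k*S, Q = k*S, integrated with the explicit
# Euler method. With one step per data interval the effective recession
# depends on that interval; with fixed 1 h internal sub-steps it does not.

def simulate(precip, k=0.5, dt_data=24.0, dt_internal=1.0, s0=10.0):
    """Explicit Euler with internal sub-steps; k is per day, dt in hours."""
    s, q = s0, []
    n_sub = int(dt_data / dt_internal)
    for p in precip:  # p: mean input rate over the data interval (per day)
        for _ in range(n_sub):
            s += (dt_internal / 24.0) * (p - k * s)
        q.append(k * s)  # record outflow at the end of each data interval
    return q

rain = [5.0, 0.0, 0.0, 0.0, 0.0]  # five days of forcing

coarse = simulate(rain, dt_internal=24.0)  # one Euler step per day
fine = simulate(rain, dt_internal=1.0)     # 24 internal sub-steps per day

# Identical k, different recessions: one step per day decays storage by the
# factor (1 - k) per day, while sub-stepping approaches exp(-k) per day.
print(coarse[-1], fine[-1])
```

Calibrating k against daily data with the coarse scheme would therefore yield a biased, time-scale-dependent value, whereas the sub-stepped scheme lets a daily-calibrated k be reused at sub-daily scales, which is the study's central finding.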

