2019 ◽  
Vol 270 ◽  
pp. 04015
Author(s):  
Edy Anto Soentoro ◽  
Nina Pebriana

Reservoir operations, especially those that regulate the outflow (release) volume, are crucial to fulfilling the purpose for which the reservoir was built. To get the best results, outflow (release) discharges need to be optimized to meet the objectives of the reservoir operation. A fuzzy rule-based model was used in this study because it can deal with uncertainty and with constraints and objects that lack clear, well-defined boundaries. The objective of this study is to determine the maximum total release volume based on water availability (i.e., a monthly release equal to or greater than the monthly demand). The case study is the Darma reservoir. A fuzzy rule-based model was used to optimize the monthly release volume, and the result was compared with that of nonlinear programming (NLP) and with the demand. The Sugeno fuzzy method was used to generate fuzzy rules from a given input-output data set consisting of demand, inflow, storage, and release. The results of this study showed that the release of the Sugeno method and the demand follow the same basic pattern, in which the release fulfills the demand. The overall result showed that a fuzzy rule-based model with the Sugeno method can be used for optimization based on the real-life experience of experts who are accustomed to working in the field.
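The weighted-average inference at the heart of the Sugeno method can be sketched in a few lines. The sketch below is a generic first-order Takagi-Sugeno inference with two hypothetical rules; the membership parameters and rule consequents are illustrative stand-ins, not the calibrated values from this study.

```python
import math

def gauss(x, mean, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno_release(inflow, storage, demand):
    """First-order Sugeno inference for a monthly release volume.

    Two illustrative rules (all parameters are hypothetical):
      R1: IF inflow is LOW  AND storage is LOW  THEN release = demand
      R2: IF inflow is HIGH AND storage is HIGH THEN release = demand + 0.3*inflow
    """
    # Firing strengths: min over the antecedent membership degrees
    w1 = min(gauss(inflow, 20, 10), gauss(storage, 30, 15))
    w2 = min(gauss(inflow, 60, 10), gauss(storage, 80, 15))
    z1 = demand                   # consequent of R1: just meet demand
    z2 = demand + 0.3 * inflow    # consequent of R2: release the surplus
    # Sugeno defuzzification: weighted average of the rule consequents
    return (w1 * z1 + w2 * z2) / (w1 + w2)
```

With low inflow and storage the output stays near the demand; with high inflow and storage the surplus rule dominates and the release rises above demand.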


2004 ◽  
Vol 65 (3) ◽  
pp. 273-288
Author(s):  
Dimosthenis Anagnostopoulos ◽  
Vassilis Dalakas ◽  
Mara Nikolaidou

2011 ◽  
Vol 21 (03) ◽  
pp. 247-263 ◽  
Author(s):  
J. P. FLORIDO ◽  
H. POMARES ◽  
I. ROJAS

In function approximation problems, one of the most common ways to evaluate a learning algorithm is to partition the original data set (input/output data) into two sets: a learning set, used for building models, and a test set, used for genuine out-of-sample evaluation. When the partition into learning and test sets does not take into account the variability and geometry of the original data, it can lead to unbalanced and unrepresentative learning and test sets and, thus, to wrong conclusions about the accuracy of the learning algorithm. How the partitioning is made is therefore a key issue, and it becomes more important when the data set is small, due to the need to reduce the pessimistic effects caused by removing instances from the original data set. In this work, we propose a deterministic data-mining approach that distributes a data set (input/output data) into two representative, balanced sets of roughly equal size, taking the variability of the data into consideration. This allows both a fair evaluation of the learning algorithm's accuracy and reproducible machine learning experiments, which are usually based on random distributions. The sets are generated using a combination of a clustering procedure, especially suited for function approximation problems, and a distribution algorithm that splits the data within each cluster based on a nearest-neighbor approach. In the experiments section, the performance of the proposed methodology is reported in a variety of situations through an ANOVA-based statistical study of the results.
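A minimal sketch of the cluster-then-distribute idea: within each cluster, points are ordered by distance to the centroid and dealt alternately into the two sets, so both sets sample each cluster's spread. The alternating rule is a simplified stand-in for the paper's nearest-neighbor distribution algorithm, and the clustering step is assumed to have been done already.

```python
def split_within_cluster(points):
    """Distribute one cluster's points into two halves of near-equal size.

    Points are sorted by squared distance to the cluster centroid and
    assigned alternately, so both halves cover the cluster similarly.
    """
    d = len(points[0])
    centroid = [sum(p[i] for p in points) / len(points) for i in range(d)]
    sq_dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, centroid))
    ordered = sorted(points, key=sq_dist)
    return ordered[0::2], ordered[1::2]

def balanced_split(clusters):
    """Deterministic learning/test partition over all clusters."""
    learning, testing = [], []
    for cluster in clusters:
        l, t = split_within_cluster(cluster)
        learning.extend(l)
        testing.extend(t)
    return learning, testing
```

Because no random numbers are involved, repeated runs on the same data yield the same partition, which is the reproducibility property the paper targets.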


2017 ◽  
Author(s):  
Bernardo A. Mello ◽  
Yuhai Tu

Deciphering molecular mechanisms in biological systems from system-level input-output data is challenging, especially for complex processes that involve interactions among multiple components. Here, we study regulation of the multi-domain (P1-5) histidine kinase CheA by the MCP chemoreceptors. We develop a network model to describe the dynamics of the system, treating the receptor complex with CheW and the P3P4P5 domains of CheA as a regulated enzyme with two substrates, P1 and ATP. The model enables us to search the hypothesis space systematically for the simplest possible regulation mechanism consistent with the available data. Our analysis reveals a novel dual-regulation mechanism wherein, besides regulating ATP binding, the receptor activity has to regulate one other key reaction: either P1 binding or phosphotransfer between P1 and ATP. Furthermore, our study shows that the receptors only control the kinetic rates of the enzyme without changing its equilibrium properties. Predictions are made for future experiments to distinguish between the two remaining dual-regulation mechanisms. This systems-biology approach of combining modeling with a large input-output data set should be applicable to studying other complex biological processes.


Micromachines ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1390
Author(s):  
Khalid A. Alattas ◽  
Ardashir Mohammadzadeh ◽  
Saleh Mobayen ◽  
Ayman A. Aly ◽  
Bassem F. Felemban ◽  
...  

In this study, a novel data-driven control scheme is presented for MEMS gyroscopes (MEMS-Gs). Uncertainties are tackled by the suggested type-3 fuzzy system with non-singleton fuzzification (NT3FS). Besides the dynamic uncertainties, the suggested NT3FS can also handle input measurement errors. The rules of the NT3FS are tuned online to better compensate for disturbances. A data-driven scheme is designed from the input-output data set, and a new LMI set is presented to ensure stability. Several simulations and comparisons demonstrate the superiority of the introduced control scheme.


2019 ◽  
Vol 2 (1) ◽  
pp. 13-32 ◽  
Author(s):  
Fred-Johan Pettersen ◽  
Jan Olav Høgetveit

Abstract Tools such as Simpleware ScanIP+FE and COMSOL Multiphysics allow us to gain a better understanding of bioimpedance measurements without actually performing the measurements. This tutorial covers the steps needed to go from a 3D voxel data set to a model that can be used to simulate a transfer impedance measurement. The geometrical input data used in this tutorial are from an MRI scan of a human thigh, which is converted to a mesh using Simpleware ScanIP+FE. The mesh is merged with the electrical properties of the relevant tissues, and a simulation is performed in COMSOL Multiphysics. The available numerical output data are the transfer impedance, the contributions of different tissues to the final transfer impedance, and the voltages at the electrodes. The available volume output data are normal and reciprocal current densities, potential, sensitivity, and volume impedance sensitivity. The output data are presented as both numbers and graphs. The tutorial will be useful even if data from other sources, such as VOXEL-MAN or CT scans, are used.


2021 ◽  
Author(s):  
Matthias Schneider ◽  
Benjamin Ertl ◽  
Christopher J. Diekmann ◽  
Farahnaz Khosrawi ◽  
Andreas Weber ◽  
...  

Abstract. IASI (Infrared Atmospheric Sounding Interferometer) is the core instrument of the three current Metop (Meteorological Operational) satellites of EUMETSAT (European Organization for the Exploitation of Meteorological Satellites). The MUSICA IASI processing was developed in the framework of the European Research Council project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water). The processor performs an optimal estimation of the vertical distributions of water vapour (H2O), the ratio between two water vapour isotopologues (the HDO/H2O ratio), nitrous oxide (N2O), methane (CH4), and nitric acid (HNO3), and works with IASI radiances measured under cloud-free conditions in the spectral window between 1190 and 1400 cm−1. The retrieval of the trace gas profiles is performed on a logarithmic scale, which allows the constraint and analytic treatment of ln[HDO] − ln[H2O] as a proxy for the HDO/H2O ratio. The MUSICA IASI processing has so far been applied to all IASI measurements available between October 2014 and April 2020, amounting to more than 1.4 billion individual retrievals. Here we describe the MUSICA IASI full retrieval product data set. The data set is made available in the form of netCDF files compliant with version 1.7 of the CF (Climate and Forecast) metadata convention. An individual standard output data file is provided for each orbit. These files contain, for each individual retrieval, information on the a priori usage and constraint, the retrieved atmospheric trace gas and temperature profiles, profiles of the leading error components, and information on vertical representativeness in the form of the averaging kernels, as well as averaging kernel metrics, which are handier than the full kernels. We discuss data filtering options and give examples of the high horizontal and continuous temporal coverage of the MUSICA IASI data products.
The standard output data files provide comprehensive information for each individual retrieval, resulting in a rather large data volume (about 25 TB for the more than five years of data with global daily coverage). This apparent drawback of large files and data volume is counterbalanced by the multiple possibilities for data reuse, which are briefly discussed. An extended output data file provides the same variables as the standard output data files, together with Jacobians for many different uncertainty sources and gain matrices (it is called the extended output because of these additional variables). It is limited to 74 observations over a polar, a mid-latitudinal, and a tropical site. We use these additional Jacobian and gain data to assess the typical impact of different uncertainty sources, such as surface emissivity or spectroscopic parameters, and of different cloud types on the retrieval results. We offer two data packages with DOIs for free download via the RADAR4KIT repository. The first data package has a data volume of about 17.5 GB and is linked to https://doi.org/10.35097/408 (Schneider et al., 2021b). It contains example standard output data files for all MUSICA IASI retrievals made on a single day (more than 0.6 million). Furthermore, it includes a ReadMe.pdf file describing how to access the total data set (the 25 TB) or parts of it. This data package is for users interested in the typical global daily data coverage and in how to download large volumes of global daily data for longer periods. The second data package is linked to https://doi.org/10.35097/412 (Schneider et al., 2021a) and contains the extended output data file. Because it provides data for only 74 example retrievals, its data volume is only 73 MB, and it is thus recommended to users who want a quick look at the data.


Geophysics ◽  
2005 ◽  
Vol 70 (1) ◽  
pp. S1-S17 ◽  
Author(s):  
Alison E. Malcolm ◽  
Maarten V. de Hoop ◽  
Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.
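The operator composition described above can be illustrated schematically in finite dimensions, where each operator becomes a matrix and the composed continuation operator is a single matrix product. The matrices below are toy stand-ins (identity, uniform scaling, and a row-dropping restriction), not actual imaging, modeling, or restriction operators.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def apply(A, x):
    """Apply a matrix (operator) to a data vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# Toy stand-ins for the three operators (hypothetical, for illustration only):
I_img = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # "imaging" operator (identity here)
M_mod = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]   # "modeling" operator (uniform scaling)
R_res = [[1, 0, 0], [0, 1, 0]]              # "restriction" operator (keeps a subset)

# Composing once yields a single operator mapping input data directly to output data.
C = matmul(R_res, matmul(M_mod, I_img))

d_in = [1.0, 2.0, 3.0]
d_out = apply(C, d_in)   # identical to applying R after M after I in turn
```

The point of the sketch is only the structural one made in the abstract: composing the three operators beforehand gives a single operator that maps input data directly to the desired output data.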


2008 ◽  
Vol 8 (5) ◽  
pp. 17581-17629
Author(s):  
N. Theys ◽  
M. Van Roozendael ◽  
Q. Errera ◽  
F. Hendrick ◽  
F. Daerden ◽  
...  

Abstract. A new climatology of stratospheric BrO profiles based on dynamical and chemical indicators has been developed, with the aim of applying it to the retrieval of tropospheric BrO columns from space nadir measurements. The suitability of the adopted parameterization is evaluated using three years of output data from the 3-D chemistry transport model BASCOE. The impact of atmospheric dynamics on the stratospheric BrO distribution is treated by means of Bry/ozone correlations built from 3-D CTM model results, while photochemical effects are taken into account using stratospheric NO2 columns as an indicator of the BrO/Bry ratio. The model simulations have been optimized for bromine chemistry and budget, and validated through comparisons with an extensive data set of ground-based, balloon-borne, and satellite limb (SCIAMACHY) stratospheric BrO observations.

