An m-fold-decimation-based technique for model validation using a single system output data set

2004 ◽  
Vol 65 (3) ◽  
pp. 273-288
Author(s):  
Dimosthenis Anagnostopoulos ◽  
Vassilis Dalakas ◽  
Mara Nikolaidou
Author(s):  
Wilfried Mirschel ◽  
Karl-Otto Wenkel ◽  
Martin Wegehenkel ◽  
Kurt Christian Kersebaum ◽  
Uwe Schindler ◽  
...  

2019 ◽  
Vol 270 ◽  
pp. 04015
Author(s):  
Edy Anto Soentoro ◽  
Nina Pebriana

Reservoir operations, especially those that regulate the outflow (release) volume, are crucial for fulfilling the purpose for which the reservoir was built. To get the best results, outflow (release) discharges need to be optimized to meet the objectives of the reservoir operation. A fuzzy rule-based model was used in this study because it can handle uncertain constraints and objects without clear or well-defined boundaries. The objective of this study is to determine the maximum total release volume based on water availability (i.e., a monthly release equal to or greater than the monthly demand). The case study is the Darma reservoir. A fuzzy rule-based model was used to optimize the monthly release volume, and the result was compared with that of nonlinear programming (NLP) and with the demand. The Sugeno fuzzy method was used to generate fuzzy rules from a given input-output data set consisting of demand, inflow, storage, and release. The results of this study showed that the release from the Sugeno method and the demand follow the same basic pattern, in which the release fulfills the demand. The overall result showed that a fuzzy rule-based model with the Sugeno method can be used for optimization based on the real-life experience of experts who are accustomed to working in the field.
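The Sugeno (TSK) inference the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's calibrated model: the membership-function breakpoints, rule base, and linear consequent coefficients below are all made-up placeholders, and only two of the four inputs (inflow and storage) are used.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b; a == b or b == c
    gives a left/right shoulder (membership 1 at the boundary)."""
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a) if b > a else 1.0
    if x > b:
        return (c - x) / (c - b) if c > b else 1.0
    return 1.0

def sugeno_release(inflow, storage):
    """First-order Sugeno inference: each rule pairs fuzzy antecedents on
    inflow and storage with a crisp linear consequent for the monthly
    release volume; the output is the firing-strength-weighted average.
    All numbers are illustrative, not from the Darma reservoir study."""
    rules = [
        # (inflow MF (a,b,c), storage MF (a,b,c), consequent (p, q, r))
        ((0, 0, 50),     (0, 0, 60),     (0.4, 0.2, 5.0)),   # low,  low
        ((0, 50, 100),   (0, 60, 120),   (0.6, 0.3, 10.0)),  # med,  med
        ((50, 100, 100), (60, 120, 120), (0.8, 0.5, 15.0)),  # high, high
    ]
    num = den = 0.0
    for mf_in, mf_st, (p, q, r) in rules:
        w = min(tri(inflow, *mf_in), tri(storage, *mf_st))  # firing strength
        num += w * (p * inflow + q * storage + r)           # rule consequent
        den += w
    return num / den if den > 0 else 0.0
```

With shoulder memberships, the extreme inputs fire exactly one rule, so the output reduces to that rule's linear consequent; intermediate inputs blend neighboring rules.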


2011 ◽  
Vol 21 (03) ◽  
pp. 247-263 ◽  
Author(s):  
J. P. FLORIDO ◽  
H. POMARES ◽  
I. ROJAS

In function approximation problems, one of the most common ways to evaluate a learning algorithm is to partition the original data set (input/output data) into two sets: a learning set, used for building models, and a test set, used for genuine out-of-sample evaluation. When the partition into learning and test sets does not take into account the variability and geometry of the original data, it may lead to unbalanced and unrepresentative learning and test sets and, thus, to wrong conclusions about the accuracy of the learning algorithm. How the partitioning is made is therefore a key issue, and it becomes more important when the data set is small, due to the need to reduce the pessimistic effects caused by removing instances from the original data set. In this work, we therefore propose a deterministic data mining approach for distributing a data set (input/output data) into two representative and balanced sets of roughly equal size, taking the variability of the data set into consideration, with the purpose of both allowing a fair evaluation of the learning algorithm's accuracy and making reproducible the machine learning experiments that are usually based on random distributions. The sets are generated using a combination of a clustering procedure, especially suited for function approximation problems, and a distribution algorithm that distributes the data within each cluster into the two sets based on a nearest-neighbor approach. In the experiments section, the performance of the proposed methodology is reported in a variety of situations through an ANOVA-based statistical study of the results.
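The cluster-then-distribute idea can be sketched deterministically as below. This is a simplified stand-in, not the authors' algorithm: it uses plain Lloyd's k-means for the clustering step and, within each cluster, alternates points (ordered by distance to the cluster mean) between the two sets, rather than the paper's nearest-neighbor distribution.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means with a seeded, hence reproducible, init --
    a stand-in for the paper's function-approximation-oriented clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.permutation(len(X))[:k]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def balanced_split(X, k=3):
    """Within each cluster, order points by distance to the cluster mean and
    alternate them between learning and test, so both sets sample every
    cluster at all radii and end up roughly equal in size."""
    labels = kmeans(X, k)
    learn, test = [], []
    for j in np.unique(labels):
        idx = np.where(labels == j)[0]
        center = X[idx].mean(axis=0)
        order = idx[np.argsort(((X[idx] - center) ** 2).sum(-1))]
        learn.extend(order[0::2])   # even-ranked points -> learning set
        test.extend(order[1::2])    # odd-ranked points  -> test set
    return np.array(learn), np.array(test)
```

Because both the clustering init and the within-cluster ordering are deterministic, repeated runs produce the same partition, which is the reproducibility property the abstract emphasizes.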


2017 ◽  
Author(s):  
Bernardo A. Mello ◽  
Yuhai Tu

Deciphering molecular mechanisms in biological systems from system-level input-output data is challenging, especially for complex processes that involve interactions among multiple components. Here, we study regulation of the multi-domain (P1-P5) histidine kinase CheA by the MCP chemoreceptors. We develop a network model to describe the dynamics of the system, treating the receptor complex with CheW and the P3P4P5 domains of CheA as a regulated enzyme with two substrates, P1 and ATP. The model enables us to search the hypothesis space systematically for the simplest possible regulation mechanism consistent with the available data. Our analysis reveals a novel dual-regulation mechanism wherein, besides regulating ATP binding, the receptor activity has to regulate one other key reaction: either P1 binding or phosphotransfer between P1 and ATP. Furthermore, our study shows that the receptors only control the kinetic rates of the enzyme without changing its equilibrium properties. Predictions are made for future experiments to distinguish the two remaining dual-regulation mechanisms. This systems-biology approach of combining modeling with a large input-output data set should be applicable to studying other complex biological processes.
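The dual-regulation idea can be illustrated with a toy steady-state flux calculation. This is not the authors' network model: it assumes a much-simplified irreversible ordered cycle (ATP binds, then P1 binds, then phosphotransfer releases P1-P), with made-up rate constants, and lets receptor activity scale two of the three steps.

```python
def cycle_flux(activity, k_atp=1.0, k_p1=0.5, k_pt=2.0,
               atp=1.0, p1=1.0, e_tot=1.0):
    """Steady-state phosphorylation flux of a toy two-substrate enzyme.
    For an irreversible ordered cycle, the mean time per catalytic cycle
    is the sum of the mean waiting times of the three steps; receptor
    activity here multiplies ATP binding and phosphotransfer only (one of
    the dual-regulation hypotheses), leaving P1 binding unregulated."""
    t_bind_atp = 1.0 / (activity * k_atp * atp)  # regulated ATP binding
    t_bind_p1  = 1.0 / (k_p1 * p1)               # unregulated P1 binding
    t_transfer = 1.0 / (activity * k_pt)         # regulated phosphotransfer
    return e_tot / (t_bind_atp + t_bind_p1 + t_transfer)
```

Flux rises monotonically with activity and saturates at the rate of the one unregulated step, which is the kind of input-output behavior such a model can be fit against.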


Micromachines ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1390
Author(s):  
Khalid A. Alattas ◽  
Ardashir Mohammadzadeh ◽  
Saleh Mobayen ◽  
Ayman A. Aly ◽  
Bassem F. Felemban ◽  
...  

In this study, a novel data-driven control scheme is presented for MEMS gyroscopes (MEMS-Gs). Uncertainties are tackled by the suggested type-3 fuzzy system with non-singleton fuzzification (NT3FS). Besides the dynamic uncertainties, the suggested NT3FS can also handle input measurement errors. The rules of the NT3FS are tuned online to better compensate for disturbances. A data-driven scheme is designed from the input-output data set, and a new set of linear matrix inequalities (LMIs) is presented to ensure stability. The superiority of the introduced control scheme is demonstrated through several simulations and comparisons.
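The paper's specific LMI set is not reproduced here, but the general flavor of an LMI stability certificate for a discrete-time closed loop can be sketched. The standard feasibility test asks for a P > 0 with AᵀPA - P < 0; the sketch below solves the discrete Lyapunov equation AᵀPA - P = -I directly (a special case of that LMI) with an illustrative system matrix.

```python
import numpy as np

def lyapunov_certificate(A, tol=1e-9):
    """Solve A^T P A - P = -I for P via vectorization and report whether
    P is positive definite, i.e. whether it certifies stability of the
    closed-loop x[k+1] = A x[k].  Uses the row-major identity
    vec(A^T P A) = kron(A^T, A^T) vec(P)."""
    n = A.shape[0]
    M = np.kron(A.T, A.T) - np.eye(n * n)
    P = np.linalg.solve(M, -np.eye(n).reshape(-1)).reshape(n, n)
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    stable = bool(np.all(np.linalg.eigvalsh(P) > tol))
    return P, stable
```

For a Schur-stable A the returned P is positive definite; for an unstable A it is not (or the linear system is singular), so the certificate fails.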


2019 ◽  
Vol 2 (1) ◽  
pp. 13-32 ◽  
Author(s):  
Fred-Johan Pettersen ◽  
Jan Olav Høgetveit

Abstract Tools such as Simpleware ScanIP+FE and COMSOL Multiphysics allow us to gain a better understanding of bioimpedance measurements without actually performing the measurements. This tutorial covers the steps needed to go from a 3D voxel data set to a model that can be used to simulate a transfer impedance measurement. The geometrical input data used in this tutorial are from an MRI scan of a human thigh, which is converted to a mesh using Simpleware ScanIP+FE. The mesh is merged with the electrical properties of the relevant tissues, and a simulation is run in COMSOL Multiphysics. Available numerical output data are the transfer impedance, the contributions of the different tissues to the final transfer impedance, and the voltages at the electrodes. Available volume output data are the normal and reciprocal current densities, potential, sensitivity, and volume impedance sensitivity. The output data are presented as both numbers and graphs. The tutorial will be useful even if data from other sources, such as VOXEL-MAN or CT scans, are used.
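The sensitivity and volume-impedance-sensitivity outputs mentioned above follow the standard Geselowitz reciprocity relation: the local sensitivity is S = (J_cc · J_reci)/(I_cc I_reci), and the transfer impedance is the resistivity-weighted volume integral of S. A minimal post-processing sketch on a voxel grid (synthetic arrays here, not COMSOL output) would be:

```python
import numpy as np

def transfer_impedance(rho, J_cc, J_reci, I_cc=1.0, I_reci=1.0, dV=1.0):
    """Geselowitz sensitivity relation on a voxel grid.
    rho:    resistivity per voxel, shape (nx, ny, nz)
    J_cc:   current density of the current-carrying electrode pair,
            shape (nx, ny, nz, 3)
    J_reci: current density of the reciprocally excited pick-up pair
    Returns the transfer impedance Z and the sensitivity field S."""
    # S = (J_cc . J_reci) / (I_cc * I_reci), dot product over the xyz axis
    S = np.einsum('...k,...k->...', J_cc, J_reci) / (I_cc * I_reci)
    Z = np.sum(rho * S) * dV  # volume-integrate rho * S
    return Z, S
```

Voxels where the two current-density fields are anti-parallel give negative sensitivity, i.e. regions whose resistivity increase lowers the measured transfer impedance.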


2006 ◽  
Vol 4 (1) ◽  
pp. 97
Author(s):  
Alan Cosme Rodrigues da Silva ◽  
Claudio Henrique Da Silveira Barbedo ◽  
Gustavo Silva Araújo ◽  
Myrian Beatriz Eiras das Neves

The purpose of this paper is to analyze VaR backtesting methodologies, focusing on aspects such as suitability for volatile markets and limited data sets. We examine, from a regulatory standpoint, tests that complement the Basel traffic-light results, using simulated and real data. The results indicate that tests based on the proportion of failures are not adequate for small samples, even for 1,000 observations. The Basel criterion is conservative and has low power, which does not invalidate its application, as the criterion is only one of the procedures adopted in the internal model validation process. Thus, we suggest using tests that capture the shape of the returns distribution, such as the Kuiper test, in addition to the Basel criterion.
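The proportion-of-failures tests the abstract refers to are typified by Kupiec's POF likelihood-ratio test, and the Basel traffic light classifies the number of exceedances over 250 trading days at 99% VaR into green (0-4), yellow (5-9), and red (10+) zones. A minimal sketch of both (the Kuiper distribution-shape test is not reproduced here):

```python
import math

def kupiec_pof(T, x, p=0.01, crit=3.841):
    """Kupiec proportion-of-failures test: T observations, x VaR
    exceedances, coverage level p.  Returns the LR statistic and whether
    the model is rejected at the chi-square(1) 95% critical value."""
    if x == 0:
        lr = -2.0 * T * math.log(1.0 - p)
    elif x == T:
        lr = -2.0 * T * math.log(p)
    else:
        phat = x / T
        lr = -2.0 * ((T - x) * math.log((1.0 - p) / (1.0 - phat))
                     + x * math.log(p / phat))
    return lr, lr > crit

def basel_zone(x):
    """Basel traffic-light zone for x exceedances in 250 days at 99% VaR."""
    return 'green' if x <= 4 else 'yellow' if x <= 9 else 'red'
```

Note that 10 exceedances in 250 days (4%, four times the nominal 1%) is already deep in the red zone, while the same 1% exceedance rate over 1,000 days yields an LR of zero; the test's power depends heavily on sample size, which is the small-sample weakness the paper documents.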

