Compensating Data Shortages in Manufacturing with Monotonicity Knowledge

Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 345
Author(s):  
Martin von Kurnatowski ◽  
Jochen Schmid ◽  
Patrick Link ◽  
Rebekka Zache ◽  
Lukas Morand ◽  
...  

Systematic decision making in engineering requires appropriate models. In this article, we introduce a regression method for enhancing the predictive power of a model by exploiting expert knowledge in the form of shape constraints, or more specifically, monotonicity constraints. Incorporating such information is particularly useful when the available datasets are small or do not cover the entire input space, as is often the case in manufacturing applications. We set up the regression subject to the considered monotonicity constraints as a semi-infinite optimization problem, and propose an adaptive solution algorithm. The method is applicable in multiple dimensions and can be extended to more general shape constraints. It was tested and validated on two real-world manufacturing processes, namely, laser glass bending and press hardening of sheet metal. It was found that the resulting models both complied well with the expert’s monotonicity knowledge and predicted the training data accurately. The suggested approach led to lower root-mean-squared errors than comparative methods from the literature for the sparse datasets considered in this work.
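
For illustration, here is a minimal Python sketch of the adaptive idea behind such a semi-infinite program: fit a polynomial by least squares subject to a nonnegative derivative, then repeatedly add the currently worst-violated monotonicity constraint until the fit is monotone on a reference grid. The degree, grid, and tolerance are arbitrary choices for this sketch, not the paper's algorithm.

```python
# A minimal sketch of monotonicity-constrained polynomial regression,
# solved as a semi-infinite program via adaptive constraint refinement.
# Illustration only: degree, grid and tolerance are invented choices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 15)                      # sparse training inputs
y = np.sqrt(x) + 0.05 * rng.normal(size=15)    # noisy monotone target

degree = 5
X = np.vander(x, degree + 1, increasing=True)  # design matrix

def d_poly(c, t):
    """Derivative of the fitted polynomial at points t."""
    k = np.arange(1, degree + 1)
    return np.vander(t, degree, increasing=True) @ (k * c[1:])

def fit(constraint_pts):
    # one inequality constraint f'(t) >= 0 per current constraint point
    cons = [{"type": "ineq", "fun": lambda c, t=t: d_poly(c, np.array([t]))[0]}
            for t in constraint_pts]
    res = minimize(lambda c: np.sum((X @ c - y) ** 2),
                   x0=np.zeros(degree + 1), constraints=cons, method="SLSQP")
    return res.x

# Adaptive loop: solve, locate the worst monotonicity violation on a
# fine reference grid, add it as a new constraint point, repeat.
grid = np.linspace(0, 1, 200)
pts = [0.0, 1.0]
for _ in range(20):
    c = fit(pts)
    viol = d_poly(c, grid)
    if viol.min() >= -1e-6:        # monotone everywhere (up to tolerance)
        break
    pts.append(grid[np.argmin(viol)])

print("constraint points used:", np.round(sorted(pts), 3))
```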

Author(s):  
Hilal Bahlawan ◽  
Mirko Morini ◽  
Michele Pinelli ◽  
Pier Ruggero Spina ◽  
Mauro Venturini

This paper documents the set-up and validation of nonlinear autoregressive exogenous (NARX) models of a heavy-duty single-shaft gas turbine. The considered gas turbine is a General Electric PG 9351FA located in Italy. The data used for model training are experimental time series of several different maneuvers recorded during the start-up procedure, covering cold, warm, and hot start-ups. The trained NARX models are used to predict other experimental data sets, and the model outputs are compared with the corresponding measured data. The paper thus addresses the challenge of setting up robust and reliable NARX models by means of a sound selection of training data sets and a sensitivity analysis on the number of neurons. Moreover, a new performance function for the training process is defined to weight the most rapid transients more heavily. The final aim of this paper is the set-up of a powerful, easy-to-build and very accurate simulation tool which can be used for both control logic tuning and gas turbine diagnostics, and which is characterized by good generalization capability.
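
As a rough illustration of the two ingredients named here (lagged regressors and a transient-weighted training objective), the following sketch fits a linear ARX surrogate with sample weights that grow with |dy/dt|. The paper's models are neural NARX networks; the signals, lag orders and weighting rule below are invented stand-ins.

```python
# A minimal sketch of an (N)ARX-style one-step predictor trained with a
# loss that weights rapid transients more heavily. A linear model stands
# in for the paper's neural NARX; all signals here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 500
u = np.clip(np.cumsum(rng.normal(size=n)) / 10, 0, 1)   # exogenous input
y = np.zeros(n)                                         # measured output
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1] + 0.01 * rng.normal()

lags = 3
rows = [np.r_[y[t - lags:t], u[t - lags:t]] for t in range(lags, n)]
Phi = np.array(rows)              # lagged y and u as regressors
target = y[lags:]

# Transient weighting: samples where |dy/dt| is large count more, pushing
# the model to fit the fast start-up transients accurately.
dy = np.abs(np.gradient(y))[lags:]
w = 1.0 + 10.0 * dy / dy.max()

sw = np.sqrt(w)                   # weighted least squares via row scaling
coef, *_ = np.linalg.lstsq(Phi * sw[:, None], target * sw, rcond=None)

pred = Phi @ coef
print(f"one-step RMSE: {np.sqrt(np.mean((pred - target) ** 2)):.4f}")
```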


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring in a given catchment, parameterising these processes within a model, and allocating the model's parameters. Modellers generally use very simplified mapping approaches and apply their knowledge by constraining the model through parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter.

Runoff simulations are affected by equifinality and numerous other sources of uncertainty, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches involving different levels of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed.

The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model needs in order to accurately reproduce the expected DRPs must be agreed on in advance.
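
A minimal sketch of such a variance decomposition, assuming synthetic skill scores over a factorial design of forcing, process map and set-up. The factor names mirror the abstract; the scores and effect sizes are fabricated placeholders.

```python
# A minimal sketch of an n-way ANOVA that attributes the spread in a skill
# score to forcing data, process map and model set-up. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
forcings = ["thiessen", "idw", "combiprecip_a", "combiprecip_b", "combiprecip_c"]
proc_maps = ["simple", "intermediate", "detailed"]
setups = ["top_down", "bottom_up"]

rows = []
for f in forcings:
    for m in proc_maps:
        for s in setups:
            # synthetic efficiency-like score with small invented factor effects
            score = (0.6 + 0.05 * forcings.index(f) / 4
                         + 0.04 * proc_maps.index(m) / 2
                         + (0.03 if s == "bottom_up" else 0.0)
                         + 0.02 * rng.normal())
            rows.append({"forcing": f, "proc_map": m, "setup": s, "score": score})

df = pd.DataFrame(rows)
model = ols("score ~ C(forcing) + C(proc_map) + C(setup)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # sum-of-squares share per factor
```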


2021 ◽  
Vol 15 (5) ◽  
pp. 669-677
Author(s):  
Harumo Sasatake ◽  
Ryosuke Tasaki ◽  
Takahito Yamashita ◽  
Naoki Uchiyama ◽  
...  

Population aging has become a major problem in developed countries. As the labor force declines, robot arms are expected to replace human labor for simple tasks. A robot arm is fitted with a task-specific tool and acquires its motion through teaching by an engineer with expert knowledge. However, the number of such engineers is limited; therefore, a teaching method that can be used by non-technical personnel is needed. As a teaching method, deep learning can be used to imitate human behavior and tool usage, but it requires a large amount of training data. In this study, the target task of the robot is to sweep up multiple pieces of dirt using a broom. The proposed learning system can estimate the initial parameters for deep learning based on experience as well as on the shape and physical properties of the tools, which reduces the amount of training data needed when learning a new tool. A virtual reality system is used to move the robot arm easily and safely, and to create training data for imitation. Cleaning experiments are conducted to evaluate the effectiveness of the proposed method, and the experimental results confirm that it accelerates deep learning and acquires cleaning ability from a small amount of training data.
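
The abstract does not detail how the initial parameters are estimated; one common realization of the general idea is a warm start, where the network for a new tool is initialized from one trained on a similar tool and then fine-tuned on a handful of demonstrations. The following PyTorch sketch is hypothetical: the architecture, state and action dimensions, and data are invented, and the paper's system additionally conditions on tool shape and physics.

```python
# A minimal, hypothetical sketch of warm-starting a policy network for a
# new tool from weights learned on a similar tool, then fine-tuning on a
# small set of VR demonstrations. Not the authors' exact method.
import torch
import torch.nn as nn

def make_policy():
    # maps an 8-D state (arm pose + dirt positions) to a 3-D motion command
    return nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))

source_policy = make_policy()      # assume: already trained on a known broom

target_policy = make_policy()
target_policy.load_state_dict(source_policy.state_dict())  # warm start

# Fine-tune on a small set of demonstrations recorded in VR (random here).
demos_x = torch.randn(20, 8)       # only 20 demonstrations
demos_y = torch.randn(20, 3)
opt = torch.optim.Adam(target_policy.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(target_policy(demos_x), demos_y)
    loss.backward()
    opt.step()
print(f"fine-tune loss: {loss.item():.4f}")
```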


2019 ◽  
Vol 79 (10) ◽  
pp. 1060-1078 ◽  
Author(s):  
Hans-Georg Schnürch ◽  
Sven Ackermann ◽  
Celine D. Alt-Radtke ◽  
Lukas Angleitner ◽  
Jana Barinoff ◽  
...  

Abstract Purpose This is an official guideline, published and coordinated by the Gynecological Oncology Working Group (AGO) of the German Cancer Society (DKG) and the German Society for Gynecology and Obstetrics (DGGG). Vaginal cancers are rare tumors, which is why there is very little evidence on them, and knowledge about their optimal clinical management is limited. This first German S2k guideline on vaginal cancer aims to compile the most current expert knowledge and to offer new recommendations on appropriate treatment, as well as to provide pointers on individually adapted therapies with lower morbidity rates than were previously generally available. A further purpose of this guideline is to set up a register to record treatment data and the course of disease as a means of obtaining evidence in the future.

Methods The present S2k guideline was developed by members of the Vulvar and Vaginal Tumors Commission of the AGO in an independently moderated, structured, formal consensus process, and its contents were agreed with the mandate holders of the participating scientific societies and organizations.

Recommendations To optimize the daily care of patients with vaginal cancer: 1. Monitor the spread pattern; 2. Follow the step-by-step diagnostic workup based on the initial stage at detection; 3. As part of the individualized clinical management of vaginal cancer, follow the sentinel lymph node protocol described here where possible; 4. Participate in the register study on vaginal cancer.


2008 ◽  
Vol 39-40 ◽  
pp. 523-528
Author(s):  
Pavel Jirman ◽  
Ivo Matoušek

Improving glass production technology and materials for the 21st century presupposes the implementation of high-level innovations. These innovations must not only be developed, produced, and put into operation; their qualities and prospects must also be evaluated so that their rate of application increases. At present, the application rate of developed innovations lies between 1% and 3%. All stages of glass processing, such as melting, forming, and cold working, mostly face limitations on their own further development, and these limitations must be identified so that the potential for further innovation can be predicted. Today it is not sufficient to have only theoretical and expert knowledge of the field and of IT applications; one must also know methods of creative thinking in order to achieve and apply the required innovations. Understanding the system of creative thinking makes it possible to adapt better and faster to real-life practice, which changes very quickly. TRIZ (the Theory of Inventive Problem Solving) is a powerful method of creative technical thinking that originated from the study of patents and the generalization of successful problem solving. The TRIZ method makes it possible to derive a correct task formulation from an unclearly described situation, and then to solve the newly reformulated task using the method's uniquely powerful instruments [1]. Application of the TRIZ method is supported by dedicated software for collecting information, performing analyses, synthesizing solutions, and verifying the solutions found. Practical examples of applying the TRIZ method to selected glass technologies are presented in this contribution.


2010 ◽  
Vol 7 (2) ◽  
pp. 1-11 ◽  
Author(s):  
Matthias Lange ◽  
Karl Spies ◽  
Joachim Bargsten ◽  
Gregor Haberhauer ◽  
Matthias Klapperstück ◽  
...  

Summary Search engines and retrieval systems are popular tools on a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work, and what matters is not the number of query results, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement through user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded with synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.

With a set of features extracted from each database hit, in combination with user relevance preferences, a neural network predicts user-specific relevance scores. Using expert knowledge as training data for a predefined neural network, or using users' own relevance training sets, a reliable relevance ranking of database hits has been implemented.

In this paper, we present the LAILAPS system, its concepts, benchmarks and use cases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de
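
A minimal sketch of this feature-to-relevance mapping, with three invented hit features and synthetic expert labels standing in for LAILAPS's actual feature model:

```python
# A minimal sketch of the ranking idea: a small neural network maps per-hit
# features to a relevance score and hits are sorted by that score. The
# features and expert labels below are invented placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# hypothetical features per hit: keyword-match rate, field weight, citation count (scaled)
X_train = rng.uniform(0, 1, (200, 3))
# expert-assigned relevance used as the training signal
y_train = 0.6 * X_train[:, 0] + 0.3 * X_train[:, 1] + 0.1 * X_train[:, 2]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

hits = rng.uniform(0, 1, (10, 3))          # feature vectors of new query hits
order = np.argsort(-net.predict(hits))     # descending predicted relevance
print("ranked hit indices:", order)
```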


2012 ◽  
Vol 18 (4) ◽  
pp. 491-519 ◽  
Author(s):  
MAXIM KHALILOV ◽  
KHALIL SIMA'AN

Abstract In source reordering, the order of the source words is permuted to minimize word order differences with the target sentence and then fed to a translation model. Earlier work highlights the benefits of resolving long-distance reorderings as a pre-processing step to standard phrase-based models. However, the potential performance improvement of source reordering and its impact on the components of the subsequent translation model remain unexplored. In this paper we study both aspects of source reordering. We set up idealized source reordering (oracle) models with/without syntax and present our own syntax-driven model of source reordering. The latter is a statistical model of inversion transduction grammar (ITG)-like tree transductions manipulating a syntactic parse and working with novel conditional reordering parameters. Having set up the models, we report translation experiments showing significant improvement on three language pairs, and contribute an extensive analysis of the impact of source reordering (both oracle and model) on the translation model regarding the quality of its input, phrase-table, and output. Our experiments show that oracle source reordering has untapped potential in improving translation system output. Besides solving difficult reorderings, we find that source reordering creates more monotone parallel training data at the back-end, leading to significantly larger phrase tables with higher coverage of phrase types in unseen data. Unfortunately, this nice property does not carry over to tree-constrained source reordering. Our analysis shows that, from the string-level perspective, tree-constrained reordering might selectively permute word order, leading to larger phrase tables but without an increase in phrase coverage in unseen data.
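
For the oracle side, a minimal sketch: permute the source words so that they follow the target positions they align to, which is what makes the parallel training data more monotone. The sentence pair and alignment are invented for illustration.

```python
# A minimal sketch of oracle source reordering from word alignments:
# source words are sorted by the average target position they align to.
def oracle_reorder(source_words, alignments):
    """alignments: list of (source_index, target_index) pairs."""
    avg = {}
    for s, t in alignments:
        avg.setdefault(s, []).append(t)
    # unaligned words keep their original slot as a tie-broken fallback
    key = lambda i: (sum(avg[i]) / len(avg[i]) if i in avg else i, i)
    return [source_words[i] for i in sorted(range(len(source_words)), key=key)]

# German-like verb-final source aligned to an English-order target
src = ["er", "hat", "das", "buch", "gelesen"]     # "he has the book read"
align = [(0, 0), (1, 1), (4, 2), (2, 3), (3, 4)]  # target: "he has read the book"
print(oracle_reorder(src, align))                 # ['er', 'hat', 'gelesen', 'das', 'buch']
```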


1995 ◽  
Vol 21 (1-2) ◽  
pp. 81-101
Author(s):  
Dawn M. Cohen ◽  
Casimir Kulikowski ◽  
Helen Berman

2020 ◽  
Vol 34 (10) ◽  
pp. 13983-13984
Author(s):  
Qizhen Zhang ◽  
Audrey Durand ◽  
Joelle Pineau

Applications of machine learning in biomedical prediction tasks are often limited by datasets that are unrepresentative of the sampling population. In these situations, we can no longer rely only on the training data to learn the relations between features and the prediction outcome. Our method learns an inductive bias that indicates the relevance of each feature to the outcome through literature mining in PubMed, a centralized source of biomedical documents. The inductive bias acts as a source of prior knowledge from experts, which we leverage by imposing an extra penalty on model weights that differ from this inductive bias. We empirically evaluate our method on a medical prediction task and highlight the importance of incorporating expert knowledge that can capture relations not present in the training data.
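
The penalty described here can be written as an L2 anchor toward the mined prior, i.e. minimizing the task loss plus λ‖w − w_prior‖². A minimal sketch with synthetic data and an assumed prior vector:

```python
# A minimal sketch of a prior-anchored penalty: instead of shrinking
# weights toward zero, the regularizer pulls them toward an inductive bias
# (here an invented w_prior standing in for the literature-mined one).
import numpy as np

rng = np.random.default_rng(4)
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

w_prior = np.array([1.5, -0.8, 0.0, 0.4, 0.0])   # assumed mined relevance
lam = 0.5                                        # penalty strength

w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))                 # logistic predictions
    # gradient of mean logistic loss plus the prior-anchored L2 term
    grad = X.T @ (p - y) / n + 2 * lam * (w - w_prior)
    w -= lr * grad

print("learned:", np.round(w, 2))
print("prior  :", w_prior)
```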


2020 ◽  
Vol 30 (1) ◽  
pp. 60-75
Author(s):  
Lyudmila V. Borisova ◽  
Inna N. Nurutdinova ◽  
Valeriy P. Dimitrov ◽  
Andrey K. Tugengold

Introduction. The article deals with adjusting the parameter settings of the working bodies of a combine harvester. For the adjustment of complex hierarchical multilevel systems, intelligent methods based on fuzzy expert information are used. Incoming quantitative, qualitative, and evaluative information is analyzed when adjusting the combine harvester. The different types of uncertainty in the semantic spaces of external environment factors and regulated machine parameters motivate a logical-linguistic approach and the mathematical apparatus of fuzzy logic for determining the optimal initial settings. The complex system of interrelations between parameters, indicators of harvest quality, and external environment factors makes it necessary to adjust the parameters of the combine's working elements during harvesting. This function is performed by the correction unit in the intelligent decision support system. The present article considers in detail the creation of a knowledge base for correcting adjustment parameters in cases where harvesting quality indicators deviate from normative values.

Materials and Methods. Interrelations between performance indicators and regulated parameters are established by empirical rules obtained through the collection and analysis of expert information. To optimize the inference mechanism of the intelligent information system and reduce decision-making time, it is necessary to establish the relevance of the knowledge base rules used. To solve this problem, game-theoretic approaches are used, together with the concepts of a performance indicator matrix and a matrix of risks of making an inefficient decision.

Results. An example is given of choosing a strategy for finding an adequate response to a fault in the harvesting quality indicators, namely "losses of feeble grain with chaff". The choice of fault response strategies on the basis of the Laplace criterion, the expected-value criterion, and the Savage criterion, as used for decision-making in "games with nature", is considered. The decision-making process for the problem under consideration is illustrated using these criteria, and the obtained results are analyzed.

Discussion and Conclusion. The suggested approach substantially increases the performance of the intelligent system's updating unit. It allows structuring the expert knowledge base and establishing an optimal sequence for applying production rules; this makes the process of updating the adjustable harvester parameters efficient and also reduces decision-making time. The approach can be used to solve problems of updating technological adjustments in other technical systems and devices.
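
A minimal sketch of the named criteria on an invented payoff matrix (adjustment strategies as rows, states of nature as columns): Laplace averages over equiprobable states, while Savage minimizes worst-case regret. The strategies and payoff values are fabricated for illustration.

```python
# A minimal sketch of the Laplace and Savage criteria for choosing a fault
# response strategy in a "game with nature". Payoffs are invented.
import numpy as np

# payoff[i, j]: harvesting quality if strategy i is chosen and state j occurs
payoff = np.array([
    [0.80, 0.60, 0.40],   # e.g. lower the fan speed (hypothetical)
    [0.70, 0.70, 0.50],   # e.g. reduce the concave clearance (hypothetical)
    [0.50, 0.65, 0.65],   # e.g. slow the ground speed (hypothetical)
])

laplace = payoff.mean(axis=1)                  # equal-probability states
best_laplace = int(np.argmax(laplace))

regret = payoff.max(axis=0) - payoff           # regret vs. best per state
savage = regret.max(axis=1)                    # worst-case regret per strategy
best_savage = int(np.argmin(savage))

print("Laplace choice:", best_laplace, "Savage choice:", best_savage)
```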

