Vibratory Data Fusion for Gearbox Fault Detection Using Autoassociative Neural Networks

Author(s):  
Alfonso Fernández del Rincón ◽  
Pablo García Fernández ◽  
Fernando Viadero Rueda ◽  
Ramón Sancibrián Herrera

This paper deals with gearbox fault detection by vibration analysis. A new processing procedure is proposed that combines the information from several acquisition channels. The approach is based on the assumption that there is a non-linear relationship among the instantaneous vibration magnitudes registered at each measurement location. This relationship is captured in the connection weight matrix of an Autoassociative Artificial Neural Network (AANN), which is trained to reproduce its input vector at its output. In this work, the time synchronous average (TSA) signal of each channel under the no-fault condition is used to train the AANN. Once trained, the AANN is applied to new data registers as a prediction error filter. If a new register has the same data structure as the training set, the prediction error is low and the machine is working properly. When the new register differs from the training set as a consequence of a fault, the prediction error increases in every channel. In this way, information from more than one channel is used for fault detection and diagnosis, since the error signal depends on the TSA signals of all channels. The proposed approach provides a new tool for gear fault detection, which is compared on experimental registers with the most traditional TSA-based gear processing tools, such as residual and regular signals. The possibility of generalizing the network's prediction capabilities by using a training data set that contains several load cases is also explored.
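The AANN novelty-detection idea can be sketched in a few lines; the network size, training constants, and the synthetic three-channel "TSA" signals below are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: train a small autoassociative net on healthy multi-channel
# signals, then flag a fault when per-channel reconstruction error grows.
import numpy as np

rng = np.random.default_rng(0)

def train_aann(X, n_hidden=2, lr=0.05, epochs=2000):
    """One-hidden-layer autoassociative net trained to reproduce X."""
    n = X.shape[1]
    W1 = rng.normal(0, 0.1, (n, n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, n))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                # bottleneck activations
        Y = H @ W2                         # reconstruction of the input
        E = Y - X                          # prediction error
        W2 -= lr * H.T @ E / len(X)
        W1 -= lr * X.T @ ((E @ W2.T) * (1 - H**2)) / len(X)
    return W1, W2

def channel_error(X, W1, W2):
    Y = np.tanh(X @ W1) @ W2
    return np.mean((Y - X) ** 2, axis=0)   # error per channel

# Healthy condition: three "channels" linked by a nonlinear relationship
t = np.linspace(0, 2 * np.pi, 200)
healthy = np.column_stack([np.sin(t), np.sin(t)**2, 0.5 * np.sin(t)])
W1, W2 = train_aann(healthy)

# A "fault" breaks the learned inter-channel relationship on channel 1
faulty = healthy.copy()
faulty[:, 1] += 0.5 * np.sin(5 * t)

e_ok = channel_error(healthy, W1, W2)
e_fault = channel_error(faulty, W1, W2)
```

The per-channel error vector `e_fault` rises above the healthy baseline `e_ok`, which is the detection signal the abstract describes.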

2014 ◽  
Vol 539 ◽  
pp. 181-184
Author(s):  
Wan Li Zuo ◽  
Zhi Yan Wang ◽  
Ning Ma ◽  
Hong Liang

Accurate text classification is a basic prerequisite for efficiently extracting information from the Web and making proper use of network resources. In this paper, a new text classification method is proposed. Consistency analysis is an iterative algorithm that trains several different (weak) classifiers on the same training set and then combines them to test how consistently the various classification methods label the same text, thereby exploiting the knowledge of each type of classifier. The method determines the weight of each sample according to whether that sample was classified correctly in each training round, as well as the accuracy of the last overall classification, and then passes the reweighted data set to the next classifier for training. Finally, the classifiers obtained during training are combined into the final decision classifier. A classifier built with consistency analysis can eliminate unnecessary training-data characteristics and focus on the key training data. Experimental results show an average accuracy of 91.0% and an average recall of 88.1%.
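The reweighting loop described above is close in spirit to AdaBoost-style boosting. A minimal sketch, assuming decision stumps as the weak classifiers and toy numeric features in place of the authors' text features:

```python
# AdaBoost-style reweighting: misclassified samples gain weight, so the
# next weak classifier concentrates on them; classifiers are combined
# with accuracy-based weights into a final decision classifier.
import numpy as np

rng = np.random.default_rng(1)

def stump_train(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] >= thr, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def stump_predict(X, j, thr, sign):
    return np.where(X[:, j] >= thr, sign, -sign)

def boost(X, y, rounds=20):
    w = np.full(len(y), 1 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = stump_train(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # classifier weight
        pred = stump_predict(X, j, thr, sign)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, j, t, s) for a, j, t, s in ensemble)
    return np.sign(score)

# Toy 2-D data standing in for text feature vectors
X = rng.normal(size=(200, 2))
y = np.where((X[:, 0] > 0) | (X[:, 1] > 0), 1, -1)
ens = boost(X, y)
acc = (predict(ens, X) == y).mean()
```

No single stump can represent this OR-shaped target, but the weighted combination can, which is the point of gathering weak classifiers into a final decision classifier.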


2016 ◽  
Vol 2016 (4) ◽  
pp. 21-36 ◽  
Author(s):  
Tao Wang ◽  
Ian Goldberg

Abstract Website fingerprinting allows a local, passive observer monitoring a web-browsing client’s encrypted channel to determine her web activity. Previous attacks have shown that website fingerprinting could be a threat to anonymity networks such as Tor under laboratory conditions. However, there are significant differences between laboratory conditions and realistic conditions. First, in laboratory tests we collect the training data set together with the testing data set, so the training data set is fresh, but an attacker may not be able to maintain a fresh data set. Second, laboratory packet sequences correspond to a single page each, but for realistic packet sequences the split between pages is not obvious. Third, packet sequences may include background noise from other types of web traffic. These differences adversely affect website fingerprinting under realistic conditions. In this paper, we tackle these three problems to bridge the gap between laboratory and realistic conditions for website fingerprinting. We show that we can maintain a fresh training set with minimal resources. We demonstrate several classification-based techniques that allow us to split full packet sequences effectively into sequences corresponding to a single page each. We describe several new algorithms for tackling background noise. With our techniques, we are able to build the first website fingerprinting system that can operate directly on packet sequences collected in the wild.
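The page-splitting problem mentioned above can be illustrated with the simplest possible baseline: cut a packet trace wherever the inter-arrival gap exceeds a threshold. The paper's classification-based techniques are more sophisticated than this, and the timestamps below are made up.

```python
# Naive time-gap splitter: a long silence between packets is taken as
# a boundary between two page loads.
def split_trace(timestamps, gap=1.0):
    """Split a sorted packet-timestamp list into per-page segments."""
    segments, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap:          # long silence suggests a page boundary
            segments.append(current)
            current = []
        current.append(t)
    segments.append(current)
    return segments

trace = [0.0, 0.1, 0.3, 2.5, 2.6, 2.7, 9.0, 9.2]
pages = split_trace(trace)
```

A fixed gap threshold fails when pages load back-to-back or background traffic fills the silence, which is exactly why the paper replaces it with trained classifiers.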


Dose-Response ◽  
2019 ◽  
Vol 17 (4) ◽  
pp. 155932581989417 ◽  
Author(s):  
Zhi Huang ◽  
Jie Liu ◽  
Liang Luo ◽  
Pan Sheng ◽  
Biao Wang ◽  
...  

Background: Plenty of evidence suggests that autophagy plays a crucial role in the biological processes of cancers. This study aimed to screen autophagy-related genes (ARGs) and establish a novel scoring system for colorectal cancer (CRC). Methods: Autophagy-related gene sequencing data and the corresponding clinical data for CRC from The Cancer Genome Atlas were used as the training set. The GSE39582 data set from the Gene Expression Omnibus was used as the validation set. An autophagy-related signature was developed in the training set using univariate Cox analysis followed by stepwise multivariate Cox analysis, and was assessed in the validation set. We then analyzed the functions and pathways of the ARGs using the Gene Ontology and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Finally, a prognostic nomogram combining the autophagy-related risk score and clinicopathological characteristics was developed from the multivariate Cox analysis. Results: After univariate and multivariate analysis, 3 ARGs were used to construct the autophagy-related signature. The KEGG pathway analyses showed several significantly enriched oncological signatures, such as the p53 signaling pathway, apoptosis, human cytomegalovirus infection, platinum drug resistance, necroptosis, and the ErbB signaling pathway. Patients were divided into high- and low-risk groups, and high-risk patients had significantly shorter overall survival (OS) than low-risk patients in both the training and validation sets. Furthermore, a nomogram for predicting 3- and 5-year OS was established from the autophagy-based risk score and clinicopathologic factors. The area under the curve and calibration curves indicated that the nomogram predicted with good accuracy. Conclusions: Our proposed autophagy-based signature has important prognostic value and may provide a promising tool for the development of personalized therapy.
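The risk-scoring step common to signatures of this kind can be sketched as follows: a Cox model yields one coefficient per signature gene, the linear predictor is the risk score, and patients are split at the median score. The three gene names and coefficients below are placeholders, not the paper's fitted values.

```python
# Hypothetical Cox coefficients for a 3-gene signature (illustrative only)
import numpy as np

rng = np.random.default_rng(2)
coefs = {"GENE_A": 0.8, "GENE_B": -0.5, "GENE_C": 0.3}

# Simulated expression values for 100 patients (one array per gene)
expr = {g: rng.normal(size=100) for g in coefs}

risk = sum(b * expr[g] for g, b in coefs.items())   # linear predictor
cutoff = np.median(risk)
group = np.where(risk > cutoff, "high", "low")      # median split
```

The high/low groups would then be compared with a log-rank test on overall survival, as in the abstract's training and validation sets.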


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Tee-Ann Teo

Abstract. Deep Learning is a kind of Machine Learning technology which utilizes a deep neural network to learn a promising model from a large training data set. The Convolutional Neural Network (CNN) has been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via image convolution, and it can determine multiscale features through its multiple layers of convolution and pooling. The variety of the training data plays an important role in obtaining a reliable CNN model. Benchmark training data for road mark extraction, for example the KITTI Vision Benchmark Suite, focus mainly on close-range imagery because a close-range image is easier to obtain than an airborne image. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimages in Fully Convolutional Networks (FCN). Transferring the training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.

This study uses FCN technology and aerial orthoimages to localize road marks in road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has a 10 cm spatial resolution, and the non-road regions are masked out before road mark localization. The training data are road mark polygons, originally digitized from ground-based mobile lidar and prepared for road mark extraction with a mobile mapping system. This study reuses these training data and applies them to road mark extraction from aerial orthoimages. The digitized training road marks are then transformed to road polygons based on mapping coordinates. As the detail of ground-based lidar is much better than that of the airborne system, parking lots partially occluded in the aerial orthoimage can also be obtained from the ground-based system. The labels (also called annotations) for the FCN are road region, non-road region, and road mark. The size of a training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and 75 batches in total are used for training. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict the road marks. The FCN results provide initial regions for road marks (Figure 1b). Road marks usually show higher reflectance than road asphalt, so this study uses this characteristic to refine the road marks (Figure 1c) by a binary classification inside each initial road mark region.

Comparing the automatically extracted road marks (Figure 1c) with manually digitized road marks (Figure 1d) shows that most road marks can be extracted using the training set from the ground-based system. This study also selects an area of 600 m × 200 m for quantitative analysis. Of the 371 reference road marks, 332 were extracted by the proposed scheme, for a completeness of 89%. The preliminary experiment demonstrates that most road marks can be successfully extracted by the proposed scheme; therefore, training data from a ground-based mapping system can be utilized with airborne orthoimages of similar spatial resolution.
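The reflectance-based refinement step can be sketched as a binary split inside the initial FCN regions; the tiny image below is synthetic, standing in for a real orthoimage tile, and the mean-based threshold is an assumed choice of classifier.

```python
# Inside the coarse FCN road-mark region, keep only pixels brighter than
# the region mean, exploiting the higher reflectance of paint vs asphalt.
import numpy as np

image = np.array([[40, 45, 200, 210],
                  [42, 44, 205, 208],
                  [41, 43,  46,  47]], dtype=float)   # bright paint top-right
initial = np.array([[0, 1, 1, 1],
                    [0, 1, 1, 1],
                    [0, 1, 1, 1]], dtype=bool)        # coarse FCN prediction

pixels = image[initial]
refined = initial & (image > pixels.mean())           # binary split in region

# Completeness as reported in the abstract: extracted / reference marks
completeness = 332 / 371
```

The refined mask retains only the four bright paint pixels, while the completeness ratio reproduces the 89% figure from the quantitative analysis.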


2021 ◽  
Vol 2042 (1) ◽  
pp. 012083
Author(s):  
Christine van Stiphoudt ◽  
Florian Stinner ◽  
Gerrit Bode ◽  
Alexander Kümpel ◽  
Dirk Müller

Abstract The application of fault detection and diagnosis (FDD) algorithms in building energy management systems (BEMS) has great potential to increase the efficiency of building energy systems (BES). Supervised learning algorithms require time series depicting both nominal and faulty component behaviour for their training. In this paper, we introduce a method that automates, in Python, the extension of Modelica BES models with fault models that approximate real component faults. In our application, implementation was two orders of magnitude faster than manual modelling, with no errors in the connections between fault and component models.
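The automation idea can be sketched as plain text manipulation of Modelica source: declare a fault-model instance and re-route an existing connection through it. The component, port, and fault names below are invented for illustration; the paper's tooling operates on real BES model libraries.

```python
# Inject a fault model into Modelica code: add an instance declaration
# before the equation section and split one connect() into two.
def inject_fault(modelica_src, original_connect, fault_type, fault_name):
    """Replace a direct connect() with two connects through a fault model."""
    a, b = original_connect
    declared = modelica_src.replace(
        "equation",
        f"  {fault_type} {fault_name};\nequation", 1)
    old = f"connect({a}, {b});"
    new = (f"connect({a}, {fault_name}.port_a);\n"
           f"  connect({fault_name}.port_b, {b});")
    return declared.replace(old, new, 1)

src = """model Pump
  Interfaces.Port inlet;
  Interfaces.Port outlet;
equation
  connect(inlet, outlet);
end Pump;"""

faulty = inject_fault(src, ("inlet", "outlet"),
                      "Faults.PressureDrop", "leak")
```

Generating the connection statements programmatically is what rules out the wiring mistakes that manual editing invites.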


Soil Research ◽  
2000 ◽  
Vol 38 (4) ◽  
pp. 867 ◽  
Author(s):  
G. K. Summerell ◽  
T. I. Dowling ◽  
D. P. Richardson ◽  
J. Walker ◽  
B. Lees

Parna is a wind-blown clay, mobilised from inland Australia by a series of intermittent high-wind events during the Quaternary. Parna can be recognised on the basis of colour, texture, distributional patterns, and pedology. Parna deposits have been recorded across a wide area of south-eastern Australia and have influenced the local pedology and hydrology. In some cases parna has increased soil sodicity and the potential for dryland salinisation. Predicting its spatial distribution is useful when considering agricultural potential and in assessing the risk and spatial spread of dryland salinity. Here we present the results of modelling to predict its local distribution in an area covering 291 km2 in the Young district of NSW. Two conceptual models of parna deposition and subsequent redistribution were used to develop a current parna distribution map: (a) deposition = f(topography, aspect), after assuming that interactions of rainfall, vegetation, and wind speed were relatively uniform at the local scale; (b) removal or retention = f(slope angle, catchment size, slope length), as a representation of the erosive energy of gravity. Five landscape variables, elevation, aspect, slope, flow accumulation, and flow length, were derived from a 20 m digital elevation model (DEM). A training set of parna deposits was established using air photos and field survey of limited exposures in the Young district. These areas were digitised and converted to a grid of parna and no-parna areas. This training set and the 5 landscape-variable grids were processed in the IDRISI for WINDOWS Geographic Information System (GIS). Spatial relationships between the parna and no-parna deposits and the 5 landscape variables were extracted from this training set. This information was imported into an inductive learning program called KnowledgeSEEKER.
A decision tree was built by recursive partitioning of the data set, using chi-square tests for categorical variables and an F test for continuous variables, to best replicate the training data classification of ‘parna’ and ‘no-parna’. The rules derived from this process were applied to the study area to predict the occurrence of parna in the broader landscape. Predictions were field checked and the rules adjusted until they best represented the occurrence of parna in the field. The final model predicted parna deposits as follows: (i) higher elevations in the Young landscape were the dominant sites of parna deposits; (ii) thicker deposits of parna occurred on the windward south-west and north-west sides; (iii) thinner deposits occurred on the leeward side of a central ridge feature; (iv) because the training set was concentrated around the major central ridge feature, poorer predictions were obtained on gently undulating country.
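One recursive-partitioning step of the kind described can be sketched as follows: pick the threshold on a continuous landscape variable whose split has the strongest chi-square association with the parna/no-parna labels. The toy elevation values stand in for the DEM-derived grids.

```python
# Chi-square split selection on one continuous variable (one tree step).
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def best_split(values, labels):
    """Threshold on a continuous variable maximising chi-square."""
    best = (0.0, None)
    for thr in sorted(set(values))[1:]:
        left = [l for v, l in zip(values, labels) if v < thr]
        right = [l for v, l in zip(values, labels) if v >= thr]
        a, b = left.count(1), left.count(0)        # parna / no-parna below
        c, d = right.count(1), right.count(0)      # parna / no-parna above
        stat = chi_square_2x2(a, b, c, d)
        if stat > best[0]:
            best = (stat, thr)
    return best

# Toy data: parna (1) concentrates at higher elevations, as the model found
elevation = [300, 320, 340, 400, 420, 440, 460, 480]
parna     = [0,   0,   0,   1,   1,   1,   1,   1]
stat, thr = best_split(elevation, parna)
```

Recursing on each side of the chosen threshold, over all five landscape variables, yields the decision tree whose rules were then field checked.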


2004 ◽  
Vol 10 (8) ◽  
pp. 1137-1150 ◽  
Author(s):  
V. Crupi ◽  
E. Guglielmino ◽  
G. Milazzo

The purpose of this research is the realization of a method for machine health monitoring. The rotating machinery of the Refinery of Milazzo (Italy) was analyzed. A new procedure, incorporating neural networks, was designed and implemented to evaluate the vibration signatures and recognize the presence of faults. Neural networks have replaced the traditional expert systems used in the past for fault diagnosis because they are dynamic systems and thus adaptable to continuously variable data. The disadvantage of common neural networks is that they need to be trained on real examples of the different fault types. The innovative aspect of the new procedure is that it can diagnose faults that are not represented in the training set. This ability was demonstrated by our analysis: the net was able to detect the presence of imbalance and bearing wear even though these types of faults were not present in the training data set.

