Development of an On-Line Rotor Crack Detection and Monitoring System

1989 ◽  
Vol 111 (3) ◽  
pp. 241-250 ◽  
Author(s):  
I. Imam ◽  
S. H. Azzaro ◽  
R. J. Bankert ◽  
J. Scheibel

A comprehensive technology for on-line rotor crack detection and monitoring has been developed. The technique, based on the vibration signature analysis (VSA) approach, can detect incipient transverse rotor cracks in an on-line mode. The technique is generic and applicable to all machines whose rotors are subjected to some form of bending load, including turbines, generators, pumps, and motors. It is based on analytical modeling of the dynamics of the system: through the modeling approach, crack symptoms can be determined in terms of characteristic vibration signatures, which are then used to diagnose the flaw in real-life situations. A 3-D finite element crack model and a nonlinear rotor dynamic code have also been developed to accurately model a cracked rotor system. This program has been used to develop a variety of unique vibration signatures indicating a rotor crack. Both the analytical crack model and the crack signature analysis techniques have been experimentally validated. A microprocessor-based on-line rotor crack detection and monitoring system has been developed; in a series of large-scale laboratory tests, it successfully detected, in an on-line mode, cracks with depths on the order of 1 to 2 percent of the shaft diameter. The system was installed on a turbine-generator set at a utility in the field in October 1986 and has since operated continuously, in both on-line and coast-down modes, essentially flawlessly. The system has also been applied in a crack detection program for nuclear reactor vertical coolant pumps. This paper describes all aspects of the development, from the technical concept to the commercial field applications.
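The signature idea above can be illustrated with a toy spectrum check: a breathing transverse crack is classically reported to inject harmonics at multiples of running speed, so the amplitude of the 2x-running-speed spectral line is one symptom to monitor. This is a minimal sketch, not the paper's method; the sample rate, running speed, and harmonic amplitude below are assumptions for illustration.

```python
import numpy as np

fs, f_run = 2000.0, 50.0                 # assumed sample rate (Hz) and running speed (Hz)
t = np.arange(0, 2.0, 1.0 / fs)          # 2 s window, integer number of shaft revolutions
healthy = np.sin(2 * np.pi * f_run * t)
# Assumed crack symptom: an added component at twice running speed (2x line).
cracked = healthy + 0.3 * np.sin(2 * np.pi * 2 * f_run * t)

def amplitude_at(signal, freq):
    # Single-sided FFT amplitude at the bin nearest `freq`.
    spec = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

print(amplitude_at(healthy, 2 * f_run))  # ~0: no 2x line in the healthy signature
print(amplitude_at(cracked, 2 * f_run))  # ~0.3: 2x line flags the crack symptom
```

An on-line monitor in this spirit would track such spectral lines over time and alarm on sustained growth rather than on a single snapshot.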

2005 ◽  
Vol 6-8 ◽  
pp. 809-816 ◽  
Author(s):  
Johan De Keuster ◽  
Joost R. Duflou ◽  
Jean Pierre Kruth

Laser cutting is a well-established sheet metal processing method, and a trend towards the cutting of thick plates (> 15 mm) can now be observed. For these thick plates, however, the process window in which good cutting results can be obtained is narrower than for thin sheets, owing to the difficult balance that must be struck between the different process parameters. Even after the process window has been determined, good cutting quality cannot always be guaranteed. Cutting of thick plates is therefore still characterized by a large scrap percentage, which impedes a breakthrough to large-scale industrial use. One solution to this problem is to incorporate a sensor system in the laser cutting machine that monitors the cut quality on-line. This monitoring system could then be integrated into a process control system that adapts the process parameters in real time as a function of the observed cut quality, so that a good cut quality could always be guaranteed. This study deals with the first step in this direction: the determination of an appropriate monitoring system. The applicability of two types of sensors for monitoring purposes is investigated: a microphone and a photodiode. For both types, the correlation between the sensor output and the cut quality is investigated qualitatively. The scope of the reported research was not limited to contour cutting; piercing is also covered in the study.
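The correlation step described above can be sketched numerically: pair a per-segment sensor statistic with a measured quality metric and compute a correlation coefficient. This is a toy sketch only; the photodiode RMS readings and roughness values below are assumed numbers, not data from the study, which reports its correlations qualitatively.

```python
import numpy as np

# Assumed per-segment photodiode RMS output and measured cut roughness (Rz, um).
photodiode_rms = np.array([0.8, 1.1, 1.5, 2.0, 2.6])
roughness_um = np.array([12.0, 15.0, 22.0, 30.0, 41.0])

# Pearson correlation between sensor output and cut quality metric.
r = np.corrcoef(photodiode_rms, roughness_um)[0, 1]
print(round(r, 3))  # close to 1: sensor output tracks degrading quality
```

A strong monotonic relationship like this is what would justify feeding the sensor into a closed-loop parameter controller.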


2020 ◽  
Vol 15 (7) ◽  
pp. 750-757
Author(s):  
Jihong Wang ◽  
Yue Shi ◽  
Xiaodan Wang ◽  
Huiyou Chang

Background: At present, using computational methods to predict drug-target interactions (DTIs) is a key step in new drug discovery and drug repositioning. Potential DTIs identified by machine learning methods can provide guidance for biochemical or clinical experiments.
Objective: The goal of this article is to combine the latest network representation learning methods for drug-target prediction, improve model prediction capability, and promote new drug development.
Methods: We use the large-scale information network embedding (LINE) method to extract network topology features of drugs, targets, diseases, etc., integrate the features obtained from heterogeneous networks, construct binary classification samples, and use the random forest (RF) method to predict DTIs.
Results: The experiments in this paper compare the common classifiers RF, logistic regression (LR), and support vector machine (SVM), as well as the typical network representation learning methods LINE, Node2Vec, and DeepWalk. The combined method LINE-RF achieves the best results, reaching an AUC of 0.9349 and an AUPR of 0.9016.
Conclusion: The LINE-based learning method can effectively learn hidden features of drugs, targets, diseases, and other entities from the network topology, and combining features learned from multiple networks enhances their expressive power. RF is an effective supervised learning method. The LINE-RF combination is therefore a widely applicable method.
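The Methods step above has a simple shape: embed each node, concatenate the drug and target vectors for every candidate pair, and feed the pairs to a random forest. A minimal scikit-learn sketch of that pipeline is below; the random vectors stand in for real LINE embeddings, and the interaction labels are placeholders, so the printed AUC is not meaningful.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for LINE node embeddings (assumed 128-dim) for drugs and targets.
n_drugs, n_targets, dim = 50, 40, 128
drug_emb = rng.normal(size=(n_drugs, dim))
target_emb = rng.normal(size=(n_targets, dim))

# Binary classification samples: concatenated drug/target vectors per pair.
pairs = [(d, t) for d in range(n_drugs) for t in range(n_targets)]
X = np.array([np.concatenate([drug_emb[d], target_emb[t]]) for d, t in pairs])
y = rng.integers(0, 2, size=len(pairs))  # placeholder interaction labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

With real LINE embeddings trained on the heterogeneous drug-target-disease network, and verified interaction labels, this is the evaluation loop the reported AUC/AUPR figures would come from.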


2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking documents, or short passages, in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms, such as a person's name or a product model number, not seen during training, and should avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections, such as the document index of a commercial Web search engine, containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018].
Our key contribution towards improving the effectiveness of deep ranking models is the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To retrieve efficiently from large collections, we develop a framework that incorporates query term independence [Mitra et al., 2019] into an arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
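The query term independence idea can be sketched concretely: if the query-document score decomposes into a sum of per-term contributions, then term-document scores can be precomputed offline by any model and served from an inverted index at query time. The sketch below illustrates only that decomposition; the term-document scores are assumed numbers, not output of the actual deep model.

```python
from collections import defaultdict

def build_index(term_doc_scores):
    """Offline step: store precomputed per-(term, doc) scores as postings."""
    index = defaultdict(dict)
    for (term, doc), s in term_doc_scores.items():
        index[term][doc] = s
    return index

def score(query_terms, index):
    """Online step: under term independence, sum per-term contributions."""
    totals = defaultdict(float)
    for t in query_terms:
        for doc, s in index.get(t, {}).items():
            totals[doc] += s
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Assumed precomputed scores for two documents and two terms.
idx = build_index({("neural", "d1"): 0.75, ("neural", "d2"): 0.25,
                   ("ranking", "d1"): 0.5})
print(score(["neural", "ranking"], idx))  # [('d1', 1.25), ('d2', 0.25)]
```

The design trade-off is explicit here: the model gives up cross-term query interactions in exchange for inverted-index-style retrieval over billions of documents.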


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers: it searches for the tree structure and tests simultaneously, and in many situations this improves the prediction and size of the resulting classifiers. However, as a population-based and iterative approach, it can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of GPU memory and computing resources. The searches for the tree structure and tests are performed on a CPU, while the fitness calculations are delegated to the GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets, and in both cases the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. As the number of GPUs grows, nearly linear scalability is observed, which suggests that the data size boundaries for evolutionary DT mining are fading.
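The data-parallel decomposition described above has a simple structure: split the dataset into equal chunks (one per GPU in the paper), compute partial fitness statistics per chunk, and reduce the partials on the host. The sketch below shows only that structure, in plain NumPy on a CPU; the one-split toy tree and the labeling rule are assumptions for illustration, not the paper's representation.

```python
import numpy as np

def partial_fitness(tree, X_chunk, y_chunk):
    # Per-device step: route each instance through the (toy) one-split tree
    # and return (correct predictions, chunk size) as a partial statistic.
    preds = np.where(X_chunk[:, tree["feature"]] <= tree["threshold"],
                     tree["left_class"], tree["right_class"])
    return int((preds == y_chunk).sum()), len(y_chunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000_000, 4))
y = (X[:, 2] > 0).astype(int)  # assumed labeling rule the toy tree matches
tree = {"feature": 2, "threshold": 0.0, "left_class": 0, "right_class": 1}

# Simulate 4 devices: each chunk would reside on a separate GPU.
partials = [partial_fitness(tree, Xc, yc)
            for Xc, yc in zip(np.array_split(X, 4), np.array_split(y, 4))]

# Host-side reduction of the per-device partial statistics.
correct = sum(c for c, _ in partials)
accuracy = correct / sum(n for _, n in partials)
print(accuracy)  # 1.0: the toy tree reproduces the labeling rule exactly
```

Because each candidate tree's fitness reduces to such per-chunk sums, the evaluation of a whole population can be farmed out to the devices while the evolutionary search itself stays on the CPU.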


Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to try to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former analysis shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.

