Identification of Fiducial Points in Serial Data

1991 ◽  
Vol 113 (1) ◽  
pp. 178-183
Author(s):  
D. M. Auslander ◽  
J. C. Griffin ◽  
A. Mayya

A method is described for fiducial point identification that can be tuned to specific data types using training set data having manually marked fiducial points. The role of the “expert” in the training process is limited to providing the correct point identification in the training data. No articulation of the mathematical justification for the choice is needed. The method is based on the calculation of a weighted score for each point in an unknown data record. The score is derived from a doubly normalized computation of values for a set of generic discriminant functions. Candidate points are identified by an order and selection process. Because of the multiple normalization and use of sorting for selection, the method is independent of scale or range of the data to be identified. Neither the training process nor the identification process requires any dimensional input, other than the identification of fiducial points for use in the training process. Examples are given using cardiac electrogram data and ultrasonic data.
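The scoring-and-selection idea can be sketched as follows; note that the two generic discriminant functions used here (slope and curvature magnitude), the min-max normalization, and the top-k selection are illustrative assumptions, not the paper's actual discriminants or training procedure:

```python
import numpy as np

def fiducial_score(signal, weights):
    # Two illustrative discriminant functions: first difference (slope)
    # and second difference (curvature), evaluated at every sample.
    slope = np.gradient(signal)
    curv = np.gradient(slope)
    feats = np.stack([np.abs(slope), np.abs(curv)])

    # First normalization: scale each discriminant to [0, 1] across the
    # record, making the score independent of signal amplitude and units.
    lo = feats.min(axis=1, keepdims=True)
    rng = feats.max(axis=1, keepdims=True) - lo
    feats = (feats - lo) / np.where(rng == 0, 1, rng)

    # Second normalization: weights sum to 1, so the combined
    # weighted score also stays in [0, 1].
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ feats

def pick_candidates(score, k=3):
    # Order-and-select: sort points by score, keep the top-k indices.
    return np.argsort(score)[::-1][:k]

sig = np.sin(np.linspace(0, 2 * np.pi, 200))
sig[120] += 1.0                       # a sharp deflection to detect
scores = fiducial_score(sig, [0.5, 0.5])
best = pick_candidates(scores, k=1)[0]
```

Because both the features and the weights are normalized and selection is by sorting, rescaling `sig` by any positive constant leaves `best` unchanged, which is the scale-independence the abstract describes.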

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 192
Author(s):  
Meirong Wei ◽  
Yan Liu ◽  
Tao Zhang ◽  
Ze Wang ◽  
Jiaming Zhu

Convolutional neural network (CNN)-based fault diagnosis methods have been widely adopted to obtain representative features and classify fault modes, owing to their prominent feature extraction capability. However, a large number of labeled samples is required to train a CNN, and a limited amount of labeled samples may lead to overfitting. In this article, a novel ResNet-based method is developed to achieve fault diagnosis for machines with very few samples. To be specific, data transformation combinations (DTCs) are designed based on mutual information. It is worth noting that the selected DTC, which can complete the training process of the 1-D ResNet quickly without increasing the amount of training data, can be randomly applied to any batch of training data. Meanwhile, a self-supervised learning method called 1-D SimCLR is adopted to obtain an effective feature encoder, which can be optimized with very few unlabeled samples. Then, a fault diagnosis model named DTC-SimCLR is constructed by combining the selected data transformation combination, the obtained feature encoder and a fully-connected layer-based classifier. In DTC-SimCLR, the parameters of the feature encoder are fixed, and the classifier is trained with very few labeled samples. Two machine fault datasets, one from a cutting tooth and one from a bearing, are used to evaluate the performance of DTC-SimCLR. Testing results show that DTC-SimCLR achieves superior performance and diagnostic accuracy with very few samples.
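The self-supervised core, pulling two transformed views of the same segment together in embedding space, can be sketched with the standard SimCLR (NT-Xent) objective. The two 1-D transformations, the random linear "encoder", and all parameter values below are stand-in assumptions, not the paper's selected DTC or its 1-D ResNet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simple 1-D transformations; the paper selects transformation
# combinations (DTCs) by mutual information, which is not reproduced here.
def add_noise(x):
    return x + 0.05 * rng.standard_normal(x.shape)

def random_scale(x):
    return x * rng.uniform(0.8, 1.2)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two augmented views (batch, dim)."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive for sample i is its other view at index i +/- n.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsum = np.log(np.exp(sim).sum(axis=1))
    pos = sim[np.arange(2 * n), targets]
    return float(np.mean(logsum - pos))

x = rng.standard_normal((8, 64))        # a batch of raw 1-D segments
enc = rng.standard_normal((64, 16))     # stand-in for the 1-D ResNet encoder
loss = nt_xent(add_noise(x) @ enc, random_scale(x) @ enc)
```

Minimizing this loss with respect to the encoder parameters needs no labels, which is why the downstream classifier can then be trained on very few labeled samples with the encoder frozen.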


2001 ◽  
Vol 17 (1) ◽  
pp. 48-55 ◽  
Author(s):  
Juan Botella ◽  
María José Contreras ◽  
Pei-Chun Shih ◽  
Víctor Rubio

Summary: Deterioration in performance associated with a decreased ability to sustain attention may be found in long and tedious task sessions. The need to assess a number of psychological dimensions in a single session often demands “short” tests capable of assessing individual differences in abilities such as vigilance and the maintenance of high performance levels. In the present paper two tasks were selected as candidates for playing this role: the Abbreviated Vigilance Task (AVT) by Temple, Warm, Dember, LaGrange and Matthews (1996) and the Continuous Attention Test (CAT) by Tiplady (1992). However, when applied to a sample of 829 candidates in a job-selection process for air-traffic controllers, neither of them showed discriminative capacity. In a second study, an extended version of the CAT was applied to a similar sample of 667 subjects, but it also proved incapable of properly detecting individual differences. In short, at least in a selection context such as that studied here, neither of the tasks appears appropriate for playing the role of a “short” test for discriminating individual differences in performance deterioration in sustained attention.


Author(s):  
Annapoorani Gopal ◽  
Lathaselvi Gandhimaruthian ◽  
Javid Ali

Deep neural networks have gained prominence in the biomedical domain, where they have become the most commonly used machine learning technology. Mammograms can be used to detect breast cancers with high precision with the help of the Convolutional Neural Network (CNN), a deep learning technology. Training a CNN from scratch, however, requires an exhaustive set of labeled data. This requirement can be overcome by deploying a Generative Adversarial Network (GAN), which comparatively needs less training data during mammogram screening. In the proposed study, the applications of GANs in estimating breast density, synthesizing high-resolution mammograms for clustered microcalcification analysis, segmenting breast tumors effectively, analyzing breast tumor shape, extracting features, and augmenting images during mammogram classification are extensively reviewed.


Author(s):  
Kathleen Jeffs

This chapter asks the questions: ‘what is the Spanish Golden Age and why should we stage its plays now?’ The Royal Shakespeare Company (RSC) Spanish season of 2004–5 came at a particularly ripe time for Golden Age plays to enter the public consciousness. This chapter introduces the Golden Age period and authors whose works were chosen for the season, and the performance traditions from the corrales of Spain to festivals in the United States. The chapter then treats the decision taken by the RSC to initiate a Golden Age season, delves into the play-selection process, and discusses the role of the literal translator in this first step towards a season. Then the chapter looks at ‘the ones that got away’, the plays that almost made the cut for production, and other worthy scripts from this period that deserve consideration for future productions.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study expands the use of two- and three-dimensional detection technologies to an underwater application: detecting abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of the corresponding dimensions are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird’s-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
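The 2-D branch's point-cloud-to-bird's-eye-view conversion can be sketched as a top-down height-map projection; the grid extent, resolution, and max-height encoding below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0, 10), y_range=(0, 10), res=0.1):
    """Project (x, y, z) points onto a top-down grid.

    Each cell stores the maximum z (height) of the points that fall in
    it, which preserves object silhouettes such as tire outlines.
    """
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / res).astype(int)
    yi = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)  # drop out-of-grid points
    for x, y, z in zip(xi[keep], yi[keep], points[keep, 2]):
        bev[y, x] = max(bev[y, x], z)
    return bev

pts = np.array([[1.0, 2.0, 0.5], [1.02, 2.03, 0.8], [5.0, 5.0, 0.2]])
img = point_cloud_to_bev(pts)
```

The resulting single-channel image can then be fed to an ordinary 2-D detector such as Faster R-CNN or YOLOv3, which is what makes the dimensionality reduction attractive.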


Author(s):  
Deepthi Kolady ◽  
Weiwei Zhang ◽  
Tong Wang ◽  
Jessica Ulrich-Schad

Abstract This study uses location-specific data to investigate the role of spatially mediated peer effects in farmers’ adoption of conservation agriculture practices. The literature has shown that farmers trust other farmers, and one way to increase conservation practice adoption is to identify feasible conservation practices in neighboring fields. Estimating this effect can improve our understanding of what influences adoption and could help improve federal and local conservation program design. The study finds that although spatial peer effects matter in the adoption of conservation tillage and diverse crop rotation, their scale is not substantial.


Author(s):  
Sina Shaffiee Haghshenas ◽  
Behrouz Pirouz ◽  
Sami Shaffiee Haghshenas ◽  
Behzad Pirouz ◽  
Patrizia Piro ◽  
...  

Nowadays, an infectious disease outbreak is considered one of the most destructive threats to the sustainable development process. The outbreak of the new coronavirus (COVID-19), an infectious disease, has had undesirable social, environmental, and economic impacts and has led to serious challenges and threats. Additionally, investigating the prioritization of parameters is of vital importance for reducing the negative impacts of this global crisis. Hence, the main aim of this study is to prioritize and analyze the role of certain environmental parameters. For this purpose, four cities in Italy were selected as a case study, and some notable climate parameters (daily average temperature, relative humidity, and wind speed) together with an urban parameter, population density, were considered as the input dataset, with confirmed cases of COVID-19 as the output dataset. In this paper, two artificial intelligence techniques, an artificial neural network (ANN) trained with the particle swarm optimization (PSO) algorithm and an ANN trained with the differential evolution (DE) algorithm, were used to prioritize the climate and urban parameters. The analysis is based on a feature selection process, and the results obtained from the two proposed models were compared to select the better one. The difference in cost function between the two models was about 0.0001; hence, the two methods did not differ in cost function, but ANN-PSO was found to be better because it reached the desired precision level in fewer iterations than ANN-DE. In addition, two variables, the urban parameter and relative humidity, had the highest priority for predicting the confirmed cases of COVID-19.
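The PSO component can be illustrated with a minimal swarm minimizing a stand-in cost function; here the sphere function replaces the ANN's prediction error on the confirmed-case data, and the swarm size, inertia, and acceleration coefficients are generic textbook values, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization of an arbitrary cost function."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # personal bests
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, float(pbest_cost.min())

best, best_cost = pso(lambda p: float((p ** 2).sum()), dim=4)
```

DE follows the same pattern with mutation and crossover in place of the velocity update, which is why the two optimizers are directly comparable on cost function and iteration count.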


2020 ◽  
Vol 12 (7) ◽  
pp. 1218
Author(s):  
Laura Tuşa ◽  
Mahdi Khodadadzadeh ◽  
Cecilia Contreras ◽  
Kasra Rafiezadeh Shahi ◽  
Margret Fuchs ◽  
...  

Due to the extensive drilling performed every year in exploration campaigns for the discovery and evaluation of ore deposits, drill-core mapping is becoming an essential step. While valuable mineralogical information is extracted during core logging by on-site geologists, the process is time-consuming and dependent on the observer and their individual background. Hyperspectral short-wave infrared (SWIR) data are used in the mining industry as a tool to complement traditional logging techniques and to provide a rapid and non-invasive analytical method for mineralogical characterization. Additionally, Scanning Electron Microscopy-based image analyses using a Mineral Liberation Analyser (SEM-MLA) provide exhaustive high-resolution mineralogical maps, but can only be performed on small areas of the drill-cores. We propose to use machine learning algorithms to combine the two data types and upscale the quantitative SEM-MLA mineralogical data to drill-core scale. In this way, quasi-quantitative maps over entire drill-core samples are obtained. Our upscaling approach increases result transparency and reproducibility by combining physics-based data acquisition (hyperspectral imaging) with mathematical models (machine learning). The procedure is tested on five drill-core samples with varying training data, using random forest, support vector machine and neural network regression models. The obtained mineral abundance maps are further used for the extraction of mineralogical parameters such as mineral association.
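The upscaling idea, learning a mapping from SWIR spectra to SEM-MLA mineral abundances on the small co-registered area and then applying it across the whole core, can be sketched with a toy regressor. A k-nearest-neighbor model and fully synthetic data stand in for the random forest, SVM, and neural-network regressors and the real measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

def knn_regress(train_X, train_y, query_X, k=3):
    """Predict abundances for query spectra as the mean over the k most
    similar training spectra (a stand-in for the regressors in the paper)."""
    # Pairwise Euclidean distances: (n_query, n_train).
    d = np.linalg.norm(query_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]        # k nearest training pixels
    return train_y[idx].mean(axis=1)

# Synthetic stand-in: 100 pixels where SEM-MLA abundance is "known", each
# with a 6-band spectrum; abundance is an assumed linear mixture.
spectra = rng.random((100, 6))
abundance = spectra @ np.array([0.3, 0.1, 0.2, 0.1, 0.2, 0.1])

# Querying the training pixels themselves with k=1 recovers them exactly;
# in practice the queries would be the spectra of the rest of the core.
pred = knn_regress(spectra, abundance, spectra[:5], k=1)
```

Repeating the prediction for every hyperspectral pixel yields the quasi-quantitative abundance map over the entire core described in the abstract.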


Author(s):  
Wael H. Awad ◽  
Bruce N. Janson

Three different modeling approaches were applied to explain truck accidents at interchanges in Washington State during a 27-month period. Three models were developed for each ramp type: linear regression, a neural network, and a hybrid system using fuzzy logic and neural networks. The study showed that linear regression was able to predict accident frequencies that fell within one standard deviation of the overall mean of the dependent variable. However, the coefficient of determination was very low in all cases. The two artificial intelligence (AI) approaches showed a high level of performance in identifying different patterns of accidents in the training data and presented a better fit than the regression model. However, the ability of these AI models to predict test data not included in the training process was unsatisfactory.


Author(s):  
F. ROLI ◽  
S. B. SERPICO ◽  
G. VERNAZZA

This paper presents a methodology for integrating connectionist and symbolic approaches to 2D image recognition. The proposed integration paradigm exploits the synergy of the two approaches in both the training and the recognition phases of an image recognition system. In the training phase, a symbolic module provides an approximate solution to a given image-recognition problem in terms of symbolic models. Such models are hierarchically organized into different abstraction levels and include contextual descriptions. After mapping these models into a complex neural architecture, a neural training process is carried out to optimize the solution of the recognition problem. The resulting neural networks are used during the recognition phase for pattern classification. In this phase, the role of the symbolic modules is to manage complex aspects of information processing: abstraction levels, contextual information, and global recognition hypotheses. A hybrid system implementing the proposed integration paradigm is presented, and its advantages over the single approaches are assessed. Results on Magnetic Resonance image recognition are reported, and comparisons with some well-known classifiers are made.

