A Weld Joint Type Identification Method for Visual Sensor Based on Image Features and SVM

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 471 ◽  
Author(s):  
Jiang Zeng ◽  
Guang-Zhong Cao ◽  
Ye-Ping Peng ◽  
Su-Dan Huang

In the field of welding robotics, visual sensors, which mainly consist of a camera and a laser, have proven to be promising devices because of their high precision, good stability, and high safety factor. In real welding environments, there are many kinds of weld joints due to the diversity of workpieces. Different weld joint types require different location algorithms and different welding parameters. It is very inefficient to manually change the image processing algorithm and welding parameters according to the weld joint type before each welding task. A visual sensor that can automatically identify the weld joint type before welding would therefore greatly improve the efficiency and automation of the welding system. However, few studies address this problem, and the accuracy and applicability of existing methods are limited. Therefore, a weld joint identification method for visual sensors based on image features and a support vector machine (SVM) is proposed in this paper. The deformation of the laser stripe around a weld joint is used as recognition information. Two kinds of features are extracted as feature vectors to enrich the identification information. Based on the extracted feature vectors, the optimal SVM model for weld joint type identification is then established. A comparative study of the proposed and conventional strategies for weld joint identification is carried out via a contrast experiment and a robustness test. The experimental results show that the identification accuracy reaches 98.4%, verifying the validity and robustness of the proposed method.
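The stripe-deformation idea can be sketched in miniature: extract a small feature vector from a 1-D laser-stripe height profile and classify it. The two features below (deformation depth and deformed-region width) and the threshold rule standing in for the trained SVM are illustrative assumptions, not the paper's actual features:

```python
def stripe_features(profile):
    """Feature vector from a 1-D laser-stripe height profile:
    peak deformation depth and deformed-region width."""
    baseline = sorted(profile)[len(profile) // 2]          # median as baseline
    deviation = [abs(h - baseline) for h in profile]
    depth = max(deviation)
    width = sum(1 for d in deviation if d > 0.1 * depth) if depth else 0
    return [depth, width]

def classify(features, depth_thresh=1.0):
    """Threshold rule standing in for the trained SVM decision function."""
    depth, _width = features
    return "butt/groove joint" if depth > depth_thresh else "flat plate"

v_groove = [0, 0, 0, -1, -2, -3, -2, -1, 0, 0, 0]   # synthetic V-groove stripe
flat     = [0, 0, 0.05, 0, -0.05, 0, 0, 0, 0, 0, 0]
```

A real system would train the SVM on many labeled stripe profiles rather than hand-set a threshold.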

2021 ◽  
Vol 13 (15) ◽  
pp. 2901
Author(s):  
Zhiqiang Zeng ◽  
Jinping Sun ◽  
Congan Xu ◽  
Haiyang Wang

Recently, deep learning (DL) has been successfully applied to automatic target recognition (ATR) tasks on synthetic aperture radar (SAR) images. However, limited by the scarcity of SAR target datasets and the high cost of labeling, existing DL-based approaches can only accurately recognize targets present in the training dataset. High-precision identification of unknown SAR targets in practical applications is therefore an important capability that a SAR–ATR system should be equipped with. To this end, we propose a novel DL-based identification method for unknown SAR targets with joint discrimination. First, a feature extraction network (FEN) trained on a limited dataset is used to extract SAR target features, and unknown targets are roughly separated from known targets by computing the Kullback–Leibler divergence (KLD) between target feature vectors. For targets that cannot be distinguished by KLD, t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction is applied to their feature vectors to calculate the relative position angle (RPA). Finally, known and unknown targets are finely identified based on RPA. Experimental results on the MSTAR dataset demonstrate that the proposed method achieves higher identification accuracy for unknown SAR targets than existing methods while maintaining high recognition accuracy for known targets.
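The KLD screening step can be sketched as follows; the softmax normalization of feature vectors and the threshold value are illustrative assumptions, not details from the paper:

```python
import math

def softmax(v):
    """Normalize a feature vector into a discrete distribution."""
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KLD between two discrete distributions (already normalized)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def is_unknown(feature, known_prototypes, threshold=0.5):
    """Flag a target as unknown when its KLD to every known-class
    prototype exceeds the threshold (threshold value is illustrative)."""
    p = softmax(feature)
    return all(kl_divergence(p, softmax(k)) > threshold
               for k in known_prototypes)
```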


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimating meteorological visibility from image characteristics is a challenging problem in meteorological parameter estimation. Meteorological visibility indicates atmospheric transparency, which is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO)-based transfer learning method for meteorological visibility estimation. The proposed approach modifies a transfer learning method for visibility estimation by adding PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images underwent a gray-averaging pre-processing step to provide information about static landmark objects, enabling automatic extraction of effective regions from the images. Effective regions are extracted from the image database, and image features are then extracted with a neural network. A subset of image features is selected by PSO to obtain the feature vector for each effective sub-region. The image feature vectors are then used to estimate the visibility of each image with multiple support vector regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
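The PSO feature-selection step might be sketched as a minimal binary PSO, where each particle is a 0/1 mask over features; the inertia and acceleration constants and the toy fitness function are illustrative assumptions (a real fitness would be SVR validation accuracy):

```python
import math
import random

def pso_feature_select(fitness, n_features, n_particles=10, iters=40, seed=0):
    """Minimal binary PSO: velocities pass through a sigmoid that gives
    the probability of each feature bit being set."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest

# Toy fitness: features 0 and 2 are informative; every other selected
# feature costs 0.3 (a stand-in for SVR validation error).
def toy_fitness(mask):
    return mask[0] + mask[2] - 0.3 * (sum(mask) - mask[0] - mask[2])
```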


Forests ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1527
Author(s):  
Xi Pan ◽  
Kang Li ◽  
Zhangjing Chen ◽  
Zhong Yang

Identifying wood accurately and rapidly is one of the best ways to prevent fakes and adulterants in forestry products. Wood identification traditionally relies heavily on specialized experts who spend extensive time in the laboratory. A new method is proposed that uses near-infrared (NIR) spectra in the 780–2300 nm wavelength range combined with gray-level co-occurrence matrix (GLCM) texture features to identify timbers accurately and rapidly. The NIR spectral features were determined by principal component analysis (PCA), and together with the digital image features extracted with the GLCM they were used to build a support vector machine (SVM) model to identify the timbers. Using fused features (raw spectra plus four GLCM features) for 25 timbers, the model achieved an identification accuracy of 99.43%. A comparative analysis of sample anisotropy and heterogeneity revealed that the transverse surface carries more identification information than the tangential and radial surfaces. Furthermore, pre-processed short-wavelength NIR bands of 780–1100 nm and 1100–2300 nm achieved high identification accuracies of 99.43% and 100%, respectively. The four GLCM features improved identification accuracy by improving the spatial clustering of the data.
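GLCM texture features can be illustrated on a tiny quantized image. The four descriptors below (contrast, energy, homogeneity, entropy) are common GLCM features; whether they are the paper's exact four is an assumption:

```python
import math

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset, normalized
    to a joint probability table. img is a nested list of gray levels."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h - dy):
        for x in range(w - dx):
            counts[img[y][x]][img[y + dy][x + dx]] += 1
    total = sum(map(sum, counts)) or 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Contrast, energy, homogeneity, entropy of a normalized GLCM."""
    n = len(p)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i, j in pairs)
    energy = sum(p[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs)
    entropy = -sum(p[i][j] * math.log(p[i][j]) for i, j in pairs if p[i][j] > 0)
    return [contrast, energy, homogeneity, entropy]
```

A uniform patch gives zero contrast and maximal energy; a checkerboard gives high contrast, which is how these features separate smooth from strongly textured wood surfaces.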


Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 271 ◽  
Author(s):  
Yuntian Feng ◽  
Guoliang Wang ◽  
Zhipeng Liu ◽  
Runming Feng ◽  
Xiang Chen ◽  
...  

To address the difficulty of handling unknown radar emitters in the radar emitter identification process, we propose an unknown radar emitter identification method based on semi-supervised learning and transfer learning. First, we construct a support vector machine (SVM) model based on transfer learning, using the information of labeled samples in the source domain to train in the target domain; this addresses the problem that the training data and the testing data do not satisfy the same-distribution assumption. Then, we design a semi-supervised co-training algorithm that uses the information of unlabeled samples to enhance training; this mitigates the inadequate classifier training caused by insufficient labeled data. Finally, we combine the transfer learning method with the semi-supervised learning method for the unknown radar emitter identification task. Simulation experiments show that the proposed method can effectively identify an unknown radar emitter and maintains high identification accuracy within a certain measurement error range.
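The co-training idea can be sketched as follows, with nearest-centroid classifiers standing in for the paper's SVMs (an assumption made for brevity): each view's model pseudo-labels the unlabeled sample it is most confident about, and that sample joins the labeled set of both views.

```python
def centroid_fit(samples, labels):
    """Nearest-centroid learner standing in for an SVM."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        if y not in sums:
            sums[y], counts[y] = [0.0] * len(x), 0
        sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def centroid_predict(model, x):
    """Return (label, confidence); confidence is the squared-distance
    margin between the two closest class centroids."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(c, x)), y)
                   for y, c in model.items())
    return dists[0][1], dists[1][0] - dists[0][0]

def co_training_step(labeled_a, labeled_b, labels, pool_a, pool_b):
    """One half co-training round: view A's model pseudo-labels its most
    confident pool sample, which is added to BOTH views' labeled sets.
    A full round would repeat the step with view B's model."""
    model_a = centroid_fit(labeled_a, labels)
    preds = [centroid_predict(model_a, x) for x in pool_a]
    i = max(range(len(preds)), key=lambda j: preds[j][1])
    labels.append(preds[i][0])
    labeled_a.append(pool_a.pop(i))
    labeled_b.append(pool_b.pop(i))
```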


Information ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 15 ◽  
Author(s):  
Yuntian Feng ◽  
Yanjie Cheng ◽  
Guoliang Wang ◽  
Xiong Xu ◽  
Hui Han ◽  
...  

At present, there are two main problems with commonly used radar emitter identification methods. First, when the distributions of the training data and testing data differ substantially, identification accuracy is low. Second, traditional identification methods usually comprise an offline training stage and an online identification stage, which cannot achieve real-time identification of radar emitters. To address these problems, this paper proposes a radar emitter identification method based on transfer learning and online learning. First, for the case where the target domain contains only a small number of labeled samples, the TrAdaBoost method is used as the basic learning framework to train a support vector machine, which obtains useful knowledge from the source domain to aid identification in the target domain. Then, for the case where the target domain contains no labeled samples, the Expectation-Maximization algorithm is used to filter the unlabeled samples in the target domain and generate usable training data. Finally, to perform identification quickly and accurately, we propose a radar emitter identification method based on online learning that ensures real-time updating of the model. Simulation experiments show that the proposed method, based on transfer learning and online learning, has higher identification accuracy and good timeliness.
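The TrAdaBoost framework the paper builds on can be sketched as a single sample-reweighting step, following Dai et al.'s 2007 formulation; the 0/1 per-sample error indicators are illustrative:

```python
import math

def tradaboost_reweight(w_src, err_src, w_tgt, err_tgt, eps_t, n_rounds):
    """One TrAdaBoost weight update: misclassified SOURCE samples are
    down-weighted (they look unlike the target distribution), while
    misclassified TARGET samples are up-weighted as in AdaBoost.
    err_src/err_tgt are 0/1 per-sample error indicators; eps_t is the
    weighted error on the target domain."""
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(len(w_src)) / n_rounds))
    beta_tgt = eps_t / (1.0 - eps_t)
    new_src = [w * beta_src ** e for w, e in zip(w_src, err_src)]
    new_tgt = [w * beta_tgt ** (-e) for w, e in zip(w_tgt, err_tgt)]
    return new_src, new_tgt
```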


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Xueyong Liu ◽  
Mei Li ◽  
Wei Tang ◽  
Shichao Wang ◽  
Xiong Wu

Infrasound is low-frequency sound that occurs in nature and results from man-made events, typically ranging in frequency from 0.01 Hz to 20 Hz. In this paper, a classification method based on the Hilbert-Huang transform (HHT) and a support vector machine (SVM) is proposed to discriminate between three different natural events. The frequency spectrum characteristics of the infrasound signals produced by different events, such as volcanoes, are unique, which lays the foundation for infrasound signal classification. First, the HHT method was used to extract the feature vectors of several kinds of infrasound events from the Hilbert marginal spectrum. Then, the feature vectors were classified by the SVM method. Finally, the classification and identification accuracy is reported. The simulation results show that the recognition rate is above 97.7%, and that the approach is effective for classifying event types with small sample sizes.
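The Hilbert step of the HHT can be sketched via the analytic signal; a full HHT would first decompose the signal into intrinsic mode functions by empirical mode decomposition, which is omitted here. The naive O(n²) DFT and even-length assumption keep the sketch short:

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the frequency-domain method (naive DFT, fine
    for short even-length signals): zero the negative frequencies and
    double the positive ones, then inverse-transform."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    gain = [1.0] + [2.0] * (n // 2 - 1) + [1.0] + [0.0] * (n - n // 2 - 1)
    return [sum(X[k] * gain[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def inst_amplitude(x):
    """Instantaneous amplitude (envelope) of the signal."""
    return [abs(z) for z in analytic_signal(x)]
```

The instantaneous amplitude and frequency of each mode, accumulated over time, give the Hilbert marginal spectrum from which the feature vectors are drawn.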


Medicina ◽  
2021 ◽  
Vol 57 (6) ◽  
pp. 527
Author(s):  
Vijay Vyas Vadhiraj ◽  
Andrew Simpkin ◽  
James O’Connell ◽  
Naykky Singh Ospina ◽  
Spyridoula Maraka ◽  
...  

Background and Objectives: Thyroid nodules are solid or fluid-filled lumps that form inside the thyroid gland and can be malignant or benign. Our aim was to test whether the described features of the Thyroid Imaging Reporting and Data System (TI-RADS) could improve radiologists’ decision making when integrated into a computer system. In this study, we developed a computer-aided diagnosis system integrated with multiple-instance learning (MIL) that focuses on benign–malignant classification. Data were available from the Universidad Nacional de Colombia. Materials and Methods: There were 99 cases (33 benign and 66 malignant). In this study, median filtering and image binarization were used for image pre-processing and segmentation. The grey-level co-occurrence matrix (GLCM) was used to extract seven ultrasound image features. The data were divided into an 87% training set and a 13% validation set. We compared the support vector machine (SVM) and artificial neural network (ANN) classification algorithms based on their accuracy, sensitivity, and specificity. The outcome measure was whether the thyroid nodule was benign or malignant. We also developed a graphical user interface (GUI) to display the image features and help radiologists with decision making. Results: The ANN and SVM achieved accuracies of 75% and 96%, respectively. The SVM outperformed all the other models on all performance metrics, achieving higher accuracy, sensitivity, and specificity scores. Conclusions: Our study suggests promising results for MIL in thyroid cancer detection. Further testing with external data is required before our classification model can be employed in practice.
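The three reported metrics can be computed directly from a binary confusion matrix; the label strings below are illustrative:

```python
def binary_metrics(y_true, y_pred, positive="malignant"):
    """Accuracy, sensitivity (recall on the positive/malignant class),
    and specificity (recall on the negative/benign class)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```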


2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
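The feature-extraction protocol (selecting and combining blocks of CNN layers) can be sketched as global-average-pooling each selected block's output and concatenating the results; the plain-list feature maps below are illustrative stand-ins for real CNN tensors:

```python
def global_avg_pool(feature_map):
    """Collapse an HxWxC feature map (nested lists) to a length-C vector."""
    h, w, c = len(feature_map), len(feature_map[0]), len(feature_map[0][0])
    pooled = [0.0] * c
    for row in feature_map:
        for pixel in row:
            for i in range(c):
                pooled[i] += pixel[i]
    return [v / (h * w) for v in pooled]

def combine_blocks(block_outputs, selected):
    """Pool each selected CNN block's output and concatenate the results
    into the single feature vector a downstream classifier (e.g., an
    SVM) would see; which blocks are selected is the protocol choice."""
    vector = []
    for block_id in selected:
        vector.extend(global_avg_pool(block_outputs[block_id]))
    return vector
```

Changing `selected` changes which levels of abstraction reach the classifier, which is why this choice can matter more than the classifier itself.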

