Defect Detection of Pandrol Track Fastener Based on Local Depth Feature Fusion Network

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Zhaomin Lv ◽  
Anqi Ma ◽  
Xingjie Chen ◽  
Shubin Zheng

There are three main problems in image-based track fastener defect detection: (1) Pictures of abnormal fasteners are scarce, so a supervised detection model is difficult to establish. (2) Different feature extraction methods capture different latent features; some emphasize edge features and others texture features, and these features, with their differing detection capabilities, are not effectively fused and utilized. (3) Detection of the track fastener clip is interfered with by the track fastener bolt subimage. To address these three problems, a defect detection method based on a Local Deep Feature Fusion Network (LDFFN) is proposed. Firstly, a track fastener image segmentation method is used to obtain the clip subimage, which effectively reduces the interference of bolt subimage features with clip detection. Secondly, edge features and texture features of the clip subimages are extracted by an Autoencoder (AE) and a Restricted Boltzmann Machine (RBM), respectively, and fused. Finally, Mahalanobis Distance (MD), a similarity measure, is used to detect fastener defects. The effectiveness of the proposed method is verified on real Pandrol track fastener images.
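The abstract gives no implementation details; purely as a sketch of the final scoring step, assuming the fused AE edge features and RBM texture features arrive as row vectors, a Mahalanobis-distance check might look like this (all names, shapes, and data below are hypothetical):

```python
import numpy as np

def mahalanobis_score(x, normal_features):
    """Distance of a fused feature vector x to the normal-sample distribution."""
    mu = normal_features.mean(axis=0)
    # Regularize the covariance so it stays invertible for small sample sets
    cov = np.cov(normal_features, rowvar=False) + 1e-6 * np.eye(normal_features.shape[1])
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy stand-in for fused edge (AE) + texture (RBM) features of normal clips
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))
score_normal = mahalanobis_score(normal[0], normal)
score_defect = mahalanobis_score(normal[0] + 5.0, normal)  # shifted sample scores higher
```

A clip whose fused features fall far from the normal-sample distribution receives a high score and can be flagged with a threshold chosen on validation data.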

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Mingyu Gao ◽  
Fei Wang ◽  
Peng Song ◽  
Junyan Liu ◽  
DaWei Qi

Wood defects can be identified quickly from optical images with deep learning methodology, which effectively improves wood utilization. Traditional neural network techniques are poorly suited to wood defect detection in optical images because of long training times, low recognition accuracy, and the need for manual extraction of defect image features. In this paper, a deep-learning wood knot defect detection model (so-called BLNN) is reported. Two subnetworks composed of convolutional neural networks are trained with PyTorch. By exploiting the feature extraction capabilities of the two subnetworks and combining their outputs through a bilinear join operation, fine-grained image features are obtained. The experimental results show that the accuracy reaches 99.20% and the training time is markedly reduced, with a defect detection speed of about 0.0795 s/image. This indicates that BLNN improves the accuracy of defect recognition and has potential application in the detection of wood knot defects.
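The bilinear join is not specified further in the abstract; a common recipe for bilinear pooling (outer product accumulated over spatial positions, signed square root, then L2 normalization) can be sketched in plain NumPy as an illustration, not as the authors' implementation; the channel counts and grid size below are invented:

```python
import numpy as np

def bilinear_join(feat_a, feat_b):
    """Bilinear pooling: outer product of two feature maps, averaged over
    spatial positions, then signed-sqrt and L2 normalized (a common recipe)."""
    # feat_a: (Ca, H*W), feat_b: (Cb, H*W) from the two subnetworks
    b = feat_a @ feat_b.T / feat_a.shape[1]        # (Ca, Cb) pooled outer product
    b = b.flatten()
    b = np.sign(b) * np.sqrt(np.abs(b))            # signed square root
    return b / (np.linalg.norm(b) + 1e-12)         # L2 normalization

rng = np.random.default_rng(1)
fa = rng.normal(size=(16, 49))   # e.g. 16 channels over a 7x7 grid
fb = rng.normal(size=(32, 49))
v = bilinear_join(fa, fb)        # fused fine-grained descriptor of length 16*32
```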


2019 ◽  
Vol 11 (23) ◽  
pp. 2870
Author(s):  
Chu He ◽  
Qingyi Zhang ◽  
Tao Qu ◽  
Dingwen Wang ◽  
Mingsheng Liao

In the past two decades, traditional hand-crafted-feature-based methods and deep-feature-based methods have successively played the most important role in image classification, and in some cases hand-crafted features still outperform deep features. This paper proposes an innovative network that integrates deep learning with binary coding and the Sinkhorn distance (DBSNet) for remote sensing and texture image classification. Statistical texture features extracted by the uniform local binary pattern (ULBP) are introduced as a supplement to the deep features extracted by ResNet-50, enhancing feature discriminability. After feature fusion, both the diversity and the redundancy of the features increase, so we propose a Sinkhorn loss in which an entropy regularization term plays a key role in removing redundant information and in training the model quickly and efficiently. Image classification experiments are performed on two texture datasets and five remote sensing datasets. The results show that the ULBP statistical texture features complement the deep features, and the new Sinkhorn loss performs better than the commonly used softmax loss. The proposed DBSNet ranks in the top three on the remote sensing datasets compared with other state-of-the-art algorithms.
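The Sinkhorn loss builds on entropy-regularized optimal transport. Purely as an illustration of the underlying Sinkhorn iteration (not the paper's loss, whose cost matrix and regularization weight are not given here), a minimal NumPy version is:

```python
import numpy as np

def sinkhorn_distance(a, b, cost, eps=0.1, n_iter=200):
    """Entropy-regularized OT distance between histograms a and b."""
    K = np.exp(-cost / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):                 # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]         # transport plan
    return float((P * cost).sum())

n = 4
cost = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
a = np.full(n, 1.0 / n)
d_same = sinkhorn_distance(a, a, cost)      # near zero for identical histograms
b = np.array([0.7, 0.1, 0.1, 0.1])
d_diff = sinkhorn_distance(a, b, cost)      # larger: mass must be transported
```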


Information ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 136
Author(s):  
Bhoomin Tanut ◽  
Panomkhawn Riyamongkol

This article presents a defect detection model for sugarcane plantation images. The objective is to assess defect areas in a sugarcane plantation before the harvesting season; such defect areas are usually caused by storms and weeds. The detection algorithm uses high-resolution sugarcane plantation images and image processing techniques and consists of four processes: (1) data collection, (2) image preprocessing, (3) defect detection model creation, and (4) application program creation. For feature extraction, the researchers used image segmentation and convolution filtering with 13 masks, taking the mean and standard deviation of each filter response, which yields 26 features. The k-nearest neighbors algorithm was selected to develop a model for classifying the sugarcane areas, and a color selection method was chosen to detect defect areas. The results show that the model can recognize and classify the characteristics of objects in sugarcane plantation images with an accuracy of 96.75%. Compared with an expert surveyor's assessment, agreement of 92.95% was obtained. The proposed model can therefore be used as a tool to calculate the percentage of defect areas and to reduce yield evaluation errors in the future.
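The feature recipe (filter with each mask, then take the mean and standard deviation of each response, so 13 masks give 26 features) and the KNN classifier can be sketched as follows; the two masks and all data below are toy examples, not the paper's 13 masks:

```python
import numpy as np

def filter_bank_features(image, masks):
    """Filter the image with each mask and take the mean and standard
    deviation of each response: len(masks) * 2 features per region."""
    feats = []
    for m in masks:
        kh, kw = m.shape
        # 'valid' 2-D filtering via sliding windows
        windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
        resp = (windows * m).sum(axis=(-1, -2))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

def knn_predict(x, X_train, y_train, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

rng = np.random.default_rng(2)
img = rng.random((8, 8))
masks = [np.ones((3, 3)) / 9.0,                       # smoothing mask
         np.array([[-1, 0, 1]] * 3, dtype=float)]     # vertical-edge mask
f = filter_bank_features(img, masks)                  # 2 masks -> 4 features
```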


2020 ◽  
Vol 13 (4) ◽  
pp. 557-571
Author(s):  
Kasthuri Anburajan ◽  
Suruliandi Andavar ◽  
Poongothai Elango

Background: Face annotation is the naming procedure that assigns the correct name to a person appearing in an image. Objective: The main objective of this paper was to compare and evaluate six feature extraction techniques for face annotation on challenging real-world images and to find the feature best suited to the task. Method: The literature shows that the Name Semantic Network (NSN) outperforms other annotation methods on various unconstrained images and ambiguous tags. However, the NSN's performance varies with the feature extraction technique used, so its success depends on that choice. Therefore, in this work, the NSN's performance is evaluated with various feature extraction methods: the Discrete Cosine Transform Local Binary Pattern (DCT-LBP), Discrete Fourier Transform Local Binary Pattern (DFT-LBP), Local Patterns of Gradients (LPOG), Gist, Local Order-constrained Gradient Orientations (LOGO), and Convolutional Neural Network (CNN) deep features. Results: The feature extraction approaches show performance differences across a range of face annotation challenges on the Yahoo, LFW, and IMFDB databases. The experimental results show that the deep feature method achieves a better recognition rate than the texture features: it copes with several variations in how a face is presented in an image and produces better results. Conclusion: The CNN deep feature is the best feature extraction technique, offering enhanced performance for face annotation.
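Several of the compared descriptors (DCT-LBP, DFT-LBP) build on the local binary pattern. A minimal sketch of the basic 3x3 LBP code computation, which is the shared building block rather than any of the exact descriptors above, is:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: compare the 8 neighbours of each
    interior pixel with the centre value and pack the bits into a code 0..255."""
    c = img[1:-1, 1:-1]
    # Neighbours enumerated clockwise from the top-left corner
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],    img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(int) << bit
    return codes

img = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0],
                [7.0, 8.0, 9.0]])
codes = lbp_codes(img)   # single interior pixel, centre value 5
```

Descriptors such as DCT-LBP then apply the transform stage on top of histograms of these codes.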


Author(s):  
Bixin Cai ◽  
Qidong Wang ◽  
Wuwei Chen ◽  
Linfeng Zhao ◽  
Huiran Wang

Vehicle detection plays a crucial role in the decision-making, planning, and control of intelligent vehicles. It is one of the main tasks of environmental perception and an essential part of ensuring driving safety. To capture distinctive vehicle features and improve recognition efficiency, this paper fuses image texture features with LIDAR edge features to detect frontal vehicle targets. First, wavelet analysis and geometric analysis are used to segment the ground and determine the region of interest for vehicle detection. Then, the point cloud of the detected vehicle is projected into the image to locate the ROI. Edge features of the vehicle are then extracted, guided by the maximum gradient direction of the vehicle's rear contour. Furthermore, the Haar texture feature is integrated to identify the vehicle, and a filter is designed according to the spatial distribution of the point cloud to eliminate false targets. Finally, real-vehicle comparison tests verify that the proposed fusion method effectively improves vehicle detection with little additional time cost.
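The projection of detected vehicle points into the image to locate the ROI can be sketched with a pinhole camera model; the intrinsic matrix and points below are invented for illustration, and the paper's calibration details are not given in the abstract:

```python
import numpy as np

def project_to_image(points_cam, K):
    """Pinhole projection of 3-D points (camera frame, z forward) to pixels."""
    z = points_cam[:, 2]
    uv = (K @ (points_cam.T / z)).T      # homogeneous projection, per point
    return uv[:, :2]

def roi_from_points(pixels, margin=5):
    """Axis-aligned ROI around the projected vehicle points."""
    u_min, v_min = pixels.min(axis=0) - margin
    u_max, v_max = pixels.max(axis=0) + margin
    return u_min, v_min, u_max, v_max

K = np.array([[700.0,   0.0, 320.0],     # hypothetical intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
# A toy cluster of vehicle points 10 m ahead, already in the camera frame
pts = np.array([[-1.0, 0.0, 10.0], [1.0, 0.0, 10.0], [0.0, -0.5, 10.0]])
roi = roi_from_points(project_to_image(pts, K))
```

In practice the LIDAR points would first be transformed into the camera frame with the extrinsic calibration before this projection.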


2021 ◽  
pp. 1-1
Author(s):  
Zishu Gao ◽  
Guodong Yang ◽  
En Li ◽  
Zize Liang

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 26138-26146
Author(s):  
Xue Ni ◽  
Huali Wang ◽  
Fan Meng ◽  
Jing Hu ◽  
Changkai Tong

Author(s):  
Zhenying Xu ◽  
Ziqian Wu ◽  
Wei Fan

Defect detection in electroluminescence (EL) images of solar cells is the core step in the production and preparation of solar cell modules to ensure conversion efficiency and long battery service life. However, lacking feature extraction capability for small defects, the traditional single shot multibox detector (SSD) algorithm achieves limited accuracy in EL defect detection. Consequently, an improved SSD algorithm with modified feature fusion within a deep learning framework is proposed to improve the recognition rate of multi-class EL defects. A dataset containing images with four defect types, augmented through rotation, denoising, and binarization, is established. Borrowing the idea of feature pyramid networks, the proposed algorithm greatly improves detection accuracy for small-scale defects. An experimental study on EL defect detection shows the effectiveness of the proposed algorithm, and a comparison study shows that it outperforms other detection methods, such as SIFT, Faster R-CNN, and YOLOv3, in detecting EL defects.
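The feature-pyramid-style fusion borrowed by the improved SSD can be illustrated with a minimal top-down merge (nearest-neighbour upsampling plus lateral addition); this is a generic FPN sketch, not the authors' network, and the channel and spatial sizes are arbitrary:

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(laterals):
    """Top-down pathway: start from the coarsest map, upsample, and add the
    lateral connection at each finer level (feature-pyramid-style fusion)."""
    merged = [laterals[-1]]
    for lat in reversed(laterals[:-1]):
        merged.append(lat + upsample2x(merged[-1]))
    return merged[::-1]   # finest level first

c = 8
laterals = [np.ones((c, 16, 16)),   # fine level
            np.ones((c, 8, 8)),
            np.ones((c, 4, 4))]     # coarse level
pyramid = fpn_merge(laterals)       # finest map now carries coarse context too
```

In a real FPN the laterals come from 1x1 convolutions over backbone stages and each merged map passes through a 3x3 convolution; those learned layers are omitted here.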


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1668
Author(s):  
Zongming Dai ◽  
Kai Hu ◽  
Jie Xie ◽  
Shengyu Shen ◽  
Jie Zheng ◽  
...  

Traditional co-word networks do not discriminate keywords of researcher interest from general keywords, and are therefore often too general to provide knowledge of interest to domain experts. Inspired by recent work that uses an automatic method to identify questions of interest to researchers, such as "problems" and "solutions", we try to answer a similar question, "what sensors can be used for what kind of applications", which is of great interest in sensor-related fields. By generalizing such specific questions as "questions of interest", we built a knowledge network that considers researcher interest, called the bipartite network of interest (BNOI). Different from co-word approaches that use accurate keywords from a list, BNOI uses classification models to find possible entities of interest. A total of nine feature extraction methods, including N-grams, Word2Vec, and BERT, were used to extract features to train the classification models, including naïve Bayes (NB), support vector machines (SVM), and logistic regression (LR). In addition, a multi-feature fusion strategy and a voting principle (VP) method are applied to combine the capabilities of the features and the classification models. Features were extracted and the models trained on the abstract text of 350 remote sensing articles. The experimental results show that, after removing biased words and using ten-fold cross-validation, the F-measures for "sensors" and "applications" are 93.2% and 85.5%, respectively. This demonstrates that researcher questions of interest can be better answered by the constructed BNOI based on classification results, compared with the traditional co-word network approach.
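The voting principle (VP) that assembles the three classifiers can be sketched as a simple majority vote; the labels and per-classifier predictions below are hypothetical stand-ins for NB, SVM, and LR outputs:

```python
from collections import Counter

def vote(predictions):
    """Voting principle: majority label across classifier predictions,
    ties broken by first-seen order (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-sample predictions from the three trained models
nb  = ["sensor", "application", "other"]
svm = ["sensor", "other",       "other"]
lr  = ["sensor", "application", "application"]
fused = [vote(p) for p in zip(nb, svm, lr)]
```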

