Deep Feature Representation with Stacked Sparse Auto-Encoder and Convolutional Neural Network for Hyperspectral Imaging-Based Detection of Cucumber Defects

2018 ◽  
Vol 61 (2) ◽  
pp. 425-436 ◽  
Author(s):  
Ziyi Liu ◽  
Yong He ◽  
Haiyan Cen ◽  
Renfu Lu

Abstract. It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research aimed to develop a novel classification method employing deep feature representation with a stacked sparse auto-encoder (SSAE), and with the SSAE combined with a convolutional neural network (CNN-SSAE), for hyperspectral imaging-based defect detection of pickling cucumbers. Hyperspectral images of normal and defective pickling cucumbers were acquired using a hyperspectral imaging system running at two conveyor speeds of 85 and 165 mm s-1. An SSAE model was developed to learn the feature representation from the preprocessed data and to perform five-class (normal, watery, split/hollow, shrivel, and surface defect) classification. To handle the more complicated task of distinguishing different types of surface defects (i.e., dirt/sand and gouge/rot classes) in six-class classification, a CNN-SSAE system was developed. The results showed that the CNN-SSAE system improved classification performance over the SSAE, with overall accuracies of 91.1% and 88.3% for six-class classification at the two conveyor speeds. Additionally, the average running time of the CNN-SSAE system for each sample was less than 14 ms, showing considerable potential for application in an automated on-line inspection system for cucumber sorting and grading. Keywords: Convolutional neural network, Defect detection, Hyperspectral imaging, Pickling cucumber, Representation learning, Stacked sparse auto-encoder.
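The SSAE described above stacks auto-encoder layers whose hidden activations are driven toward a low target activation by a sparsity penalty. A minimal single-layer sketch in NumPy may help fix the idea; all names, dimensions, and hyper-parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_sparsity(rho, rho_hat):
    # KL divergence between target sparsity rho and mean hidden activation rho_hat
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction loss of one sparse auto-encoder layer plus a
    KL-divergence penalty pushing mean hidden activation toward rho."""
    H = sigmoid(X @ W1 + b1)           # encoder: hidden feature representation
    X_hat = sigmoid(H @ W2 + b2)       # decoder: reconstruction of the input
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)
    return recon + beta * kl_sparsity(rho, rho_hat), H

rng = np.random.default_rng(0)
X = rng.random((32, 100))              # 32 spectra, 100 bands (illustrative)
W1 = 0.1 * rng.standard_normal((100, 25)); b1 = np.zeros(25)
W2 = 0.1 * rng.standard_normal((25, 100)); b2 = np.zeros(100)
loss, H = sparse_ae_loss(X, W1, b1, W2, b2)
```

Stacking means training such a layer, then using its hidden features `H` as the input to the next layer; a softmax classifier on the top-layer features performs the five-class prediction.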

2019 ◽  
Vol 16 (3) ◽  
pp. 172988141984299
Author(s):  
Sara Freitas ◽  
Hugo Silva ◽  
José Miguel Almeida ◽  
Eduardo Silva

This work addresses a hyperspectral imaging system for maritime surveillance using unmanned aerial vehicles. The objective was to detect the presence of vessels using purely spatial and spectral hyperspectral information. To accomplish this objective, we implemented a novel 3-D convolutional neural network approach and compared it against two implementations of other state-of-the-art methods: spectral angle mapper and hyperspectral derivative anomaly detection. The hyperspectral imaging system was developed during the SUNNY project, and the methods were tested using data collected during the project's final demonstration at São Jacinto Air Force Base, Aveiro (Portugal). The results show that the 3-D CNN improves recall, depending on the class, by between 27% and more than 40% compared with the spectral angle mapper and hyperspectral derivative anomaly detection approaches. This demonstrates that 3-D CNN deep learning techniques combining spectral and spatial information can improve target classification accuracy in hyperspectral imaging-based unmanned aerial vehicle maritime surveillance applications.
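The key property of a 3-D CNN on hyperspectral data is that its kernels span both spectral and spatial axes, so one filter response mixes band and neighborhood information. A naive, valid-mode sketch in NumPy (loop-based for clarity; the cube shape and kernel size are made-up examples, not the paper's configuration):

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Valid-mode 3-D cross-correlation of a hyperspectral cube
    (bands, height, width) with a single kernel spanning all three axes."""
    B, H, W = cube.shape
    kb, kh, kw = kernel.shape
    out = np.zeros((B - kb + 1, H - kh + 1, W - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Each output voxel mixes spectral and spatial neighborhoods
                out[b, i, j] = np.sum(cube[b:b+kb, i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(1)
cube = rng.random((30, 16, 16))        # 30 spectral bands, 16x16 pixel patch
kernel = rng.random((5, 3, 3))         # 5 bands deep, 3x3 spatially
feat = conv3d_valid(cube, kernel)
```

By contrast, spectral angle mapper treats each pixel's spectrum independently, which is why jointly spectral-spatial filters can lift recall on spatially structured targets such as vessels.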


2018 ◽  
Vol 19 (12) ◽  
pp. 3732 ◽  
Author(s):  
Ping Xuan ◽  
Yihua Dong ◽  
Yahong Guo ◽  
Tiangang Zhang ◽  
Yong Liu

Identification of disease-related microRNAs (disease miRNAs) is helpful for understanding and exploring the etiology and pathogenesis of diseases. Most recent methods predict disease miRNAs by integrating the similarities and associations of miRNAs and diseases. However, these methods fail to learn the deep features of the miRNA similarities, the disease similarities, and the miRNA–disease associations. We propose a dual convolutional neural network-based method for predicting candidate disease miRNAs, referred to as CNNDMP. CNNDMP not only exploits the similarities and associations of miRNAs and diseases but also captures the topology structures of the miRNA and disease networks. An embedding layer is constructed by combining the biological premises about the miRNA–disease associations. A new framework based on the dual convolutional neural network is presented for extracting the deep feature representation of associations. The left part of the framework integrates the original similarities and associations of miRNAs and diseases. Novel miRNA and disease similarities that capture the topology structures are obtained by random walks on the miRNA and disease networks, and their deep features are learned by the right part of the framework. CNNDMP achieves superior prediction performance compared with several state-of-the-art methods in cross-validation. Case studies on breast cancer, colorectal cancer, and lung cancer further demonstrate CNNDMP's ability to discover potential disease miRNAs.
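The topology-aware similarities fed to the right branch come from random walks on the similarity networks. A common formulation is random walk with restart, sketched below in NumPy; the toy adjacency matrix and restart probability are illustrative assumptions, not values from the paper:

```python
import numpy as np

def random_walk_with_restart(A, restart=0.5, tol=1e-8, max_iter=1000):
    """Steady-state visiting probabilities for walkers started at each node
    of a similarity network. Column i of the result is a topology-aware
    similarity profile for node i."""
    # Column-normalize the similarity matrix into a transition matrix
    col_sums = A.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    M = A / col_sums
    n = A.shape[0]
    P = np.eye(n)                      # restart distributions: one per node
    Q = np.eye(n)
    for _ in range(max_iter):
        Q_next = (1 - restart) * M @ Q + restart * P
        if np.abs(Q_next - Q).max() < tol:
            return Q_next
        Q = Q_next
    return Q

A = np.array([[0., 1., 1., 0.],        # toy 4-node similarity network
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
S = random_walk_with_restart(A)
```

Nodes connected through many short paths end up with high mutual visiting probability even without a direct edge, which is exactly the topological signal the original similarity matrix lacks.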


2020 ◽  
Vol 10 (23) ◽  
pp. 8718
Author(s):  
Zhi-Hao Chen ◽  
Jyh-Ching Juang

To ensure safety in aircraft flight, we used deep learning methods for non-destructive examination with multiple defect detection paradigms in X-ray image inspection. A fast region-based convolutional neural network (Fast R-CNN)-driven model was used to augment and improve the existing automated non-destructive testing (NDT) diagnosis. Within the context of X-ray screening, the limited number and insufficient variety of X-ray aeronautics engine defect data samples pose a further problem for the accuracy of training models tackling multiple detections. To overcome this issue, we employed a transfer learning paradigm tackling both single and multiple detection. Overall, the results achieved more than 90% accuracy with the aeronautics engine radiographic testing inspection system net (AE-RTISNet) retrained on eight types of defect detection. The Caffe framework was used to perform network tracking detection over multiple Fast R-CNNs. The AE-RTISNet provided the best results compared with more traditional multiple Fast R-CNN approaches and was simple to translate to C++ code and install on the Jetson™ TX2 embedded computer. Using the lightning memory-mapped database (LMDB) format, all input images were 640 × 480 pixels. The results achieved a 0.9 mean average precision (mAP) on eight types of material defect classification problems and required approximately 100 microseconds.
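The mAP figure quoted above is the mean, over defect classes, of the per-class average precision computed from score-ranked detections. A minimal sketch of one common AP definition (precision averaged at each true-positive rank); the scores and labels are fabricated examples, and real object-detection mAP additionally matches boxes by IoU:

```python
def average_precision(scores, labels):
    """AP for one defect class: mean of the precision values observed
    at each true positive in the score-ranked detection list."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    precisions = []
    for rank, i in enumerate(order, start=1):
        if labels[i]:                  # this ranked detection is correct
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / max(tp, 1)

scores = [0.9, 0.8, 0.7, 0.6, 0.5]     # detector confidences (illustrative)
labels = [1, 0, 1, 1, 0]               # 1 = true defect, 0 = false alarm
ap = average_precision(scores, labels)
```

mAP is then simply the mean of `average_precision` over the eight defect classes.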


2020 ◽  
Vol 10 (10) ◽  
pp. 3621
Author(s):  
Jiabin Jiang ◽  
Pin Cao ◽  
Zichen Lu ◽  
Weimin Lou ◽  
Yongying Yang

Defect detection based on machine vision and machine learning techniques has drawn much attention in recent years. Deep learning is well suited to such segmentation and detection tasks and has become a promising research area. Surface quality inspection is essential in the manufacturing of mobile phone back glass (MPBG). Different types of defects are produced because of imperfections in the manufacturing technique. Unlike general transparent glass, screen printing glass has entirely different reflection and scattering characteristics, which means the traditional dark-field imaging system is not suitable for this task. Meanwhile, the imaging system requires high resolution, since the minimum defect size can be 0.005 mm². Based on the imaging characteristics of screen printing glass, this paper proposes a coaxial bright-field (CBF) imaging system and a low-angle bright-field (LABF) imaging system, in which 8K line-scan complementary metal oxide semiconductor (CMOS) cameras capture images with a resolution of 16,000 × 8092. The CBF system is applied to weak-scratch and discoloration defects, while the LABF system is applied to dent defects. A symmetric convolutional neural network composed of encoder and decoder structures, based on U-net, is proposed; it produces a semantic segmentation with the same size as the original input image. More than 10,000 original images were captured, and more than 30,000 defective and non-defective images were manually annotated in the glass surface defect dataset (GSDD). Verified by experiments, the results showed that the average precision reaches more than 91% and the average recall rate reaches more than 95%. The method is well suited to the surface defect inspection of screen-printed mobile phone back glass.
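Because the network outputs a segmentation mask the same size as the input, the precision and recall figures above are naturally computed pixel-wise against the annotated masks. A minimal sketch with fabricated toy masks (not GSDD data):

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision and recall of a binary defect mask
    against a ground-truth annotation of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # defect pixels found
    fp = np.logical_and(pred, ~truth).sum()   # clean pixels flagged
    fn = np.logical_and(~pred, truth).sum()   # defect pixels missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

truth = np.zeros((8, 8), dtype=int); truth[2:5, 2:5] = 1   # 9 defect pixels
pred = np.zeros((8, 8), dtype=int); pred[2:5, 2:6] = 1     # 12 flagged pixels
p, r = precision_recall(pred, truth)
```

Averaging these two numbers over a held-out set of annotated images yields the average precision and average recall reported for the GSDD experiments.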

