Chassis Assembly Detection and Identification Based on Deep Learning Component Instance Segmentation

Symmetry ◽  
2019 ◽  
Vol 11 (8) ◽  
pp. 1001 ◽  
Author(s):  
Guixiong Liu ◽  
Binyuan He ◽  
Siyuang Liu ◽  
Jian Huang

Chassis assembly quality detection is a necessary step in improving product quality and yield. In recent years, with the continuous expansion of deep learning methods, their application in product quality detection has become increasingly widespread. This paper presents and discusses the limitations and shortcomings of existing quality detection methods and the feasibility of applying deep learning to quality detection. Given the numerous parts and complex component types involved in chassis assembly, a method for chassis assembly detection and identification based on deep learning component instance segmentation is proposed. In the proposed method, assembly quality detection is first performed using the Mask regional convolutional neural network (Mask R-CNN) component instance segmentation method, which reduces the influence of complex illumination conditions and backgrounds on detection. Next, a standard dictionary of chassis assembly is built and connected with Mask R-CNN in a cascading way. The component mask is obtained from the detection result, and the component category and assembly quality information are extracted to realize chassis assembly detection and identification. To evaluate the proposed method, datasets were created from an industrial assembly chassis, and the method proved effective even on these limited datasets. The experimental results indicate that the accuracy of the proposed method reaches 93.7%. Overall, the deep learning method achieves complete automation of chassis assembly detection.
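The cascading step described above — checking instance-segmentation output against a standard assembly dictionary — can be sketched in plain Python. The component names, expected counts, and the comparison rule here are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: compare detected component instances against a
# standard dictionary of the expected chassis assembly.
# Component names and counts are hypothetical.

def check_assembly(detections, standard):
    """detections: list of category names output by the segmentation model.
    standard: dict mapping category -> expected count.
    Returns (missing, extra) component counts."""
    counts = {}
    for cat in detections:
        counts[cat] = counts.get(cat, 0) + 1
    missing = {c: n - counts.get(c, 0)
               for c, n in standard.items() if counts.get(c, 0) < n}
    extra = {c: n - standard.get(c, 0)
             for c, n in counts.items() if n > standard.get(c, 0)}
    return missing, extra

standard = {"fan": 1, "screw": 4, "cable": 2}          # expected assembly
detections = ["fan", "screw", "screw", "screw", "cable", "cable", "cable"]
missing, extra = check_assembly(detections, standard)
print(missing)  # {'screw': 1}
print(extra)    # {'cable': 1}
```

Any mismatch between the two dictionaries flags the chassis as incorrectly assembled, which matches the pass/fail nature of the quality check described in the abstract.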

Author(s):  
Xi Li ◽  
Ting Wang ◽  
Shexiong Wang

How to make effective use of log data without paying heavily to store it has drawn researchers' attention. In this paper, we propose a pattern-based deep learning method that extracts features from log datasets and facilitates their further use at a reasonable storage cost. By taking advantage of neural networks and combining statistical features with experts' knowledge, the method achieves satisfactory results in experiments on several specified datasets and on the routine systems that our group maintains. On the test datasets, the model outperforms its competitors by at least 5% in accuracy. More importantly, its schema unveils a new way to mingle experts' experience with a statistical log parser.
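The core idea of pattern-based log compaction — abstracting raw lines into templates and keeping template statistics instead of every line — can be sketched as follows. The masking rules and example log lines are illustrative assumptions; the paper's actual parser combines statistical features with expert rules:

```python
import re
from collections import Counter

# Hedged sketch: mask variable fields (numbers, hex addresses) so that
# lines produced by the same log statement collapse onto one template,
# then keep template frequencies as compact features.

def to_template(line):
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)       # hex addresses
    line = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", line)      # numbers, IPs
    return line

logs = [
    "connection from 10.0.0.1 port 5432",
    "connection from 10.0.0.7 port 5432",
    "worker 3 finished in 17 ms",
]
features = Counter(to_template(l) for l in logs)
print(features["connection from <NUM> port <NUM>"])  # 2
```

Storing the template counter instead of the raw lines is what makes the "reasonable expense of storage" claim plausible: the feature table grows with the number of distinct log patterns, not with log volume.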


mSystems ◽  
2020 ◽  
Vol 5 (1) ◽  
Author(s):  
Hao Jiang ◽  
Sen Li ◽  
Weihuang Liu ◽  
Hongjin Zheng ◽  
Jinghao Liu ◽  
...  

ABSTRACT Analyzing cells and tissues under a microscope is a cornerstone of biological research and clinical practice. However, the challenge faced by conventional microscopy image analysis is the fact that cell recognition through a microscope is still time-consuming and lacks both accuracy and consistency. Despite enormous progress in computer-aided microscopy cell detection, especially with recent deep-learning-based techniques, it is still difficult to translate an established method directly to a new cell target without extensive modification. The morphology of a cell is complex and highly varied, but it has long been known that cells show a nonrandom geometrical order in which a distinct and defined shape can be formed in a given type of cell. Thus, we have proposed a geometry-aware deep-learning method, geometric-feature spectrum ExtremeNet (GFS-ExtremeNet), for cell detection. GFS-ExtremeNet is built on the framework of ExtremeNet with a collection of geometric features, resulting in the accurate detection of any given cell target. We obtained promising detection results with microscopic images of publicly available mammalian cell nuclei and newly collected protozoa, whose cell shapes and sizes varied. Even more striking, our method was able to detect unicellular parasites within red blood cells without misdiagnosis of each other. IMPORTANCE Automated diagnostic microscopy powered by deep learning is useful, particularly in rural areas. However, there is no general method for object detection of different cells. In this study, we developed GFS-ExtremeNet, a geometry-aware deep-learning method which is based on the detection of four extreme key points for each object (topmost, bottommost, rightmost, and leftmost) and its center point. A postprocessing step, namely, adjacency spectrum, was employed to measure whether the distances between the key points were below a certain threshold for a particular cell candidate. 
Our newly proposed geometry-aware deep-learning method outperformed other conventional object detection methods and could be applied to any type of cell with a certain geometrical order. Our GFS-ExtremeNet approach opens a new window for the development of an automated cell detection system.
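The adjacency-spectrum post-processing step described above — accepting a candidate only when its extreme key points lie close enough to its predicted center — can be sketched in a few lines. The threshold and coordinates are illustrative assumptions, not values from the paper:

```python
import math

# Hedged sketch: a candidate is kept as a plausible cell only if each
# extreme key point (topmost, bottommost, leftmost, rightmost) lies
# within a distance threshold of the predicted center point.

def plausible_cell(extremes, center, max_dist):
    """extremes: four (x, y) key points; center: (x, y) center point."""
    return all(math.dist(p, center) <= max_dist for p in extremes)

extremes = [(50, 10), (50, 90), (10, 50), (90, 50)]  # top, bottom, left, right
print(plausible_cell(extremes, (50, 50), max_dist=45))   # True
print(plausible_cell(extremes, (120, 50), max_dist=45))  # False
```

This is what lets the method exploit the "nonrandom geometrical order" of cells: key points that do not cohere around a common center are rejected rather than grouped into a spurious detection.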


2020 ◽  
Vol 178 ◽  
pp. 105736
Author(s):  
Isaac Pérez-Borrero ◽  
Diego Marín-Santos ◽  
Manuel E. Gegúndez-Arias ◽  
Estefanía Cortés-Ancos

Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 886 ◽  
Author(s):  
Zhixian Yang ◽  
Ruixia Dong ◽  
Hao Xu ◽  
Jinan Gu

Object-detection methods based on deep learning play an important role in achieving machine automation. To achieve fast and accurate autonomous detection of stacked electronic components, an instance segmentation method based on an improved Mask R-CNN algorithm is proposed. By optimizing the feature extraction network, the performance of Mask R-CNN was improved. A dataset of electronic components containing 1200 images (992 × 744 pixels) covering four types of components was developed. Experiments on the dataset showed that the model is faster, more lightweight, and more accurate: its speed is roughly twice that of Mask R-CNN, its size is 0.35 times that of Mask R-CNN, and its average precision (AP) improves by about two points over Mask R-CNN.
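The average-precision comparison above ultimately rests on mask overlap between predictions and ground truth. A minimal intersection-over-union (IoU) for two binary masks can be sketched in plain Python; real evaluation pipelines use the COCO toolkit, and the masks here are illustrative:

```python
# Hedged sketch: IoU between two binary masks given as 2D lists of 0/1.
# A prediction typically counts as correct for AP when its IoU with a
# ground-truth mask exceeds a threshold such as 0.5.

def mask_iou(a, b):
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 0.0

m1 = [[1, 1, 0],
      [1, 1, 0]]
m2 = [[0, 1, 1],
      [0, 1, 1]]
print(mask_iou(m1, m2))  # 2 / 6 ≈ 0.333
```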


Robotics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 54
Author(s):  
Omar Al-Buraiki ◽  
Wenbo Wu ◽  
Pierre Payeur

Task allocation for specialized unmanned robotic agents is addressed in this paper. Based on the assumptions that each individual robotic agent possesses specialized capabilities and that the targets representing the tasks to be performed in the surrounding environment impose specific requirements, the proposed approach computes task-agent fitting probabilities to efficiently match the available robotic agents with the detected targets. The framework is supported by a deep learning method with object instance segmentation capability, Mask R-CNN, which is adapted to provide target object recognition and localization estimates from vision sensors mounted on the robotic agents. Experimental validation is conducted for indoor search-and-rescue (SAR) scenarios, and the results demonstrate the reliability and efficiency of the proposed approach.
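The matching step described above can be sketched as follows. The fitting probabilities, target and agent names, and the greedy one-to-one rule are all illustrative assumptions; the paper defines its own probability model over agent capabilities and target requirements:

```python
# Hedged sketch: greedily pair each detected target with the free agent
# that has the highest task-agent fitting probability.

def assign(fit):
    """fit[target][agent] = fitting probability. Returns {target: agent}."""
    pairs = sorted(((p, t, a) for t, row in fit.items()
                    for a, p in row.items()), reverse=True)
    used_t, used_a, out = set(), set(), {}
    for p, t, a in pairs:
        if t not in used_t and a not in used_a:
            out[t] = a
            used_t.add(t)
            used_a.add(a)
    return out

fit = {"fire": {"uav": 0.9, "ugv": 0.4},
       "door": {"uav": 0.2, "ugv": 0.8}}
print(assign(fit))  # {'fire': 'uav', 'door': 'ugv'}
```

Greedy matching is only one possible design choice; an optimal assignment over the same probability table could instead be computed with the Hungarian algorithm.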


2019 ◽  
Vol 5 ◽  
pp. e222
Author(s):  
Matthew Z. Wong ◽  
Kiyohito Kunii ◽  
Max Baylis ◽  
Wai Hong Ong ◽  
Pavel Kroupa ◽  
...  

The availability of large image data sets has been a crucial factor in the success of deep learning-based classification and detection methods. Yet, while data sets for everyday objects are widely available, data for specific industrial use-cases (e.g., identifying packaged products in a warehouse) remain scarce. In such cases, the data sets have to be created from scratch, creating a crucial bottleneck for the deployment of deep learning techniques in industrial applications. We present work carried out in collaboration with a leading UK online supermarket, with the aim of creating a computer vision system capable of detecting and identifying unique supermarket products in a warehouse setting. To this end, we demonstrate a framework for using data synthesis to create an end-to-end deep learning pipeline, beginning with real-world objects and culminating in a trained model. Our method is based on the generation of a synthetic dataset from 3D models obtained by applying photogrammetry techniques to real-world objects. Using 100K synthetic images for 10 classes, an InceptionV3 convolutional neural network was trained, which achieved an accuracy of 96% on a separately acquired test set of real supermarket product images. The image generation process supports automatic pixel annotation, which eliminates the prohibitively expensive manual annotation typically required for detection tasks. Based on this readily available data, a one-stage RetinaNet detector was trained on the synthetic, annotated images to produce a detector that can accurately localize and classify the products in real time.
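Automatic pixel annotation is what makes the synthetic pipeline cheap: the renderer already knows each object's pixel mask, so detection labels such as bounding boxes can be derived mechanically instead of drawn by hand. A minimal sketch, with an illustrative mask:

```python
# Hedged sketch: derive a bounding box from a rendered binary mask.
# mask: 2D list of 0/1 pixel values from the renderer.

def mask_to_bbox(mask):
    """Returns (x_min, y_min, x_max, y_max), or None for an empty mask."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not ys:
        return None
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return min(xs), min(ys), max(xs), max(ys)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0]]
print(mask_to_bbox(mask))  # (1, 1, 2, 2)
```

Because both the masks and the class labels come for free from the rendering step, the detector's training annotations require no human labeling at all, which is the bottleneck the abstract says is eliminated.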


2021 ◽  
Author(s):  
Hye-Won Hwang ◽  
Jun-Ho Moon ◽  
Min-Gyu Kim ◽  
Richard E. Donatelli ◽  
Shin-Jae Lee

ABSTRACT Objectives To compare an automated cephalometric analysis based on the latest deep learning method for automatically identifying cephalometric landmarks (AI) with previously published AIs, according to the test style of the worldwide AI challenges at the International Symposium on Biomedical Imaging conferences held by the Institute of Electrical and Electronics Engineers (IEEE ISBI). Materials and Methods The latest AI was developed using a total of 1983 cephalograms as training data. In the training procedure, a modification of a contemporary deep learning method, the YOLO version 3 algorithm, was applied. Test data consisted of 200 cephalograms. To follow the same test style as the AI challenges at IEEE ISBI, a human examiner manually identified the 19 IEEE ISBI-designated cephalometric landmarks in both the training and test data sets, and these were used as references for comparison. Then, the latest AI and another human examiner independently detected the same landmarks in the test data set. The test results were compared using the measures that appeared at IEEE ISBI: the success detection rate (SDR) and the success classification rate (SCR). Results The SDR of the latest AI within the 2-mm range was 75.5% and its SCR was 81.5%, greater than those of any previous AI. Compared with the human examiners, the AI showed a superior success classification rate in some cephalometric analysis measures. Conclusions This latest AI appears to outperform previous AI methods and to provide cephalometric analysis comparable to that of human examiners.
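The success detection rate used in the IEEE ISBI challenges is simply the fraction of predicted landmarks whose radial error falls within a threshold such as 2 mm. A minimal sketch, with illustrative coordinates in millimetres:

```python
import math

# Hedged sketch of the success detection rate (SDR): the percentage of
# predicted landmarks within a radial-error threshold of the reference
# (human-identified) landmarks.

def sdr(pred, ref, threshold_mm=2.0):
    hits = sum(math.dist(p, r) <= threshold_mm for p, r in zip(pred, ref))
    return 100.0 * hits / len(ref)

ref  = [(10.0, 10.0), (20.0, 15.0), (30.0, 30.0), (40.0, 5.0)]
pred = [(10.5, 10.5), (20.0, 18.5), (29.0, 30.0), (41.9, 5.0)]
print(sdr(pred, ref))  # 75.0 — three of four landmarks within 2 mm
```

In the actual challenge, this percentage is computed over all 19 designated landmarks across the whole test set and is also reported at wider thresholds (e.g., 2.5, 3, and 4 mm).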

