Analysis of the Possibilities of Tire-Defect Inspection Based on Unsupervised Learning and Deep Learning

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7073
Author(s):  
Ivan Kuric ◽  
Jaromír Klarák ◽  
Milan Sága ◽  
Miroslav Císar ◽  
Adrián Hajdučík ◽  
...  

At present, inspection systems process visual data captured by cameras, with deep learning approaches applied to detect defects. Defect detection results usually have an accuracy higher than 94%. Real-life applications, however, are not very common. In this paper, we describe the development of a tire inspection system for the tire industry. We provide methods for processing tire sidewall data obtained from a camera and a laser sensor. The captured data comprise visual and geometric data characterizing the tire surface, providing a real representation of the captured tire sidewall. We use an unfolding process, that is, a polar transform, to further process the camera-obtained data. The principles and automation of the designed polar transform, based on polynomial regression (i.e., supervised learning), are presented. Based on the data from the laser sensor, the detection of abnormalities is performed using an unsupervised clustering method, followed by the classification of defects using the VGG-16 neural network. The inspection system aims to detect both trained and untrained abnormalities (defects), in contrast to approaches that rely only on supervised learning methods.
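The unfolding step the abstract describes can be illustrated with a short sketch. This is a generic polar-to-rectangular unwrap in NumPy, not the authors' implementation: the function name, the fixed annulus radii, and the nearest-neighbour sampling are all illustrative assumptions.

```python
import numpy as np

def unfold_sidewall(img, center, r_inner, r_outer, n_theta=720, n_r=64):
    """Unfold an annular region (e.g., a tire sidewall) into a rectangular
    strip by sampling the image on a polar grid (nearest-neighbour)."""
    h, w = img.shape[:2]
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(center[0] + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip(np.round(center[1] + rr * np.sin(tt)).astype(int), 0, h - 1)
    return img[ys, xs]  # rows: radius, columns: angle

# toy example on a synthetic 200x200 image
img = np.arange(200 * 200, dtype=np.float32).reshape(200, 200)
strip = unfold_sidewall(img, center=(100, 100), r_inner=30, r_outer=90)
print(strip.shape)  # (64, 720)
```

In the paper itself the transform's centre and radii are found automatically via polynomial regression; here they are hard-coded for brevity.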

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5039
Author(s):  
Tae-Hyun Kim ◽  
Hye-Rin Kim ◽  
Yeong-Jun Cho

In this study, we present a framework for product quality inspection based on deep learning techniques. First, we categorize several deep learning models that can be applied to product inspection systems, and we explain the steps for building a deep-learning-based inspection system in detail. Second, we address connection schemes that efficiently link deep learning models to product inspection systems. Finally, we propose an effective method that can maintain and enhance a product inspection system in line with the improvement goals of existing product inspection systems. Owing to these methods, the proposed system exhibits good maintainability and stability. All the proposed methods are integrated into a unified framework, and we provide detailed explanations of each. To verify the effectiveness of the proposed system, we compare and analyze the performance of the methods in various test scenarios. We expect that our study will provide useful guidelines to readers who wish to implement deep-learning-based systems for product inspection.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7531
Author(s):  
Jaromír Klarák ◽  
Ivan Kuric ◽  
Ivan Zajačko ◽  
Vladimír Bulej ◽  
Vladimír Tlach ◽  
...  

Inspection systems are currently an evolving field in industry. Their main goal is to provide a picture of the quality of intermediates and products in the production process. The most widespread sensory system is camera equipment. This article describes the implementation of camera devices for checking the position of the upper on the shoe last. The next part of the article analyses the application of laser sensors to the same task. The results point to clear advantages of laser sensors in inspecting the placement of the upper on the shoe last. The proposed method defines the resolution of the laser scanners according to the type of scanned surface: the point-cloud resolution ranged from 0.16 to 0.5 mm per point, based on equations fitted by polynomial regression to specific points at locations defined in this article. Next, two inspection systems are described, one of which involves further development in the field of automation and Industry 4.0 and has high potential for future development. The main aim of this work is to analyse sensory systems for inspection tasks and their possibilities for further work, mainly in terms of the resolution and quality of the obtained data, for instance, the dependence of the achieved resolution on the complexity of the scanned surface.
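The quoted point-cloud resolution (0.16 to 0.5 mm per point) is simply the spacing between neighbouring scanned points, which depends on the surface geometry. A minimal sketch of measuring that spacing from one laser-line profile follows; the profile itself is a synthetic stand-in, not the authors' data.

```python
import numpy as np

def mean_point_spacing(profile):
    """Mean Euclidean spacing between consecutive points of one
    laser-line profile given as (x, z) coordinates in millimetres."""
    d = np.diff(profile, axis=0)
    return float(np.mean(np.hypot(d[:, 0], d[:, 1])))

# hypothetical profile: 100 points spread over ~30 mm of gently curved surface
x = np.linspace(0.0, 30.0, 100)
z = 0.5 * np.sin(x / 5.0)  # mild surface curvature
spacing = mean_point_spacing(np.column_stack([x, z]))
print(round(spacing, 3))  # roughly 0.3 mm per point
```

On a steeply sloped or complex surface the same scanner yields larger point spacing, which is the resolution dependence the article analyses.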


2021 ◽  
pp. 249-252
Author(s):  
Shahana Parveen ◽  
Nisheena V Iqbal

Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. Several efforts have been made to enhance dexterous hand-prosthesis control by impaired individuals. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses offer natural control for only a few movements. This paper reviews various works on deep learning approaches to the control of prosthetic hands with EMG signals and compares their accuracy.


Author(s):  
Saddam Bekhet ◽  
Abdullah M. Alghamdi ◽  
Islam F. Taj-Eddin

Human gender recognition is an essential demographic tool, reflected in forensic science, surveillance systems and targeted marketing applications. This research has traditionally been driven by standard face images and hand-crafted features. This approach has achieved good results; however, the reliability of the facial images strongly affects the robustness of the extracted features, where any small change in the query facial image can change the results. Moreover, the performance of current techniques in unconstrained environments is still inefficient, especially when contrasted with recent breakthroughs in other computer vision research. This paper introduces a novel technique for human gender recognition from non-standard selfie images using deep learning approaches. Selfie photos are uncontrolled partial or full-frontal body images that are usually taken by people themselves in real-life environments. As far as we know, this is the first paper of its kind to identify gender from selfie photos using a deep learning approach. Experimental results on the selfie dataset emphasize the proposed technique's effectiveness in recognizing gender from such images, with 89% accuracy. The performance is further consolidated by testing on numerous benchmark datasets that are widely used in the field, namely Adience, LFW, FERET, NIVE, Caltech WebFaces and CAS-PEAL-R1.


Author(s):  
Mohamed Nadif ◽  
François Role

Abstract Biomedical scientific literature is growing at a very rapid pace, which makes it increasingly difficult for human experts to spot the most relevant results hidden in the papers. Automated information extraction tools based on text mining techniques are therefore needed to assist them in this task. In the last few years, deep neural network-based techniques have significantly contributed to advancing the state of the art in this research area. Although the contribution made to this progress by supervised methods is relatively well known, this is less so for other kinds of learning, namely unsupervised and self-supervised learning. Unsupervised learning does not require the cost of creating labels, which is very useful in the exploratory stages of a biomedical study, where agile techniques are needed to rapidly explore many paths. In particular, clustering techniques applied to biomedical text mining allow researchers to gather large sets of documents into more manageable groups, and deep learning techniques have made it possible to produce new clustering-friendly representations of the data. Self-supervised learning, on the other hand, is a kind of supervised learning where the labels do not have to be manually created by humans but are automatically derived from relations found in the input texts. In combination with innovative network architectures (e.g. transformer-based architectures), self-supervised techniques have made it possible to design increasingly effective vector-based word representations (word embeddings). We show in this survey how word representations obtained in this way have proven to interact successfully with common supervised modules (e.g. classification networks), to whose performance they greatly contribute.


AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 497-511
Author(s):  
Theiab Alzahrani ◽  
Baidaa Al-Bander ◽  
Waleed Al-Nuaimy

Makeup can disguise facial features, which results in degradation in the performance of many facial-related analysis systems, including face recognition, facial landmark characterisation, aesthetic quantification and automated age estimation methods. Thus, facial makeup is likely to directly affect several real-life applications such as cosmetology and virtual cosmetics recommendation systems, security and access control, and social interaction. In this work, we conduct a comparative study and design automated facial makeup detection systems leveraging multiple learning schemes from a single unconstrained photograph. We have investigated the efficacy of deep learning models for makeup detection, incorporating a transfer learning strategy with semi-supervised learning using labelled and unlabelled data. First, during supervised learning, the VGG16 convolutional neural network, pre-trained on a large dataset, is fine-tuned on makeup-labelled data. Second, two unsupervised learning methods, self-learning and a convolutional auto-encoder, are trained on unlabelled data and then combined with supervised learning in a semi-supervised scheme. Comprehensive experiments and comparative analysis have been conducted on 2479 labelled images and 446 unlabelled images collected from six challenging makeup datasets. The obtained results reveal that the convolutional auto-encoder merged with supervised learning gives the best makeup detection performance, achieving an accuracy of 88.33% and an area under the ROC curve of 95.15%. The promising results obtained from the conducted experiments reflect the efficiency of combining different learning strategies by harnessing labelled and unlabelled data. It would also be advantageous to the beauty industry to develop such computational intelligence methods.
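The self-learning scheme mentioned above can be sketched in a few lines: fit a classifier on the labelled pool, pseudo-label the unlabelled samples it is confident about, and retrain on the enlarged pool. This sketch uses a logistic regression on synthetic features as a stand-in for the paper's fine-tuned VGG16; the threshold and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# synthetic stand-in for image features: two well-separated classes
X_lab = np.vstack([rng.normal(-2, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([rng.normal(-2, 1, (100, 5)), rng.normal(2, 1, (100, 5))])

# step 1: train on the labelled pool only
clf = LogisticRegression().fit(X_lab, y_lab)

# step 2: pseudo-label confident unlabelled samples (self-learning)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.95
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])

# step 3: retrain on labelled + pseudo-labelled data
clf2 = LogisticRegression().fit(X_aug, y_aug)
print(X_aug.shape[0] > X_lab.shape[0])  # True: the training pool grew
```

The auto-encoder variant in the paper instead pre-trains a feature extractor on the unlabelled images; both routes exploit unlabelled data to supplement scarce labels.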


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations substantially reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background/foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.


2019 ◽  
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  

2020 ◽  
Author(s):  
Priyanka Meel ◽  
Farhin Bano ◽  
Dr. Dinesh K. Vishwakarma
