Anomaly Segmentation Based on Depth Image for Quality Inspection Processes in Tire Manufacturing

2021 ◽  
Vol 11 (21) ◽  
pp. 10376
Author(s):  
Dongbeom Ko ◽  
Sungjoo Kang ◽  
Hyunsuk Kim ◽  
Wongok Lee ◽  
Yousuk Bae ◽  
...  

This paper introduces and implements an efficient training method for deep learning–based anomaly area detection in depth images of tires. Depth images with 16-bit integer pixels are used in many fields, such as manufacturing, industry, and medicine, and the advent of the Fourth Industrial Revolution and the growth of deep learning have created demand for deep learning–based problem solving across these fields. Accordingly, various research efforts apply deep learning to detect errors, such as product defects and diseases, in depth images. However, a depth image expressed in grayscale carries limited information compared with a three-channel image that can encode color, shape, and brightness. In addition, in the case of tires, the same type of defect often appears in different sizes and shapes, which makes training difficult. Therefore, this paper applies a four-step process of (1) image input, (2) highlight image generation, (3) image stacking, and (4) image training to a deep learning segmentation model that can detect atypical defect data. Defect detection aims to detect vent spews that occur during tire manufacturing. For experiment and evaluation, we compare the training results of the proposed process against general training. For evaluation, we use intersection over union (IoU), which compares the pixel area where the actual error is located in the depth image with the pixel area of the error inferred by the deep learning network. The experiments confirmed that the proposed methodology improved the mean IoU by more than 7% and the IoU for the vent spew error by more than 10% compared with the general method. In addition, the time required for the mean IoU to stabilize at 60% was reduced by 80%. The experiments and results show that the proposed methodology trains efficiently without losing the information of the original depth data.
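The IoU metric used for evaluation can be sketched as follows: a minimal NumPy illustration of pixel-wise intersection over union between a ground-truth defect mask and a predicted mask (the masks and sizes here are invented, not the paper's data).

```python
import numpy as np

def pixel_iou(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Pixel-wise intersection over union between two boolean masks."""
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    intersection = np.logical_and(gt, pred).sum()
    return intersection / union

# Toy example: two overlapping 4x4 defect masks
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True          # 4 ground-truth pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 2:4] = True        # 4 predicted pixels, 2 of them overlapping
print(pixel_iou(gt, pred))   # → 0.3333... (2 shared pixels / 6 in the union)
```

The mean IoU reported in the abstract would average this quantity over classes and images.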

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 68
Author(s):  
Jiwei Fan ◽  
Xiaogang Yang ◽  
Ruitao Lu ◽  
Xueli Xie ◽  
Weipeng Li

Unmanned aerial vehicles (UAVs) and related technologies have played an active role in the prevention and control of the novel coronavirus at home and abroad, especially in epidemic prevention, surveillance, and elimination. However, existing UAVs serve a single function, have limited processing capacity, and offer poor interaction. To overcome these shortcomings, we designed an intelligent anti-epidemic patrol detection and warning flight system, which integrates UAV autonomous navigation, deep learning, intelligent voice, and other technologies. Based on convolutional neural networks and deep learning, the system provides a crowd density detection method and a face mask detection method that can locate dense crowds. Intelligent voice alarm technology was used to raise alarms for abnormal situations, such as crowd-gathering areas and people without masks, and to disseminate epidemic prevention policies, providing a powerful technical means for preventing epidemics and delaying their spread. To verify the superiority and feasibility of the system, high-precision online analysis was carried out for crowds in the inspection area, and pedestrians' faces were detected on the ground to determine whether they were wearing masks. The experimental results show that the mean absolute error (MAE) of crowd density detection was less than 8.4, and the mean average precision (mAP) of face mask detection was 61.42%. The system provides convenient and accurate evaluation information for decision-makers and meets the requirements of real-time, accurate detection.
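The MAE reported for crowd density detection is simply the mean absolute difference between predicted and ground-truth head counts over the evaluation frames; a minimal sketch (the counts below are invented for illustration):

```python
def crowd_mae(pred_counts, true_counts):
    """Mean absolute error between predicted and ground-truth crowd counts."""
    assert len(pred_counts) == len(true_counts)
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(pred_counts)

# Hypothetical head counts for five patrol frames
pred = [23, 41, 7, 56, 12]
true = [25, 38, 9, 60, 12]
print(crowd_mae(pred, true))  # → 2.2
```

An MAE below 8.4, as the abstract reports, means the estimated count is off by fewer than about eight people per frame on average.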


2019 ◽  
Vol 1 (3) ◽  
pp. 883-903 ◽  
Author(s):  
Daulet Baimukashev ◽  
Alikhan Zhilisbayev ◽  
Askat Kuzdeuov ◽  
Artemiy Oleinikov ◽  
Denis Fadeyev ◽  
...  

Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, robots need the 3D position and orientation of those objects, and the task becomes challenging in a cluttered environment with different types of objects. A popular approach is to use a deep neural network for object recognition; however, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data demands time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box, and automatically labeled all objects with an occlusion rate below 70%. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was trained on 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset of 2,000 samples. The deep object recognizer achieved 40.96% detection accuracy on real depth images and 93.5% on synthetic depth images. Training the model with noise-added synthetic images improved recognition accuracy on real images to 46.3%. Thus, the object detection framework can be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
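The noise-added synthetic images that lifted real-image accuracy to 46.3% can be sketched along these lines: perturb clean synthetic depth with sensor-like noise. The paper does not specify its noise model, so the Gaussian noise level, the random pixel dropout, and the zero-for-missing convention below are assumptions, not the authors' recipe.

```python
import numpy as np

def add_depth_noise(depth, sigma_mm=3.0, dropout_prob=0.01, rng=None):
    """Perturb a synthetic depth image (in millimetres) so it resembles real
    sensor output: Gaussian noise plus random invalid (zero-depth) pixels."""
    rng = np.random.default_rng(rng)
    noisy = depth + rng.normal(0.0, sigma_mm, size=depth.shape)
    dropout = rng.random(depth.shape) < dropout_prob
    noisy[dropout] = 0.0          # many depth sensors report 0 for missing returns
    return np.clip(noisy, 0.0, None)

depth = np.full((480, 640), 500.0)      # flat synthetic scene 0.5 m from the camera
noisy = add_depth_noise(depth, rng=42)  # augmented training sample
```

Augmenting every synthetic sample this way narrows the gap between the clean rendered distribution and the noisy real-sensor distribution the detector sees at test time.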


2021 ◽  
Vol 2083 (3) ◽  
pp. 032088
Author(s):  
Xingxing Li ◽  
Panpan Yin ◽  
Chao Duan ◽  
Ningxing Wang

Abstract With the development of the new energy industry, a large number of silicon wafers need to be tested for production quality in automated lines. Deep learning has brought huge technological improvements to industrial quality inspection: image segmentation based on deep learning can accurately delineate the defects present in a silicon wafer. In this paper, the UNet deep learning network is used to segment hidden cracks in silicon wafers; the network extracts the shallow semantic features of the wafer well. Using 5,000 samples collected on the industrial site as the training set and 1,000 samples as the test set, the segmentation accuracy (IoU) reaches 58.7%.


Author(s):  
Zainab Mushtaq

Abstract: Malware is routinely used for illegal purposes, and new malware variants are discovered every day. Computer vision applied to computer security is one of the most significant research disciplines today, and it has witnessed tremendous growth over the preceding decade due to its efficacy. We applied machine-learning and deep-learning techniques such as logistic regression, ANN, CNN, transfer learning on CNN, and LSTM to arrive at our conclusions, and we report analysis-based results from a range of classification models. InceptionV3 was trained using a transfer learning technique, which yielded reasonable results compared with other methods such as LSTM. The transfer learning approach was about 98.76 percent accurate on the test dataset and about 99.6 percent accurate on the training dataset. Keywords: malware, illegal activity, deep learning, network security
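Applying image models such as InceptionV3 to malware presupposes a binary-to-image step. The abstract does not spell out its pipeline, so the sketch below shows the common convention from vision-based malware classification: raw bytes become rows of a grayscale image, with the row width and zero-padding chosen here purely for illustration.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Reshape a raw byte sequence into a 2D uint8 'image', zero-padding the
    final row, so a CNN can consume the binary like a grayscale picture."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(arr) // width)               # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(arr)] = arr
    return padded.reshape(rows, width)

# Toy "binary": an MZ header followed by 1000 zero bytes
img = bytes_to_image(b"\x4d\x5a" + bytes(1000), width=32)
print(img.shape)  # → (32, 32)
```

Byte patterns characteristic of a malware family then show up as visual texture that a pretrained CNN can discriminate.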


2021 ◽  
Vol 9 (1) ◽  
pp. 20
Author(s):  
Yuki Yamamoto ◽  
Takenao Ohkawa ◽  
Chikara Ohta ◽  
Kenji Oyama ◽  
Ryo Nishide

We are developing a system to estimate body weight using calf depth images taken in a loose barn. For this purpose, depth images should be taken from the side, without calves overlapping and without their backs bent. However, most of the depth images that are taken successively and automatically do not satisfy these conditions, so we need to select only the depth images that match them, while retaining as many images as possible. The existing method assumes that a calf standing sideways and upright in front of the camera is in a suitable pose; however, since such cases rarely occur, few images were selected. This paper proposes a new depth image-selection method that focuses on whether a calf is sideways with an unbent back, regardless of whether the calf is standing still or walking. First, depth images containing only a single calf were extracted, with the calf identified by radio frequency identification (RFID) at the time its depth image was taken. Then, the calf area was extracted by background subtraction and contour detection on the depth image. Finally, to judge which depth images were usable, we detected and evaluated the calf's posture, such as the angle of the calf to the camera and the slope of the dorsal line. We used the mean absolute percentage error (MAPE) to assess the efficiency of our method. While extracting twice as many depth images, our method achieved a MAPE of 12.45%, compared with 13.87% for the existing method. From this result, we confirmed that our method makes body weight estimation more accurate.
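The MAPE used to compare the two selection methods is the mean of the absolute errors expressed as a percentage of each true weight; a minimal sketch with invented calf weights:

```python
def mape(pred_weights, true_weights):
    """Mean absolute percentage error between estimated and true body weights."""
    assert len(pred_weights) == len(true_weights)
    return 100.0 * sum(abs(p - t) / t
                       for p, t in zip(pred_weights, true_weights)) / len(pred_weights)

# Hypothetical calf body weights in kg (estimate vs. scale measurement)
pred = [78.0, 92.5, 104.0]
true = [80.0, 90.0, 100.0]
print(round(mape(pred, true), 2))  # → 3.09
```

A MAPE of 12.45% thus means the weight estimate deviates from the scale measurement by about one eighth of the true weight on average.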


2021 ◽  
Vol 8 (1) ◽  
pp. 1-16
Author(s):  
Steven Anderson ◽  
Ansarullah Lawi

Technological development leading up to Industrial Revolution 4.0 has incentivized manufacturing industries to invest in digital industry, with the aim of increasing capability and efficiency in manufacturing activity. Major manufacturers have begun implementing cyber-physical systems for industrial monitoring and control. Such systems generate large volumes of data, and processing this big data requires machine learning algorithms, which can extract patterns from big data to produce useful information. This study examines whether Indonesia's current network infrastructure and workforce capability can support the implementation of machine learning, especially in large-scale manufacturing, and compares them with countries that have taken a positive stance on implementing machine learning in manufacturing. The conclusion drawn from this research is that Indonesia's current infrastructure and workforce are still unable to fully support the implementation of machine learning technology in the manufacturing industry, and improvements are needed.


2021 ◽  
Author(s):  
Armin Masoumian ◽  
David G.F. Marei ◽  
Saddam Abdulwahab ◽  
Julián Cristiano ◽  
Domenec Puig ◽  
...  

Determining the distance between objects in a scene and the camera sensor from 2D images is feasible by estimating depth images using stereo or 3D cameras. Depth estimation yields relative distances, which must be converted into absolute distances to be useful in practice; however, distance estimation is very challenging with a 2D monocular camera. This paper presents a deep learning framework consisting of two deep networks for depth estimation and object detection from a single image. First, objects in the scene are detected and localized using the You Only Look Once (YOLOv5) network. In parallel, a depth image is estimated using a deep autoencoder network to obtain the relative distances. The YOLO-based object detector was trained with supervised learning, whereas the depth estimation network was trained in a self-supervised manner. The presented distance estimation framework was evaluated on real images of outdoor scenes. The results show that the proposed framework is promising, yielding an accuracy of 96% with an RMSE of 0.203 against the correct absolute distances.
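The abstract does not detail how the YOLO detections and the estimated depth map are fused, so the following is only a plausible sketch: read each detected object's distance as the median depth inside its bounding box, the median being robust to background pixels that leak into the box. All arrays and box coordinates are invented.

```python
import numpy as np

def object_distance(depth_map, box):
    """Estimate an object's distance as the median depth inside its
    (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    return float(np.median(patch))

# Toy depth map: background 30 m away, a 20x20-pixel object at 5 m
depth = np.full((100, 100), 30.0)
depth[40:60, 40:60] = 5.0
print(object_distance(depth, (38, 38, 62, 62)))  # → 5.0
```

With a monocular, self-supervised depth network the map is only relative, so a per-scene scale factor would still be needed to turn these values into absolute metres.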


Author(s):  
Mohamed Sayed Farag ◽  
Mostafa Mohamed Mohie El Din ◽  
Hassan Ahmed Elshenbary

Due to the increase in the number of cars and slow city development, there is a need for smart parking systems. One of the main issues in smart parking systems is classifying parking lot occupancy status, so this paper introduces two classification methods. The first method uses the mean: the color image is converted to grayscale and then to black/white, and if the mean is greater than a given threshold the spot is classified as occupied; otherwise it is empty. This method gave a 90% correct classification rate on the cnrall database, outperforming the AlexNet deep learning method trained and tested on the same database (and the mean method has no training time). The second method is a deep learning neural network consisting of 11 layers, trained and tested on the same database. It gave a 93% correct classification rate when trained and tested on cnrall, outperforming both the AlexNet and mean methods on that database. On the PKLot database, AlexNet and our deep learning network achieved close results (greater than 95%), both outperforming the mean method.
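The first method described above (grayscale, then black/white, then a mean threshold) is simple enough to sketch directly; the threshold values and patch sizes below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def is_occupied(rgb_patch, bw_threshold=128, mean_threshold=0.25):
    """Classify a parking-spot patch: convert to grayscale, binarize, then
    call the spot occupied if the white-pixel fraction exceeds a threshold."""
    gray = rgb_patch.mean(axis=2)     # simple channel-average grayscale
    bw = gray > bw_threshold          # black/white image
    return bool(bw.mean() > mean_threshold)

# Toy patches: dark asphalt vs. the same patch with a bright car body
empty = np.full((20, 20, 3), 40, dtype=np.uint8)
car = empty.copy()
car[4:16, 4:16] = 220
print(is_occupied(empty), is_occupied(car))  # → False True
```

The appeal, as the abstract notes, is that this classifier needs no training at all, yet it remains competitive with AlexNet on the same database.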


2019 ◽  
Author(s):  
Zhou Hang ◽  
Li Shiwei ◽  
Huang Qing ◽  
Liu Shijie ◽  
Quan Tingwei ◽  
...  

Abstract Deep learning technology enables us to acquire high-resolution images from low-resolution images in biological imaging, free from sophisticated optical hardware. However, current methods require a huge number of precisely registered low-resolution (LR) and high-resolution (HR) volume image pairs, a demanding requirement for biological volume imaging. Here, we propose a 3D deep learning network based on a dual generative adversarial network (dual-GAN) framework for recovering HR volume images from LR volume images. Our network avoids learning direct mappings from LR/HR volume image pairs, which would require a precise image registration process, and the cycle-consistent network keeps the predicted HR volume image faithful to its corresponding LR volume image. The proposed method achieves the recovery of 20x/1.0 NA volume images from 5x/0.16 NA volume images collected by light-sheet microscopy. In essence, our method is suitable for other imaging modalities as well.
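The cycle-consistency idea that removes the need for registered pairs can be illustrated schematically: map an LR volume to HR and back, and penalize any difference from the original. The toy "generators" below (nearest-neighbour upsampling and average-pool downsampling) merely stand in for the paper's deep networks, so this is a sketch of the loss, not of the dual-GAN itself.

```python
import numpy as np

def cycle_consistency_loss(lr_volume, lr_to_hr, hr_to_lr):
    """L1 cycle-consistency: LR -> HR -> LR should reproduce the input,
    so no precisely registered LR/HR pairs are required for supervision."""
    reconstructed = hr_to_lr(lr_to_hr(lr_volume))
    return float(np.abs(lr_volume - reconstructed).mean())

# Toy stand-ins for the two generators: 2x nearest-neighbour upsampling along
# each axis, and 2x average-pool downsampling (the real ones are deep networks).
up = lambda v: v.repeat(2, 0).repeat(2, 1).repeat(2, 2)
down = lambda v: v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                           v.shape[2] // 2, 2).mean(axis=(1, 3, 5))

lr = np.random.default_rng(0).random((8, 8, 8))
print(cycle_consistency_loss(lr, up, down))  # → 0.0 (these toys invert exactly)
```

In training, this loss is added to the adversarial losses of the two GANs so the predicted HR volume stays faithful to its LR counterpart.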


2021 ◽  
Vol 261 ◽  
pp. 01021
Author(s):  
Jiwei Li ◽  
Linsheng Li ◽  
Changlu Xu

In the field of defect recognition, deep learning offers stronger generalization and higher accuracy than mainstream machine learning techniques. This paper proposes a deep learning network model that first preprocesses a self-made dataset of 3,600 samples and then feeds them into the constructed convolutional neural network model for training. The resulting model can effectively identify three types of defects on lithium battery pole pieces, with an accuracy of 92%. Compared with the AlexNet architecture, the model proposed in this paper achieves higher accuracy.

