Deep Convolutional Neural Networks Object Detector for Real-Time Waste Identification

2020 ◽  
Vol 10 (20) ◽  
pp. 7301
Author(s):  
Daniel Octavian Melinte ◽  
Ana-Maria Travediu ◽  
Dan N. Dumitriu

This paper presents extensive research carried out to enhance the performance of convolutional neural network (CNN) object detectors applied to municipal waste identification. In order to obtain an accurate and fast CNN architecture, several types of Single Shot Detectors (SSD) and Regional Proposal Networks (RPN) have been fine-tuned on the TrashNet database. The network with the best performance is deployed on an autonomous robot system, which is able to collect detected waste from the ground based on the CNN feedback. For this type of application, precise identification of municipal waste objects is very important. In order to develop a straightforward pipeline for waste detection, the paper focuses on boosting the performance of pre-trained CNN object detectors, in terms of precision, generalization, and detection speed, using different loss optimization methods, database augmentation, and asynchronous threading at inference time. The pipeline consists of data augmentation at training time, followed by CNN feature extraction and box predictor modules for localization and classification at different feature map sizes; the trained model is then exported for inference. The experiments revealed better performance than all other object detectors trained on TrashNet or other garbage datasets, with an accuracy of 97.63% for SSD and 95.76% for Faster R-CNN, respectively. In order to find the optimal upper and lower bounds of our learning rate, between which the network is actually learning, we trained our model for several epochs, updating the learning rate after each epoch, starting from 1 × 10−10 and increasing it until reaching 1 × 10−1.
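The learning-rate range test described in the last sentence can be sketched as follows. This is a minimal illustration, assuming the rate is multiplied by a fixed factor after each epoch so that it sweeps exponentially from 1 × 10−10 to 1 × 10−1; the function name and epoch count are hypothetical, not from the paper:

```python
def lr_schedule(lr_min=1e-10, lr_max=1e-1, epochs=10):
    """Return one learning rate per epoch, spaced exponentially
    between lr_min and lr_max (inclusive at both ends)."""
    factor = (lr_max / lr_min) ** (1.0 / (epochs - 1))
    return [lr_min * factor ** i for i in range(epochs)]

rates = lr_schedule()
```

Plotting the training loss of each epoch against these rates would then reveal the interval over which the network is actually learning.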

Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2399 ◽  
Author(s):  
Cunwei Sun ◽  
Yuxin Yang ◽  
Chang Wen ◽  
Kai Xie ◽  
Fangqing Wen

The convolutional neural network (CNN) has made great strides in the area of voiceprint recognition, but it needs a huge number of data samples to train a deep neural network. In practice, it is too difficult to get a large number of training samples, and the network cannot reach a good convergence state on a limited dataset. To solve this problem, a new method using a deep migration hybrid model is put forward, which makes it easier to realize voiceprint recognition for small samples. Firstly, it uses transfer learning to transfer the network trained on a large voiceprint dataset to our limited voiceprint dataset for further training; the fully-connected layers of the pre-trained model are replaced by restricted Boltzmann machine layers. Secondly, data augmentation is adopted to enlarge the voiceprint dataset. Finally, we introduce a fast batch normalization algorithm to speed up network convergence and shorten the training time. Our new voiceprint recognition approach uses the TLCNN-RBM (convolutional neural network mixed restricted Boltzmann machine based on transfer learning) model, a deep migration hybrid model that achieves an average accuracy of over 97%, which is higher than that of either CNN or the TL-CNN network (convolutional neural network based on transfer learning). Thus, an effective method for small-sample voiceprint recognition has been provided.
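The abstract does not specify which augmentation operations were applied, so the sketch below is only a generic illustration of how a small audio dataset can be enlarged: a circular time shift plus low-level Gaussian noise, with all parameter values chosen arbitrarily for the example:

```python
import random

def augment_waveform(samples, shift, noise_scale, rng):
    """Circularly time-shift a waveform and add small Gaussian noise —
    a simple augmentation of the kind used to enlarge small voiceprint
    datasets (operations and parameters here are illustrative)."""
    shifted = samples[shift:] + samples[:shift]
    return [s + rng.gauss(0.0, noise_scale) for s in shifted]

rng = random.Random(0)
augmented = augment_waveform([0.0] * 8, shift=3, noise_scale=0.01, rng=rng)
```

Applying several random shifts and noise draws to each recording multiplies the effective dataset size without new enrollment sessions.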


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2393 ◽  
Author(s):  
Daniel Octavian Melinte ◽  
Luige Vladareanu

The interaction between humans and an NAO robot using deep convolutional neural networks (CNN) is presented in this paper, based on an innovative end-to-end pipeline method that applies two optimized CNNs, one for face recognition (FR) and another for facial expression recognition (FER), in order to obtain real-time inference speed for the entire process. Two different models for FR are considered: one known to be very accurate but with low inference speed (faster region-based convolutional neural network), and one that is not as accurate but has high inference speed (single shot detector convolutional neural network). For emotion recognition, transfer learning and fine-tuning of three CNN models (VGG, Inception V3, and ResNet) have been used. The overall results show that the single shot detector convolutional neural network (SSD CNN) and faster region-based convolutional neural network (Faster R-CNN) models for face detection share almost the same accuracy: 97.8% for Faster R-CNN on PASCAL visual object classes (PASCAL VOC) evaluation metrics and 97.42% for SSD Inception. In terms of FER, ResNet obtained the highest training accuracy (90.14%), while the visual geometry group (VGG) network had 87% accuracy and Inception V3 reached 81%. The results show improvements of over 10% when using two serialized CNNs instead of only the FER CNN, while the recent optimization method called rectified adaptive moment estimation (RAdam) led to better generalization and an accuracy improvement of 3-4% on each emotion recognition CNN.
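The serialized two-CNN idea (run the face detector first, then feed each cropped face to the emotion classifier) can be sketched as below. The stub interfaces `detect_face` and `classify_emotion` are hypothetical placeholders for the SSD/Faster R-CNN detector and the FER network, not the paper's actual APIs:

```python
def serial_fr_fer(image, detect_face, classify_emotion):
    """Chain two models: detect faces, crop each detected region from the
    image (a list of pixel rows), then classify the crop's emotion."""
    results = []
    for box in detect_face(image):
        x1, y1, x2, y2 = box
        crop = [row[x1:x2] for row in image[y1:y2]]  # crop to the face box
        results.append((box, classify_emotion(crop)))
    return results
```

Cropping to the detected face before classification is what removes background clutter and drives the reported >10% improvement over running the FER CNN alone.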


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Yao Chen ◽  
Tao Duan ◽  
Changyuan Wang ◽  
Yuanyuan Zhang ◽  
Mo Huang

Ship detection on synthetic aperture radar (SAR) imagery has many valuable applications in both civil and military fields and has received extraordinary attention in recent years. Traditional detection methods are insensitive to multiscale ships and usually time-consuming, resulting in low detection accuracy and limited suitability for real-time processing. To balance accuracy and speed, an end-to-end ship detection method for complex inshore and offshore scenes based on deep convolutional neural networks (CNNs) is proposed in this paper. First, the SAR images are divided into different grids, and the anchor boxes are predefined based on the responsible grids for dense ship prediction. Then, Darknet-53 with residual units is adopted as a backbone to extract features, and a top-down pyramid structure is added for multiscale feature fusion with concatenation. By this means, abundant hierarchical features containing both spatial and semantic information are extracted. Meanwhile, strategies such as soft non-maximum suppression (Soft-NMS), mix-up and mosaic data augmentation, multiscale training, and hybrid optimization are used for performance enhancement. Besides, the model is trained from scratch to avoid the learning objective bias of pretraining. The proposed one-stage method adopts end-to-end inference by a single network, so the detection speed can be guaranteed due to the concise paradigm. Extensive experiments are performed on the public SAR ship detection dataset (SSDD), and the results show that the method can detect both inshore and offshore ships with higher accuracy than other mainstream methods, yielding an average accuracy of 95.52% and a fast detection speed of about 72 frames per second (FPS). Actual Sentinel-1 and Gaofen-3 data are utilized for verification, and the detection results also show the effectiveness and robustness of the method.
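Of the listed enhancement strategies, Soft-NMS is compact enough to sketch in full. The version below is the standard Gaussian-decay variant; the abstract does not state which variant or what `sigma` the authors use, so those are assumptions:

```python
import math

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: instead of discarding boxes that overlap a
    higher-scoring detection, decay their scores smoothly, which helps
    keep densely packed objects such as inshore ships."""
    scores = list(scores)
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        if scores[i] < score_thresh:
            continue
        keep.append(i)
        for j in order:
            scores[j] *= math.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
        order.sort(key=lambda k: -scores[k])
    return keep, scores
```

Unlike hard NMS, a heavily overlapping box is only suppressed once its decayed score falls below `score_thresh`, so two adjacent ships with overlapping boxes can both survive.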


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emre Kiyak ◽  
Gulay Unal

Purpose The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models were developed and compared with each other to prevent collision and to achieve target tracking in autonomous aircraft. Design/methodology/approach First, detection methods were used to follow the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL). Findings The training of DCNN took 9 min 33 s, and its accuracy was calculated as 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s and the accuracy was calculated as 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%. Originality/value Compared to the results in the literature ranging from 89.4% to 95.6%, better results were found in the paper using FNDCNNTL.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Qiao Meng ◽  
Huansheng Song ◽  
Gang Li ◽  
Yu’an Zhang ◽  
Xiangqing Zhang

Nowadays, automatic multi-objective detection remains a challenging problem for autonomous vehicle technologies. In the past decades, deep learning has been demonstrated to be successful for multi-objective detection, for example with the Single Shot Multibox Detector (SSD) model. The current trend is to train deep Convolutional Neural Networks (CNNs) with online autonomous vehicle datasets. However, network performance usually degrades when small objects are detected. Moreover, the existing autonomous vehicle datasets cannot meet the needs of the domestic traffic environment. To improve the detection performance on small objects and ensure the validity of the dataset, we propose a new method. Specifically, the original images are divided into blocks as input to a VGG-16 network, which adds feature-map fusion after the convolutional layers. Moreover, an image pyramid is built to project all the block detection results back to the original object size as accurately as possible. In addition to improving the detection method, a new autonomous driving vehicle dataset is created, in which the object categories and labelling criteria are defined, and a data augmentation method is proposed. The experimental results on the new dataset show that the performance of the proposed method is greatly improved, especially for small-object detection in large images. Moreover, the proposed method is adaptive to complex climatic conditions and contributes significantly to autonomous vehicle perception and planning.
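A minimal sketch of the block-division step and of projecting block-level detections back to original image coordinates; the function names, the block size, and the absence of overlap handling are illustrative simplifications, not the paper's exact procedure:

```python
def split_into_blocks(width, height, block, overlap=0):
    """Tile the image plane into blocks; returns the (x0, y0) offset of
    each block's top-left corner."""
    step = block - overlap
    return [(x, y) for y in range(0, height, step) for x in range(0, width, step)]

def project_to_original(box, offset):
    """Map a detection box from block-local coordinates back to the
    coordinates of the full original image."""
    x0, y0 = offset
    x1, y1, x2, y2 = box
    return (x1 + x0, y1 + y0, x2 + x0, y2 + y0)
```

Running the detector on small blocks keeps small objects large relative to the network input, and the offset shift above merges the per-block results back into one image-level prediction set.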


2020 ◽  
Vol 12 (1) ◽  
pp. 1
Author(s):  
Vivian Alfionita Sutama ◽  
Suryo Adhi Wibowo ◽  
Rissa Rahmania

Nowadays, Artificial Intelligence is one of the fastest-developing technologies, especially in Augmented Reality (AR). AR is a technology that connects the real world and the virtual one in real time, allowing the user to interact directly and displaying the result in 3D. AR technology has two methods: marker-based AR and markerless AR. However, marker-based AR needs a high-performance object detection system as the interaction tool between the user and the device. The single shot multibox detector (SSD) is an object detection algorithm with fast learning computation and good performance. This method is affected by parameters such as the number of epochs, learning rate, batch size, number of training steps, etc. However, creating a good system takes a long process: collecting a dataset, labelling it, then training and testing models to obtain the best performance. In this experiment, we analyze the SSD method in AR technology using the Inception architecture as a pre-trained Convolutional Neural Network (CNN), and then apply transfer learning to minimize the training time. The configuration that is varied is the number of training steps. The best accuracy obtained in this experiment is 70.17%. The best-performing model is then used as the object detection model for the marker in AR technology.


2021 ◽  
Vol 26 (1) ◽  
Author(s):  
Maksym Oleksandrovych Yaroshenko ◽  
Anton Yuriiovych Varfolomieiev ◽  
Petro Oleksiyovych Yaganov

Due to the high price of thermal imaging sensors, methods for high-quality upscaling of infrared images acquired from low-resolution, inexpensive IR cameras are in high demand. One of the most promising branches of such methods is based on super-resolution (SR) techniques that exploit convolutional neural networks (CNN), which have developed rapidly over the last decade. During the review of existing solutions, we found that most super-resolution neural networks are intended for upscaling images in the visible spectrum band. Among them, the BCLSR network has proven to be one of the best solutions ensuring a very high quality of image upscaling, so we selected this network for further investigation in the current paper. Namely, in this research, we trained and tested the BCLSR network for upscaling of far-infrared (FIR) images for the first time. Moreover, inspired by the BCLSR architecture, we propose our own neural network, which differs from BCLSR in that its recursive and recurrent layers are replaced by series-connected Residual blocks and parallel-connected Inception-like blocks, respectively. During the tests, we found that the suggested modifications almost double the network inference speed and even improve the upscaling quality by 0.063 dB compared to the basic BCLSR implementation. The networks were trained and tested using the CVC-14 dataset, which contains FIR images acquired at night. We used data augmentation, randomly dividing the dataset images into 100×100 pixel patches and subsequently applying random brightness, contrast, and mirroring to the obtained patches. The training procedure was performed in a single cycle, with one increase and one decrease of the learning rate, and used the same parameters for the proposed and the BCLSR networks; we employed the Adam optimizer for training both networks.
Although the proposed model has more parameters (2.7 M) than BCLSR (0.6 M), both networks can be considered small ones and thus can be used in applications for conventional personal computers as well as in embedded solutions. Further research can focus on improving the proposed network architecture by introducing new types of layers, as well as on modifying the hyperparameters of the layers used. The upscaling quality can also be increased by using other loss functions and by changing the learning-rate schedule.
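The described augmentation (random 100×100 patches with random brightness, contrast, and mirroring) might look roughly like the sketch below on a grayscale image stored as a list of rows; the jitter ranges are assumptions for illustration, not the authors' values:

```python
import random

def random_patch(img, size, rng):
    """Crop a random size×size patch from a 2-D grayscale image and apply
    random mirroring, contrast, and brightness jitter."""
    h, w = len(img), len(img[0])
    y = rng.randrange(h - size + 1)
    x = rng.randrange(w - size + 1)
    patch = [row[x:x + size] for row in img[y:y + size]]
    if rng.random() < 0.5:               # random horizontal mirroring
        patch = [row[::-1] for row in patch]
    gain = rng.uniform(0.9, 1.1)         # contrast jitter (illustrative range)
    bias = rng.uniform(-0.05, 0.05)      # brightness jitter (illustrative range)
    return [[p * gain + bias for p in row] for row in patch]
```

Drawing many such patches per source frame is what lets a modest FIR dataset like CVC-14 train a super-resolution network without overfitting.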


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Seungmin Han ◽  
Seokju Oh ◽  
Jongpil Jeong

Bearings are one of the most important parts of a rotating machine. Bearing failure can lead to mechanical failure, financial loss, and even personal injury. In recent years, various deep learning techniques have been used to diagnose bearing faults in rotating machines. However, deep learning suffers from a data imbalance problem because it requires huge amounts of data; to address this, we used data augmentation techniques. In addition, the Convolutional Neural Network (CNN), one of the deep learning models, is capable of performing feature learning without prior knowledge. However, since conventional CNN-based fault diagnosis can only extract single-scale features, not only may useful information be lost, but domain-shift problems may also occur. In this paper, we propose a Multiscale Convolutional Neural Network (MSCNN) to extract more powerful and discriminative features from raw signals. MSCNN can learn more powerful feature representations than a conventional CNN through multiscale convolution operations while reducing the number of parameters and the training time. The proposed model achieved better results than 2D-CNN and 1D-CNN baselines, validating its effectiveness.
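The idea of multiscale convolution, parallel filters of several widths whose outputs are combined, can be illustrated with a toy 1-D example; the uniform mean-filter kernels below stand in for learned filters and are not the paper's architecture:

```python
def conv1d_valid(signal, kernel):
    """Valid-mode 1-D convolution (no padding) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Filter the raw signal at several widths in parallel and concatenate
    the outputs — a toy analogue of MSCNN's multiscale branches."""
    feats = []
    for k in kernel_sizes:
        # A uniform mean filter of width k; in MSCNN these would be learned.
        feats.extend(conv1d_valid(signal, [1.0 / k] * k))
    return feats
```

Wide kernels capture slow trends while narrow ones keep sharp transients, so the concatenated vector preserves information a single-scale filter bank would lose.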


Author(s):  
Ramaprasad Poojary ◽  
Roma Raina ◽  
Amit Kumar Mondal

During the last few years, deep learning has achieved remarkable results in the field of machine learning when used for computer vision tasks. Among its many architectures, the deep neural network-based architecture known as the convolutional neural network has recently been widely used for image detection and classification. Although it is a great tool for computer vision tasks, it demands a large amount of training data to yield high performance. In this paper, a data augmentation method is proposed to overcome the challenges posed by a lack of sufficient training data. To analyze the effect of data augmentation, the proposed method uses two convolutional neural network architectures. To minimize training time without compromising accuracy, the models are built by fine-tuning the pre-trained networks VGG16 and ResNet50. Loss functions and accuracies are used to evaluate the performance of the models. The proposed models are constructed using the Keras deep learning framework and trained on a custom dataset created from the Kaggle Cat vs Dog database. Experimental results showed that both models achieved better test accuracy when data augmentation is employed, and the model constructed using ResNet50 outperformed the VGG16-based model with a test accuracy of 90% with data augmentation and 82% without data augmentation.
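Fine-tuning a pre-trained network amounts to freezing the learned feature extractor and training only the new classification head. The toy below captures that idea with a frozen scalar "feature" per sample and a single trainable head weight fitted by gradient descent; it is an analogy for intuition, not the Keras/VGG16/ResNet50 setup used in the paper:

```python
def finetune_head(features, labels, lr=0.1, epochs=50):
    """Fit one head weight w on frozen, pre-extracted features by
    minimizing mean squared error — the base 'network' that produced
    the features never changes, mirroring frozen VGG16/ResNet50 layers."""
    w = 0.0
    n = len(features)
    for _ in range(epochs):
        grad = sum((w * f - y) * f for f, y in zip(features, labels)) / n
        w -= lr * grad
    return w
```

Because only the head's parameters are updated, training is fast and needs far less data, which is exactly why fine-tuning pairs well with the small custom dataset described above.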


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4720
Author(s):  
Yujia Zhang ◽  
Lai-Man Po ◽  
Jingjing Xiong ◽  
Yasar Abbas Ur REHMAN ◽  
Kwok-Wai Cheung

Human action recognition methods in videos based on deep convolutional neural networks usually use random cropping or its variants for data augmentation. However, this traditional data augmentation approach may generate many non-informative samples (video patches covering only a small part of the foreground or only the background) that are not related to a specific action. These samples can be regarded as noisy samples with incorrect labels, which reduces overall action recognition performance. In this paper, we attempt to mitigate the impact of noisy samples by proposing an Auto-augmented Siamese Neural Network (ASNet). In this framework, we propose backpropagating salient patches and randomly cropped samples in the same iteration to perform gradient compensation, alleviating the adverse gradient effects of non-informative samples. Salient patches refer to samples containing critical information for human action recognition. The generation of salient patches is formulated as a Markov decision process, and a reinforcement learning agent called SPA (Salient Patch Agent) is introduced to extract patches in a weakly supervised manner without extra labels. Extensive experiments were conducted on two well-known datasets, UCF-101 and HMDB-51, to verify the effectiveness of the proposed SPA and ASNet.
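A simple way to see why random crops can be non-informative is to measure how much of a crop actually overlaps the action's foreground region. The heuristic below is an illustrative proxy for that notion, not the paper's learned SPA agent:

```python
def crop_informativeness(crop, fg_box):
    """Fraction of the crop's area covered by the foreground bounding box
    (both given as (x1, y1, x2, y2)) — near 0 means a background-only,
    likely non-informative patch."""
    ix1, iy1 = max(crop[0], fg_box[0]), max(crop[1], fg_box[1])
    ix2, iy2 = min(crop[2], fg_box[2]), min(crop[3], fg_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    crop_area = (crop[2] - crop[0]) * (crop[3] - crop[1])
    return inter / crop_area
```

Random cropping ignores this quantity entirely, which is precisely the gap that backpropagating salient patches alongside random crops is meant to compensate for.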

