Comparison of deep learning models in terms of multiple object detection on satellite images

Author(s):
Ferdi Doğan
Ibrahim Turkoğlu

Images obtained by remote sensing contain important data about the ground surface, and detecting objects on the ground surface in these images is an important problem. Deep learning models are known to give good results in object detection studies, but their relative strengths are not well established. It should therefore be clarified which model is superior for object detection and which model should be preferred in future studies. This study aimed to reveal these differences by comparing the performance of deep learning models in detecting multiple objects. Eleven deep learning models frequently encountered in the literature were applied to detect objects of 14 classes in the DOTA dataset. A total of 49,053 objects in 888 images were used to train the AlexNet, VGG16, VGG19, GoogLeNet, SqueezeNet, ResNet18, ResNet50, ResNet101, InceptionResNetV2, InceptionV3, and DenseNet201 models. After training, 13,772 objects of 14 classes in 277 images were used for testing with R-CNN, one of the standard object detection methods. The performance of each model on the 14 classes was measured using Average Precision (AP) and Mean Average Precision (mAP). Performance differed between models within individual classes, and the best-performing model varied from class to class. Across the 14 classes, the highest average mAP was obtained by VGG16 with 24.64, while the lowest was obtained by InceptionResNetV2 with 11.78. This article demonstrates the practical performance of deep learning models in detecting multiple objects and is intended as a resource for researchers working on this subject.
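For readers unfamiliar with the evaluation protocol, the following is a minimal sketch of how per-class AP and the mAP reported above are typically computed from precision-recall curves; the function names and example values are illustrative, not taken from the paper.

```python
# Minimal sketch: per-class Average Precision (AP) from a precision-recall curve,
# and mean Average Precision (mAP) as the average over all classes.
import numpy as np

def average_precision(recall, precision):
    """All-point interpolated AP from recall/precision arrays sorted by confidence."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically decreasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the areas where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(per_class_curves):
    """mAP = mean of per-class AP values."""
    aps = [average_precision(r, p) for r, p in per_class_curves.values()]
    return sum(aps) / len(aps)

# Example with two hypothetical classes:
curves = {
    "plane": (np.array([0.1, 0.4, 0.7]), np.array([1.0, 0.8, 0.6])),
    "ship":  (np.array([0.2, 0.5]),      np.array([0.9, 0.7])),
}
print(mean_average_precision(curves))
```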

2019
Vol 8 (3)
pp. 4894-4900

This paper uses the deep learning model Faster R-CNN to detect and track objects in images. Two backbone networks, ResNet-101 and VGG-16, are tested on a self-created dataset and the PASCAL VOC dataset. The intersection over union (IoU) technique is used for object tracking, and the impacts of batch size, number of iterations, and learning rate are analysed. The paper finds that ResNet-101 outperforms VGG-16 significantly, by 13% on test data, reinforcing that deeper networks are better at feature extraction and generalization. IoU is able to track multiple objects and can identify the loss of a track. The processing speed is found to be 5 frames per second. The study has implications for many computer vision applications; for example, deep learning-based object detection and tracking can either augment the capability of LiDARs and sensors or become an alternative to them in self-driving vehicles.
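As an illustration of the tracking criterion described above, here is a minimal sketch of an IoU computation between bounding boxes; the box format and the 0.5 threshold are assumptions, not values reported in the paper.

```python
# Minimal sketch of the IoU computation used to associate detections across frames.
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A track is considered continued when IoU with the previous-frame box exceeds
# a threshold; otherwise the track is treated as lost.
prev_box, new_box = (10, 10, 50, 60), (12, 14, 52, 64)
print("same object" if iou(prev_box, new_box) > 0.5 else "track lost")
```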


2021
Author(s):
Amandip Sangha
Mohammad Rizvi

Abstract

Importance: State-of-the-art performance is achieved with a deep learning object detection model for acne detection. There is little current research on object detection in dermatology, and on acne in particular; as such, this work is early in the field.

Objective: Train an object detection model on a publicly available dataset of acne photos.

Design, Setting, and Participants: A deep learning model is trained with cross-validation on a dataset of facial acne photos.

Main Outcomes and Measures: Object detection models for detecting acne in a single-class (acne) and a multi-class (four severity levels) setting. We train and evaluate the models using standard metrics such as mean average precision (mAP). We then manually evaluate the model predictions on the test set and calculate accuracy in terms of precision, recall, F1, and true and false positive and negative detections.

Results: We achieve state-of-the-art mAP@0.5 values of 37.97 for the single-class acne detection task and 26.50 for the 4-class acne detection task. Moreover, our manual evaluation shows that the single-class detection model performs well on the validation set, achieving a true positive rate of 93.59%, precision of 96.45%, and recall of 94.73%.

Conclusions and Relevance: We are able to train a high-accuracy acne detection model using only a small, publicly available dataset of facial acne. Transfer learning on the pre-trained deep learning model yields good accuracy and a high degree of transferability to patient-submitted photographs. We also note that training standard-architecture object detection models has given significantly better accuracy than the more intricate and bespoke neural network architectures in the existing research literature.

Key Points

Question: Can deep learning-based acne detection models trained on a small dataset of publicly available photos of patients with acne achieve high prediction accuracy?

Findings: We find that it is possible to train a reasonably good object detection model on a small, annotated dataset of acne photos using standard deep learning architectures.

Meaning: Deep learning-based object detection models for acne detection can be a useful decision support tool for dermatologists treating acne patients in a digital clinical practice. They can be particularly useful for monitoring the evolution of the acne disease state over prolonged periods during follow-ups, as the model predictions give a quantifiable and comparable output for photographs over time. This is particularly helpful in teledermatological consultations, as a prediction model can be integrated into remote patient-doctor communication.
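As a companion to the manual evaluation described above, the following sketch shows how precision, recall, and F1 follow from counts of true and false detections; the counts used are invented for illustration.

```python
# Minimal sketch of the manual-evaluation metrics: precision, recall and F1
# computed from counts of true/false positives and false negatives.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical detection counts on a validation set:
p, r, f1 = precision_recall_f1(tp=146, fp=5, fn=8)
print(f"precision={p:.2%} recall={r:.2%} F1={f1:.2%}")
```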


Electronics
2020
Vol 9 (6)
pp. 886
Author(s):
Zhixian Yang
Ruixia Dong
Hao Xu
Jinan Gu

Object-detection methods based on deep learning play an important role in achieving machine automation. In order to achieve fast and accurate autonomous detection of stacked electronic components, an instance segmentation method based on an improved Mask R-CNN algorithm was proposed. By optimizing the feature extraction network, the performance of Mask R-CNN was improved. A dataset of electronic components containing 1200 images (992 × 744 pixels) covering four types of components was developed. Experiments on this dataset showed that the proposed model was faster, more lightweight, and more accurate: it ran at twice the speed of Mask R-CNN, was 0.35 times its size, and improved average precision (AP) by about two points.
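The paper summarized above does not publish its optimized feature extraction network, so the sketch below only illustrates the general mechanism of pairing Mask R-CNN with a lighter backbone (MobileNetV2 here, via torchvision); the backbone choice, anchor sizes, and class count are assumptions for illustration.

```python
# Illustrative sketch: Mask R-CNN with a lightweight MobileNetV2 feature extractor.
import torch
import torchvision
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # MaskRCNN needs to know the feature depth

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
box_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
mask_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=14, sampling_ratio=2)

# 4 component classes + background (torchvision convention)
model = MaskRCNN(backbone, num_classes=5,
                 rpn_anchor_generator=anchor_generator,
                 box_roi_pool=box_roi_pool,
                 mask_roi_pool=mask_roi_pool)

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 744, 992)])  # one 992 x 744 RGB image
print(out[0].keys())  # boxes, labels, scores, masks
```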


Author(s):
M. N. Favorskaya
L. C. Jain

Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the field of convolutional neural networks (CNNs), and many original solutions using CNNs have been proposed for salient object detection and even event detection.

Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis based on human eye tracking and digital image processing.

Results: The survey reflects recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic detection of salient events in long videos has become possible with recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. The article also presents a short description of public image and video datasets with annotated salient objects or events, as well as the metrics commonly used to evaluate results.

Practical relevance: This survey contributes to the study of rapidly developing deep learning methods for saliency detection in images and videos.


2019
Vol 9 (22)
pp. 4871
Author(s):
Quan Liu
Chen Feng
Zida Song
Joseph Louis
Jian Zhou

Earthmoving is an integral civil engineering operation of significance, and tracking its productivity requires statistics on the loads moved by dump trucks. Since current truck-load statistics methods are laborious, costly, and limited in application, this paper presents the framework of a novel, automated, non-contact field earthmoving quantity statistics (FEQS) approach for projects with large earthmoving demands that use uniform and uncovered trucks. The proposed FEQS framework utilizes field surveillance systems and adopts vision-based deep learning for full/empty-load truck classification as its core task. Since convolutional neural networks (CNNs) and their transfer learning (TL) variants are popular vision-based deep learning models that come in many forms, a comparison study was conducted to test the feasibility of the framework's core task and to evaluate the performance of different deep learning models in implementation. The comparison study involved 12 CNN or CNN-TL models in full/empty-load truck classification, and the results revealed that while several provided satisfactory performance, the fine-tuned VGG16 (VGG16-FineTune) performed best. This demonstrated the feasibility of the core task of the proposed FEQS framework. Further discussion suggests that CNN-TL models are more practical than CNN prototypes trained from scratch, and that models adopting different TL methods have advantages in either accuracy or speed depending on the task.
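As a rough illustration of the CNN transfer-learning setup described above, the following Keras sketch fine-tunes a pre-trained VGG16 for binary full/empty-load classification; the unfrozen layer count, head layers, and hyperparameters are assumptions rather than the paper's configuration.

```python
# Hedged sketch: VGG16 transfer learning for full/empty-load truck classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = True            # "fine-tune": unfreeze the convolutional base
for layer in base.layers[:-4]:   # ...but keep the earliest layers frozen
    layer.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # full-load vs. empty-load
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```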


Sensors
2021
Vol 21 (8)
pp. 2611
Author(s):
Andrew Shepley
Greg Falzon
Christopher Lawson
Paul Meek
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used for this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to do so are a key barrier to the accessibility of this technology for ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation-editing functionalities minimize the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multi-class and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own devices, customised for their applications, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily perform object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) perform supervised auto-annotation of images for further training, with the ability to edit annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.


2021
pp. 102177
Author(s):
Zhendong Wang
Yaodi Liu
Daojing He
Sammy Chan

Electronics
2021
Vol 10 (4)
pp. 517
Author(s):
Seong-heum Kim
Youngbae Hwang

Owing to recent advancements in deep learning methods and relevant databases, it is becoming increasingly easy to recognize 3D objects using only RGB images from single viewpoints. This study investigates the major breakthroughs and current progress in deep learning-based monocular 3D object detection. For relatively low-cost data acquisition systems without depth sensors or cameras at multiple viewpoints, we first consider existing databases of 2D RGB photos and their relevant attributes. Based on this simple sensor modality for practical applications, deep learning-based monocular 3D object detection methods that overcome significant research challenges are categorized and summarized. We present the key concepts and detailed descriptions of representative single-stage and multiple-stage detection solutions. In addition, we discuss the effectiveness of the detection models on their baseline benchmarks. Finally, we explore several directions for future research on monocular 3D object detection.


Author(s):
Hsu-Heng Yen
Ping-Yu Wu
Pei-Yuan Su
Chia-Wei Yang
Yang-Yuan Chen
...

Abstract

Purpose: Management of peptic ulcer bleeding is clinically challenging, and accurate characterization of the bleeding during endoscopy is key for endoscopic therapy. This study aimed to assess whether a deep learning model can aid in the classification of bleeding peptic ulcer disease.

Methods: Endoscopic still images of patients (n = 1694) with peptic ulcer bleeding over the last 5 years were retrieved and reviewed. Overall, 2289 images were collected for deep learning model training, and 449 images were reserved for the performance test. Two expert endoscopists classified the images into different classes based on their appearance. Four deep learning models, MobileNet V2, VGG16, Inception V4, and ResNet50, pre-trained on ImageNet, were built with the established convolutional neural network algorithm. The endoscopists and the trained deep learning model were compared to evaluate the model's performance on the dataset of 449 test images.

Results: The results first present the performance comparison of the four deep learning models, of which MobileNet V2 performed best and was therefore chosen for comparison with the diagnostic results of one senior and one novice endoscopist. The sensitivity and specificity were acceptable for the prediction of "normal" lesions in both the 3-class and 4-class classifications: 94.83% and 92.36%, respectively, for the 3-class category, and 95.40% and 92.70%, respectively, for the 4-class category. On the test dataset, the interobserver agreement between the model and the senior endoscopist was moderate to substantial. The accuracy of the model in determining whether endoscopic therapy, and in particular high-risk endoscopic therapy, was required was higher than that of the novice endoscopist.

Conclusions: In this study, the deep learning model performed better than inexperienced endoscopists. Further improvement of the model may aid clinical decision-making in practice, especially for trainee endoscopists.
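For context on the reported sensitivity and specificity figures, this sketch shows how per-class values can be derived from a multiclass confusion matrix; the class labels and predictions below are invented for illustration.

```python
# Minimal sketch: per-class sensitivity and specificity from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = ["normal", "ulcer", "bleeding", "normal", "ulcer", "normal"]
y_pred = ["normal", "ulcer", "ulcer",    "normal", "ulcer", "bleeding"]
labels = ["normal", "ulcer", "bleeding"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
for i, name in enumerate(labels):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    print(f"{name}: sensitivity={sensitivity:.2%} specificity={specificity:.2%}")
```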


2021
Vol 11 (13)
pp. 6085
Author(s):
Jesus Salido
Vanesa Lomas
Jesus Ruiz-Santaquiteria
Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for 24/7 monitoring and real-time response requires automatic detection methods. This paper presents a study based on three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It investigates the reduction of false positives achieved by including, in the training dataset, pose information associated with the way the handguns are held. The best average precision (96.36%) and recall (97.23%) were obtained by RetinaNet fine-tuned with an unfrozen ResNet-50 backbone, while the best precision (96.23%) and F1 score (93.36%) were obtained by YOLOv3 trained on the dataset that included pose information. This last architecture was the only one that showed a consistent improvement, around 2%, when pose information was expressly considered during training.
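As a hedged illustration of one of the configurations described above, the sketch below fine-tunes a torchvision RetinaNet with an unfrozen ResNet-50 backbone for a single handgun class; the framework, class count, and dummy training step are assumptions, not the authors' exact setup.

```python
# Hedged sketch: RetinaNet with an unfrozen ResNet-50 backbone for handgun detection.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# weights_backbone="DEFAULT" loads ImageNet weights for ResNet-50;
# trainable_backbone_layers=5 leaves all backbone stages unfrozen for fine-tuning.
model = retinanet_resnet50_fpn(weights=None,
                               weights_backbone="DEFAULT",
                               num_classes=2,  # handgun + background (torchvision convention)
                               trainable_backbone_layers=5)

model.train()
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 200.0, 260.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)  # classification and box regression losses
print({k: float(v) for k, v in loss_dict.items()})
```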

