Deep learning for UAV autonomous landing based on self-built image dataset

Author(s):  
Yinbo Xu ◽  
Yongwei Zhang ◽  
Huan Liu ◽  
Xiangke Wang
Data in Brief ◽  
2021 ◽  
pp. 107133

Author(s):  
Deeksha Arya ◽  
Hiroya Maeda ◽  
Sanjay Kumar Ghosh ◽  
Durga Toshniwal ◽  
Yoshihide Sekimoto

Author(s):  
Herman Njoroge Chege

Point 1: Deep learning algorithms are revolutionizing how hypothesis generation, pattern recognition, and prediction occur in the sciences. In the life sciences, particularly biology and its subfields, the use of deep learning is slowly but steadily increasing. However, prototyping and developing tools for practical applications remain the domain of experienced coders. Furthermore, many tools can be costly and difficult to assemble without expertise in artificial intelligence (AI) computing.

Point 2: We built a biological species classifier that leverages existing open-source tools and libraries. We designed the accompanying tutorial for users with basic Python skills and a small but well-curated image dataset. We include annotated code in the form of a Jupyter Notebook that can be adapted to any image dataset, ranging from satellite images to animals to bacteria. The prototype is publicly available and can be adapted for citizen science as well as other applications not envisioned in this paper.

Point 3: We illustrate our approach with a case study of 219 images of three sea star species. We show that with minimal parameter tuning of the AI pipeline we can create a classifier with high accuracy. We include additional approaches for understanding misclassified images and for curating the dataset to increase accuracy.

Point 4: The power of AI approaches is becoming increasingly accessible. We can now readily build and prototype species classifiers that can have a great impact on research requiring species identification and other types of image analysis. Such tools have implications for citizen science, biodiversity monitoring, and a wide range of ecological applications.
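The data-handling side of such a notebook (a small labeled image set, shuffled and split before fine-tuning) can be sketched as follows. This is a minimal illustration with hypothetical file names and labels, not the authors' actual tutorial code:

```python
import random

# Hypothetical curated dataset: 219 images across three sea star classes,
# mirroring the scale of the paper's case study (labels are illustrative).
species = ["pisaster", "patiria", "dermasterias"]
images = [(f"img_{i:03d}.jpg", species[i % 3]) for i in range(219)]

random.seed(42)
random.shuffle(images)

# An 80/20 train/validation split, a common default for small image datasets.
split = int(0.8 * len(images))
train, val = images[:split], images[split:]

print(len(train), len(val))  # 175 44
```

With a split like this in hand, the notebook's remaining steps (augmentation, transfer learning, error inspection) operate on `train` and `val` without further changes to the raw dataset.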


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Ahmed Jawad A. AlBdairi ◽  
Zhu Xiao ◽  
Mohammed Alghaili

Interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying people's ethnicity. In this study, a new deep learning convolutional neural network is designed to create a model that can recognize people's ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for ethnicity recognition, and it will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.


2020 ◽  
Vol 39 (3) ◽  
Author(s):  
Chiraz Ajmi ◽  
Juan Zapata ◽  
José Javier Martínez-Álvarez ◽  
Ginés Doménech ◽  
Ramón Ruiz

2019 ◽  
Vol 1 (3) ◽  
pp. 883-903 ◽  
Author(s):  
Daulet Baimukashev ◽  
Alikhan Zhilisbayev ◽  
Askat Kuzdeuov ◽  
Artemiy Oleinikov ◽  
Denis Fadeyev ◽  
...  

Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, robots need the position and orientation of objects in 3D. The task becomes challenging in a cluttered environment with different types of objects. A popular approach to this problem is to use a deep neural network for object recognition. However, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data requires time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box, and automatically labeled all objects with an occlusion rate below 70%. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was adopted for training on a dataset of 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset consisting of 2000 samples. The deep object recognizer achieved 40.96% detection accuracy on the real depth images and 93.5% on the synthetic depth images. Training the deep learning model with noise-added synthetic images improves the recognition accuracy on real images to 46.3%. The object detection framework can thus be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
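Two of the pipeline's preprocessing steps lend themselves to a short sketch: adding sensor-like noise to synthetic depth images, and filtering labels by occlusion rate. The Gaussian noise model below is our assumption (the abstract does not specify one), and the masks are toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic depth image in metres; the real dataset renders 22 objects
# inside a 0.5 m x 0.5 m x 0.1 m box.
depth = rng.uniform(0.4, 0.5, size=(64, 64)).astype(np.float32)

def add_depth_noise(img, sigma=0.005):
    """Additive Gaussian noise -- a simple stand-in for real sensor noise."""
    return img + rng.normal(0.0, sigma, size=img.shape).astype(np.float32)

def occlusion_rate(visible_mask, full_mask):
    """Fraction of an object's full silhouette hidden by other objects."""
    full = full_mask.sum()
    return 1.0 - visible_mask.sum() / full if full else 1.0

noisy = add_depth_noise(depth)

# Label only objects occluded less than 70%, as in the paper's labeling rule.
full = np.zeros((64, 64), dtype=bool)
full[10:30, 10:30] = True              # 400-pixel full silhouette
visible = full.copy()
visible[10:30, 10:20] = False          # half of it hidden by another object
keep = bool(occlusion_rate(visible, full) < 0.70)
print(keep)  # True: occlusion is 50%
```

Training on `noisy` rather than `depth` is what closes part of the synthetic-to-real gap reported above (40.96% to 46.3%).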


Author(s):  
Mohammad Hanan Bhat

Plant health monitoring has long been a significant field of research. The scope of this work lies in the broad domain of plant pathology, with applications ranging from agricultural production monitoring to forest health monitoring. It deals with IoT-based data collection techniques, pre-processing and post-processing of the image dataset, and disease identification using a deep learning model, thereby providing a multi-modal, end-to-end approach to plant health monitoring. This paper reviews the various methods used for monitoring plant health remotely in a non-invasive manner. A low-cost, end-to-end framework is proposed for monitoring plant health using IoT-based data collection methods and cloud computing as a single point of contact for data storage and processing. The cloud agent gateway connects the devices and collects the data from sensors to ensure a single source of truth. Further, the deep learning computational infrastructure provided by the public cloud is exploited to train on the image dataset and derive the plant health status.


2021 ◽  
Vol 2114 (1) ◽  
pp. 012067
Author(s):  
Ruba R. Nori ◽  
Rabah N. Farhan ◽  
Safaa Hussein Abed

Abstract A novel algorithm for fire detection is introduced: a CNN-based system for localizing fire in real-time applications. Deep learning algorithms achieve very high accuracy on the fire image dataset. YOLO is a deep learning algorithm well suited to detecting and localizing fires in real time. The lack of a larger image dataset forced us to limit the system to a binary classification test. The proposed model was tested on a dataset gathered from the internet. In this article, we build an automated alert system that integrates multiple sensors with state-of-the-art deep learning algorithms, yields few false positives, and provides our prototype robot with reasonable real-time accuracy, so that fire events are tracked and recorded as soon as possible.


2020 ◽  
Vol 10 (14) ◽  
pp. 4913
Author(s):  
Tin Kramberger ◽  
Božidar Potočnik

Currently, there is no publicly available dataset adequate for training Generative Adversarial Networks (GANs) on car images: all available car datasets differ in noise, pose, and zoom levels. Thus, the objective of this work was to create an improved car image dataset better suited for GAN training. To improve the performance of the GAN, we coupled the LSUN and Stanford car datasets. The merged dataset was then pruned to adjust zoom levels and reduce image noise. This process left fewer images available for training, though of higher quality. The pruned dataset was evaluated by training StyleGAN with its original settings. Pruning the combined LSUN and Stanford datasets resulted in 2,067,710 car images with less noise and more consistent zoom levels. Training StyleGAN on the LSUN-Stanford car dataset proved superior to training on the LSUN dataset alone by 3.7%, using the Fréchet Inception Distance (FID) as the metric. The results indicate that the proposed LSUN-Stanford car dataset is more consistent and better suited for training GAN neural networks than other currently available large car datasets.
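The abstract does not spell out the pruning criteria beyond adjusting zoom levels and reducing noise. One plausible zoom filter, sketched here with purely illustrative thresholds, keeps only images whose car bounding box occupies a medium fraction of the frame:

```python
# Hypothetical zoom filter: keep images whose car bounding box fills a
# "medium zoom" fraction of the frame. Thresholds are illustrative only,
# not the paper's actual pruning criteria.
def keep_image(img_w, img_h, box_w, box_h, lo=0.2, hi=0.8):
    ratio = (box_w * box_h) / (img_w * img_h)
    return lo <= ratio <= hi

candidates = [
    (512, 384, 100, 60),   # distant car, ratio ~ 0.03 -> pruned
    (512, 384, 300, 200),  # medium zoom, ratio ~ 0.31 -> kept
    (512, 384, 500, 380),  # extreme close-up, ratio ~ 0.97 -> pruned
]
kept = [c for c in candidates if keep_image(*c)]
print(len(kept))  # 1
```

A rule of this shape explains why pruning shrinks the dataset while making the surviving images more uniform in zoom, which is the property the FID comparison rewards.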


Author(s):  
Xiangbin Liu ◽  
Jiesheng He ◽  
Liping Song ◽  
Shuai Liu ◽  
Gautam Srivastava

With the rapid development of Artificial Intelligence (AI), deep learning has increasingly become a research hotspot in various fields, such as medical image classification. Traditional deep learning models use bilinear interpolation when processing classification tasks on multi-size medical image datasets, which causes loss of image information and degrades classification performance. In response to this problem, this work proposes an adaptive-size deep learning model. First, according to the characteristics of the multi-size medical image dataset, an optimal size set module is proposed in combination with the unpooling process. Next, an adaptive deep learning model module is proposed based on an existing deep learning model. Then, the model is fused with a size fine-tuning module for processing multi-size medical images, yielding the adaptive-size deep learning model. Finally, the proposed model is applied to a pneumonia CT medical image dataset. Experiments show that the model is robust and improves classification performance by about 4% compared with traditional algorithms.
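To make the information loss from bilinear interpolation concrete, here is a minimal NumPy round-trip sketch (our own illustration, not the paper's code): downsampling an image to a fixed model input size and upsampling it back does not recover the original, which is the loss the adaptive-size model is designed to avoid.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Minimal bilinear interpolation (align-corners-style sampling)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]        # vertical interpolation weights
    wx = (xs - x0)[None, :]        # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(1)
img = rng.random((64, 64))

# Downsample to a fixed model input size, then upsample back:
small = bilinear_resize(img, 32, 32)
restored = bilinear_resize(small, 64, 64)

# The round trip is lossy -- fine detail cannot be recovered.
loss = np.abs(restored - img).mean()
print(loss > 0)  # True
```

Note that resizing to the image's own size is exact (the sample grid hits the original pixels), so the loss comes entirely from the intermediate fixed-size step.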

