Using vision navigation and convolutional neural networks to provide absolute position aiding for ground vehicles

Author(s): Jonathan Ryan, Paul Muench
2020 · Vol 17 (9) · pp. 4364-4367
Author(s): Shreya Srinarasi, Seema Jahagirdar, Charan Renganathan, H. Mallika

The preliminary step in the navigation of unmanned vehicles is to detect and identify the horizon line. One method to locate the horizon and obstacles in an image is a supervised semantic-segmentation algorithm based on neural networks. Unmanned Aerial Vehicles (UAVs) are rapidly gaining prominence in military, commercial, and civilian applications, and their safe navigation requires accurate and efficient obstacle detection and avoidance. The position of the horizon and of obstacles can also be used to adjust flight parameters and estimate altitude. In the navigation of Unmanned Ground Vehicles (UGVs), the part of the image above the horizon can be neglected to reduce processing time. Locating the horizon and identifying obstacles in an image therefore helps minimize collisions and the high costs incurred when UAVs and UGVs fail. To achieve a robust and accurate system for aiding the navigation of autonomous vehicles, the efficiency and accuracy of Convolutional Neural Networks (CNNs) and Recurrent CNNs (RCNNs) are analysed. Experiments show that the RCNN model classifies test images with higher accuracy.
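The UGV use case above — discarding the part of the image above the horizon — can be sketched in a few lines once a segmentation network has produced a sky/ground mask. The following is a minimal NumPy sketch, not the authors' method; the binary-mask convention (1 = sky, 0 = ground) and the per-row ground-fraction threshold are illustrative assumptions.

```python
import numpy as np

def horizon_row(sky_mask, ground_fraction=0.5):
    """Return the first image row where ground pixels dominate.

    sky_mask: (H, W) binary array from a segmentation network,
    with 1 = sky and 0 = ground/obstacle (hypothetical convention).
    """
    # Fraction of ground pixels in each row of the mask.
    ground_per_row = 1.0 - sky_mask.mean(axis=1)
    rows = np.nonzero(ground_per_row >= ground_fraction)[0]
    # If no row is ground-dominated, the horizon is below the frame.
    return int(rows[0]) if rows.size else sky_mask.shape[0]

def crop_below_horizon(image, sky_mask):
    """Discard the region above the horizon to reduce processing time."""
    return image[horizon_row(sky_mask):]

# Toy example: top 40 rows are sky, bottom 60 rows are ground.
mask = np.zeros((100, 160))
mask[:40] = 1
img = np.random.rand(100, 160, 3)
cropped = crop_below_horizon(img, mask)
# cropped.shape == (60, 160, 3)
```

Only the cropped region would then be passed to downstream obstacle detection, which is where the processing-time saving comes from.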


2020 · Vol 2020 (10) · pp. 28-1-28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because images are often degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image-restoration network with a classification network to classify degraded images. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network therefore has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the input images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
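The two-input idea — feeding the network both the degraded image and its degradation parameter — can be illustrated with a tiny NumPy forward pass. This is a hypothetical sketch of such a fusion head, not the paper's architecture: the branch sizes, the ReLU embeddings, and the late concatenation are all illustrative assumptions.

```python
import numpy as np

def classify(image_feat, degradation_param, w_img, w_deg, w_out):
    """Hypothetical two-input classifier head.

    image_feat: feature vector extracted from the degraded image.
    degradation_param: scalar degradation level (e.g. noise sigma
    or JPEG quality), embedded through its own small branch.
    """
    # Image branch: linear projection followed by ReLU.
    h_img = np.maximum(image_feat @ w_img, 0)
    # Degradation-parameter branch: embed the scalar the same way.
    h_deg = np.maximum(np.array([degradation_param]) @ w_deg, 0)
    # Fuse the two branches by concatenation, then classify.
    logits = np.concatenate([h_img, h_deg]) @ w_out
    return int(np.argmax(logits))

# Toy usage with random weights: 128-d image features, 10 classes.
rng = np.random.default_rng(0)
w_img = rng.normal(size=(128, 32))   # image branch: 128 -> 32
w_deg = rng.normal(size=(1, 8))      # parameter branch: 1 -> 8
w_out = rng.normal(size=(40, 10))    # fused head: 32 + 8 -> 10
feat = rng.normal(size=128)
label = classify(feat, 0.5, w_img, w_deg, w_out)
```

When the degradation parameter is unknown at test time, the paper's estimation network would supply the `degradation_param` input instead of a ground-truth value.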


Author(s): Edgar Medina, Roberto Campos, Jose Gabriel R. C. Gomes, Mariane R. Petraglia, Antonio Petraglia
