Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6226
Author(s):  
Shanggang Lin ◽  
Lianwen Jin ◽  
Ziwei Chen

Landing an unmanned aerial vehicle (UAV) autonomously and safely is a challenging task. Although existing approaches have resolved the problem of precise landing by identifying a specific landing marker with the UAV’s onboard vision system, the vast majority of these works were conducted in daytime or well-illuminated laboratory environments. In contrast, very few researchers have investigated landing in low-illumination conditions, and those who have employed various active light sources to illuminate the markers. In this paper, a novel vision system design is proposed to tackle UAV landing in outdoor extreme low-illumination environments without applying an active light source to the marker. We use a model-based enhancement scheme to improve the quality and brightness of the onboard captured images, then present a hierarchical method consisting of a decision tree with an associated lightweight convolutional neural network (CNN) for coarse-to-fine landing marker localization, where the key information of the marker is extracted and retained for post-processing such as pose estimation and landing control. Extensive evaluations demonstrate the robustness, accuracy, and real-time performance of the proposed vision system. Field experiments across a variety of outdoor nighttime scenarios, with an average luminance of 5 lx at the marker locations, have proven the feasibility and practicality of the system.
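
As a rough illustration of the enhancement step, the sketch below brightens a dark onboard frame with gamma correction followed by CLAHE. This is only a stand-in assuming a generic OpenCV pipeline; the paper's actual model-based scheme, and the gamma and tile-size values here, are not taken from the abstract.

```python
import cv2
import numpy as np

def enhance_low_light(img_bgr, gamma=2.2):
    """Brighten a dark onboard frame before marker detection.

    Minimal stand-in for a model-based low-light enhancement scheme:
    gamma correction lifts dark pixels, CLAHE restores local contrast.
    """
    # Gamma lookup table: out = 255 * (in / 255) ** (1 / gamma)
    table = (255 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    bright = cv2.LUT(img_bgr, table)

    # Equalize local contrast on the luminance channel only
    lab = cv2.cvtColor(bright, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```

The enhanced frame would then feed the coarse-to-fine localizer (decision tree plus lightweight CNN) described above.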

2018 ◽  
Vol 14 (9) ◽  
pp. 155014771880065 ◽  
Author(s):  
Haiwen Yuan ◽  
Changshi Xiao ◽  
Supu Xiu ◽  
Wenqiang Zhan ◽  
Zhenyi Ye ◽  
...  

The vision-based localization of rotor unmanned aerial vehicles (UAVs) for autonomous landing is challenging because of the limited detection range. In this article, to extend the vision detection and measurement range, a hierarchical vision-based localization method is proposed for UAV autonomous landing. In this hierarchical framework, the landing is divided into three phases: “Approaching,” “Adjustment,” and “Touchdown,” in which visual artificial features at different scales are detected from the designed object pattern for UAV pose recovery. The corresponding feature detection and pose estimation algorithms are also presented. Finally, typical simulation and field experiments illustrate the proposed method. The results show that the hierarchical vision-based localization enables consecutive UAV localization over a wider working range, from far to near, which is significant for autonomous landing.
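
A minimal sketch of such a phase-switched pipeline, assuming altitude thresholds for phase switching (the article's actual criteria depend on which feature scales are visible and are not reproduced here) and standard OpenCV PnP pose recovery:

```python
import cv2

# Hypothetical switching thresholds in metres, for illustration only.
def landing_phase(altitude_m):
    if altitude_m > 10.0:
        return "Approaching"   # detect the whole coarse pattern
    if altitude_m > 1.5:
        return "Adjustment"    # detect mid-scale features
    return "Touchdown"         # track the finest inner features

def pose_from_features(obj_pts, img_pts, K, dist):
    """Recover the camera pose from 2D-3D feature correspondences."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # Camera position and orientation expressed in the pattern frame
    return -R.T @ tvec, R.T
```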


2006 ◽  
Vol 3 (3) ◽  
pp. 171-177
Author(s):  
Z. Yuan ◽  
Z. Gong ◽  
J. Chen ◽  
J. Wu

This article introduces a real-time vision-based method for the guided autonomous landing of a rotorcraft unmanned aerial vehicle. In designing the pattern of the landing target, we paid particular attention to simplifying identification and calibration. A linear algorithm is also applied for real-time three-dimensional structure estimation. In addition, multiple-view vision technology is used to calibrate the camera's intrinsic parameters online, so calibration prior to flight is unnecessary and the camera's focus can be changed freely in flight, which improves the flexibility and practicality of the method.
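
For illustration, the sketch below performs online intrinsic calibration from multiple in-flight views using OpenCV's standard routine. It assumes a checkerboard target, whereas the article calibrates from its own landing-pattern features; the board dimensions and square size are placeholders.

```python
import cv2
import numpy as np

def calibrate_online(frames, board=(7, 5), square_m=0.03):
    """Estimate camera intrinsics from several views gathered in flight."""
    # 3D grid of board corners in the board's own plane (z = 0)
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m

    obj_pts, img_pts, size = [], [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Recover the camera matrix K and distortion coefficients
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```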


2021 ◽  
Vol 11 (18) ◽  
pp. 8555
Author(s):  
Donghee Lee ◽  
Wooryong Park ◽  
Woochul Nam

Autonomous unmanned aerial vehicle (UAV) landing can be useful in multiple applications. Precise landing is a difficult task because of the significant navigation errors of the global positioning system (GPS). To overcome these errors and realize precise landing control, various sensors have been installed on UAVs. However, this approach can be challenging for micro UAVs (MAVs), because strong thrust is required to carry multiple sensors. In this study, a new autonomous MAV landing system is proposed, in which a landing platform actively assists vehicle landing. In addition to the vision system of the UAV, a camera was installed on the platform to precisely control the MAV near the landing area. The platform was also fitted with various equipment to assist the MAV in searching, approaching, alignment, and landing. Furthermore, a novel algorithm was developed for robust spherical object detection under different illumination conditions. To validate the proposed landing system and detection algorithm, 80 flight experiments were conducted using a DJI TELLO drone, which successfully landed on the platform in every trial with a small average landing position error of 2.7 cm.
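
The detection algorithm itself is not given in the abstract. As a hedged stand-in, the sketch below finds a spherical marker as an image circle, using local contrast equalization so the Hough transform is less sensitive to illumination changes; every threshold here is an assumption.

```python
import cv2

def detect_sphere(frame_bgr):
    """Locate a spherical marker as a circle under varying lighting."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Local contrast equalization tolerates uneven illumination
    gray = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(gray)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=100, param2=30,
                               minRadius=5, maxRadius=120)
    if circles is None:
        return None
    x, y, r = circles[0][0]    # strongest candidate
    return int(x), int(y), int(r)
```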


Author(s):  
Giuseppe Placidi ◽  
Danilo Avola ◽  
Luigi Cinque ◽  
Matteo Polsinelli ◽  
Eleni Theodoridou ◽  
...  

Virtual Glove (VG) is a low-cost computer vision system that uses two orthogonal LEAP motion sensors to provide detailed 4D hand tracking in real time. VG has many potential applications in human-system interaction, such as remote control of machines or tele-rehabilitation. An innovative and efficient data-integration strategy for VG, based on velocity calculation, is proposed for selecting data from one of the LEAPs at each time instant. The position of each joint of the hand model, when occluded from a LEAP, is estimated and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available at each moment for each joint: the method selects the one with the lower velocity at each time instant. Choosing the smoother trajectory stabilizes VG, optimizes precision, and reduces the effect of occlusions (parts of the hand, or handled objects, obscuring other hand parts) and, when both sensors see the same joint, the number of outliers produced by hardware instabilities. The strategy is evaluated experimentally, in terms of outlier reduction with respect to the data selection strategy previously used in VG, and the results are reported and discussed. In the future, an objective test set will have to be designed and realized, with the help of external precision positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and, possibly, of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data integration strategies, specific to VG, will be designed and tested on the resulting dataset.
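
The selection rule described above is simple enough to sketch. Below is a minimal per-joint implementation in Python/NumPy; the (n_joints, 3) array layout and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def select_joints(prev_a, curr_a, prev_b, curr_b, dt):
    """Per-joint selection between two LEAP streams by lower speed.

    curr_a / curr_b hold (n_joints, 3) joint positions from the two
    sensors at the current instant; prev_* hold the previous instant.
    The smoother stream (lower instantaneous speed) is kept for each
    joint, damping the flicker of joints a sensor cannot actually see.
    """
    speed_a = np.linalg.norm(curr_a - prev_a, axis=1) / dt
    speed_b = np.linalg.norm(curr_b - prev_b, axis=1) / dt
    pick_a = speed_a <= speed_b                 # one boolean per joint
    return np.where(pick_a[:, None], curr_a, curr_b)
```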


2005 ◽  
Vol 56 (8-9) ◽  
pp. 831-842 ◽  
Author(s):  
Monica Carfagni ◽  
Rocco Furferi ◽  
Lapo Governi

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often serviced only weekly, resulting in low temporal resolution of the monitoring data, which hampers ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes the detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths spanning eight classes, achieving a high validation F1-score of 0.93. The algorithm achieved an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
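
To make the tracking-and-counting idea concrete, the sketch below greedily links detections across consecutive frames by centroid distance. It is a simplified stand-in for the MCC tracking step, and the pixel threshold is an assumption.

```python
import numpy as np

def match_tracks(prev_centroids, new_centroids, max_dist=50.0):
    """Greedily link new detections to existing tracks by centroid.

    Detections within max_dist pixels of an unclaimed previous track
    continue that track; unmatched detections start new tracks, i.e.
    newly counted insects.
    """
    links, used = {}, set()
    for j, c in enumerate(new_centroids):
        if len(prev_centroids) == 0:
            break
        d = np.linalg.norm(np.asarray(prev_centroids) - np.asarray(c), axis=1)
        i = int(np.argmin(d))
        if d[i] < max_dist and i not in used:
            links[j] = i                        # continue track i
            used.add(i)
    new_ids = [j for j in range(len(new_centroids)) if j not in links]
    return links, new_ids                       # new_ids increment the count
```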

