Vision Sensor Fusion for Autonomous Landing

Author(s): Takuma Nakamura, Stephen T. Haviland, Dmitry Bershadsky, Eric N. Johnson
Sensors, 2019, Vol 19 (7), pp. 1584
Author(s): Yushan Li, Wenbo Zhang, Xuewu Ji, Chuanxiang Ren, Jian Wu

The lane curvature output by the vision sensor can jump for short periods because of shadows, changes in lighting, and broken lane markings, which causes serious problems for autonomous driving control. It is therefore particularly important to predict or compensate the true lane in real time while the sensor output is jumping. This paper presents a lane compensation method based on multi-sensor fusion of a global positioning system (GPS), an inertial measurement unit (IMU), and vision sensors. A cubic polynomial in the longitudinal distance is selected as the lane model. In this method, a Kalman filter estimates vehicle velocity and yaw angle from the GPS and IMU measurements, and a vehicle kinematics model describes the vehicle motion. The method uses the geometric relationship between the vehicle and the relative lane motion at the current moment to solve for the coefficients of the lane polynomial at the next moment. Simulation and vehicle test results show that the predicted lane can compensate for vision-sensor failures with good real-time performance, robustness, and accuracy.
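The geometric step described in the abstract can be sketched as follows: sample the current cubic lane model, transform the samples into the vehicle frame at the next time step using the (Kalman-estimated) speed and yaw rate, and refit the cubic to obtain the predicted coefficients. The function name, sampling range, and the simple planar kinematics are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predict_lane(coeffs, v, yaw_rate, dt, x_range=(0.0, 50.0), n=50):
    """Propagate cubic lane coefficients [c0, c1, c2, c3]
    (y = c0 + c1*x + c2*x^2 + c3*x^3, vehicle frame) through one
    time step of vehicle motion. v: speed [m/s], yaw_rate: [rad/s],
    dt: [s]. Hypothetical sketch of the geometric compensation idea."""
    c0, c1, c2, c3 = coeffs
    # Sample the current lane in the current vehicle frame.
    x = np.linspace(*x_range, n)
    y = c0 + c1 * x + c2 * x**2 + c3 * x**3
    # Vehicle displacement over dt (small-angle planar kinematics).
    dpsi = yaw_rate * dt
    dx = v * dt * np.cos(dpsi / 2.0)   # forward travel
    dy = v * dt * np.sin(dpsi / 2.0)   # lateral travel
    # Express the sampled lane points in the *next* vehicle frame:
    # translate by (-dx, -dy), then rotate by -dpsi.
    xs, ys = x - dx, y - dy
    xn = np.cos(dpsi) * xs + np.sin(dpsi) * ys
    yn = -np.sin(dpsi) * xs + np.cos(dpsi) * ys
    # Refit a cubic to recover the predicted coefficients.
    return np.polyfit(xn, yn, 3)[::-1]  # polyfit returns highest power first
```

For a straight lane at constant lateral offset and zero yaw rate, the coefficients are unchanged after the update, as expected.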


2010, Vol 16 (7), pp. 639-645
Author(s): Seung-Han Yang, Bong-Sob Song, Jae-Young Um

Author(s): William Rieken, Yoshihiro Yasumuro, Masataka Imura, Yoshitsugu Manabe, Kunihiro Chihara

2019, Vol 91 (2), pp. 241-248
Author(s): Michal Grzes, Maciej Slowik, Zdzisław Gosiewski

Purpose With the rapid development of unmanned vehicles, new opportunities for their use are emerging; among the most dynamic are package delivery, rescue and military applications, autonomous flight and unattended transportation. However, most UAV solutions are limited by their power supplies and field of operation. Some of these restrictions can be overcome through cooperation between unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs). The purpose of this paper is to explore sensor fusion for autonomous landing of a UAV on a UGV by comparing the performance of precision-landing algorithms that use different sensor fusions to obtain precise and reliable position and velocity information. Design/methodology/approach The difficulties in this scenario include, among others, differing coordinate systems and the need to combine sensor data from both air and ground. The most accessible solution seems to be widely available Global Navigation Satellite System (GNSS) receivers; unfortunately, the position measurements obtained from cheap receivers carry errors too large for precision landing. Other approaches fuse an inertial navigation system with image processing, but most such systems are very vulnerable to lighting conditions. Findings In this paper, methods based on the exchange of telemetry data and on sensor fusion of GNSS, infrared-marker detection and other sources are used, and the different methods are compared. Originality/value Sensor fusion and high-precision measurement for cooperating autonomous vehicles are increasingly important given the growing popularity of these vehicles. The proposed solution efficiently performs autonomous landing of a UAV on a UGV.
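One simple way to combine a GNSS-derived relative position with an infrared-marker measurement, in the spirit of the fusion the abstract describes, is a variance-weighted average that falls back to GNSS when the marker is out of view. The function name, the scalar-variance noise model, and the fallback policy are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def fuse_relative_position(p_gnss, var_gnss, p_ir=None, var_ir=None):
    """Variance-weighted fusion of the UAV-UGV relative position from
    GNSS differencing and (when the marker is visible) infrared-marker
    detection. Positions are 2-vectors [m]; variances are scalars [m^2].
    Hypothetical sketch, not the paper's method."""
    p_gnss = np.asarray(p_gnss, dtype=float)
    if p_ir is None:
        # Marker lost (e.g. out of the camera's field of view):
        # fall back to the GNSS estimate alone.
        return p_gnss, var_gnss
    p_ir = np.asarray(p_ir, dtype=float)
    # Weight each source inversely to its variance.
    w = var_ir / (var_gnss + var_ir)
    p = w * p_gnss + (1.0 - w) * p_ir
    # Fused variance of two independent estimates.
    var = (var_gnss * var_ir) / (var_gnss + var_ir)
    return p, var
```

With a noisy GNSS estimate (variance 4 m²) and a much tighter IR-marker fix (variance 1 m²), the fused position is pulled strongly toward the marker measurement and its variance drops below that of either source alone.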


2014, Vol 10 (6), pp. 864768
Author(s): Harinadha Reddy Chintalapalli, Shashidhar Patil, Sanghun Nam, Sungsoo Park, Young Ho Chai