Obtaining World Coordinate Information of UAV in GNSS Denied Environments

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2241 ◽  
Author(s):  
Chengbin Chen ◽  
YaoYuan Tian ◽  
Liang Lin ◽  
SiFan Chen ◽  
HanWen Li ◽  
...  

GNSS information is vulnerable to external interference and fails when unmanned aerial vehicles (UAVs) fly fully autonomously in complex environments such as high-rise parks and dense forests. This paper presents a pan-tilt-based visual servoing (PBVS) method for obtaining world coordinate information. The system is equipped with an inertial measurement unit (IMU), an air pressure sensor, a magnetometer, and a pan-tilt-zoom (PTZ) camera. We explain the physical model and the application method of the PBVS system, which can be briefly summarized as follows. We track the operation target with a camera-carrying UAV and output the UAV's position and the angle between the PTZ and the anchor point. From these, together with the absolute altitude collected by the height sensing unit and the absolute geographic coordinates and altitude of the tracked target, we obtain the UAV's current absolute position. We set up a real UAV experimental environment; to meet the computational requirements, some sensor data are sent to the cloud through the network. Field tests show that the systematic deviation of the overall solution is less than the error of ordinary GNSS sensor equipment, so the system can provide navigation coordinate information for the UAV in complex environments. Compared with traditional visual navigation systems, our scheme obtains absolute, continuous, accurate, and efficient navigation information at short range (within 15 m of the target). The system suits scenarios that require autonomous cruising, such as self-powered UAV inspections and park patrols.
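The geometric recovery step described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact model: it assumes the pan angle is an azimuth measured clockwise from north toward the tracked target, the tilt angle is a depression angle below the horizontal, and the target's coordinates are known; all names are hypothetical.

```python
import math

def uav_position_from_target(target_en, target_alt, uav_alt, pan_deg, tilt_deg):
    """Hypothetical geometric sketch: recover the UAV's local East/North
    position from a tracked target with known coordinates.

    target_en  : (east, north) of the target in a local metric frame [m]
    target_alt : absolute altitude of the target [m]
    uav_alt    : absolute altitude of the UAV from the height sensing unit [m]
    pan_deg    : azimuth from the UAV toward the target, clockwise from north [deg]
    tilt_deg   : depression angle below the horizontal [deg]
    """
    dz = uav_alt - target_alt                  # height of the UAV above the target
    tilt = math.radians(tilt_deg)
    horiz = dz / math.tan(tilt)                # horizontal ground range to the target
    pan = math.radians(pan_deg)
    # The target lies along the pan bearing, so the UAV sits on the opposite side.
    east = target_en[0] - horiz * math.sin(pan)
    north = target_en[1] - horiz * math.cos(pan)
    return east, north
```

With the target's geographic coordinates converted to a metric frame, the returned offsets can be mapped back to absolute world coordinates.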


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1079 ◽  
Author(s):  
Di Liu ◽  
Hengjun Wang ◽  
Qingyuan Xia ◽  
Changhui Jiang

GNSS (global navigation satellite system) and SINS (strap-down inertial navigation system) integrated navigation systems are the standard apparatus for providing reliable and stable position and velocity (PV) information. Commonly, there are two ways to improve GNSS/SINS integration accuracy: employing a GNSS receiver with higher position accuracy, or utilizing a higher-grade inertial measurement unit (IMU). However, technologies such as RTK (real-time kinematic) and PPP (precise point positioning) that improve GNSS positioning accuracy are costly and cannot work in highly dynamic environments, while a high-accuracy IMU entails higher cost and larger volume. A low-cost method to enhance GNSS/SINS integration accuracy is therefore of great significance. In this paper, a GNSS/SINS integrated navigation system based on multiple receivers is proposed with the aim of providing more precise PV information. Since chip-scale receivers are cheap, deploying multiple receivers in the GNSS/SINS integration does not significantly increase the cost. Two filtering methods, with centralized and cascaded structures, are employed to process the multi-receiver/SINS integration. In the centralized integration filter, measurements from all receivers are processed directly to estimate the SINS error state vector; however, the computational load grows heavily with the rising dimension of the measurement vector. Therefore, a cascaded integration filter structure is also employed to distribute the processing. In the cascaded method, each receiver is regarded as an individual “sensor”, and a standard federated Kalman filter (FKF) is implemented to obtain an optimal estimate of the navigation solution.
A simulation and field tests are carried out to assess the influence of the number of receivers on PV accuracy. A detailed analysis of the position and velocity results is presented, and the improvements in PV accuracy demonstrate the effectiveness of the proposed method.
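The master-filter step of the cascaded (federated) structure can be sketched as an information-weighted combination of the local receiver/SINS estimates. This is a minimal sketch, assuming uncorrelated local estimates and ignoring the FKF information-sharing factors:

```python
import numpy as np

def federated_fusion(states, covs):
    """Master-filter fusion step of a federated Kalman filter (sketch).
    Each local receiver/SINS filter i supplies its estimate x_i and
    covariance P_i; the master combines them by information-weighted
    averaging (uncorrelated local estimates assumed, for illustration)."""
    infos = [np.linalg.inv(P) for P in covs]     # information matrices P_i^-1
    P_fused = np.linalg.inv(sum(infos))          # fused covariance
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, states))
    return x_fused, P_fused
```

The centralized alternative would instead stack all receiver measurements into one vector, at the cost of the higher-dimensional update the abstract mentions.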


2018 ◽  
Vol 9 (1) ◽  
pp. 56 ◽  
Author(s):  
Chunlin Song ◽  
Xiaogang Wang ◽  
Naigang Cui

Visual-inertial odometry is an effective system for mobile robot navigation. This article presents an egomotion estimation method for a dual-sensor system consisting of a camera and an inertial measurement unit (IMU), based on a cubature information filter and an H∞ filter. Image intensity was used directly as the measurement. The measurements from the two sensors were fused with a hybrid information filter in a tightly coupled way. For numerical stability, the hybrid filter used the third-degree spherical-radial cubature rule in the time-update phase and the fifth-degree spherical simplex-radial cubature rule in the measurement-update phase. The robust H∞ filter was combined into the measurement-update phase of the cubature information filter framework for robustness to non-Gaussian noise in the intensity measurements. The algorithm was evaluated on a common public dataset and compared to other visual navigation systems in terms of absolute and relative accuracy.
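The third-degree spherical-radial rule used in the time-update phase has a simple closed form: 2n equally weighted points placed at ±√n along the columns of a square root of the covariance. A sketch of the point generation (not tied to this paper's implementation):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: generate the 2n
    equally weighted cubature points for a Gaussian with the given
    mean and covariance."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                  # matrix square root of P
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # +/- sqrt(n) e_i
    pts = mean[:, None] + S @ unit               # columns are the 2n points
    weights = np.full(2 * n, 1.0 / (2 * n))      # equal weights 1/(2n)
    return pts, weights
```

Propagating these points through the process model and re-forming mean and covariance gives the time update; the weighted points reproduce the input mean and covariance exactly.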


2012 ◽  
Vol 245 ◽  
pp. 323-329 ◽  
Author(s):  
Muhammad Ushaq ◽  
Jian Cheng Fang

Inertial navigation systems exhibit position errors that grow with time in an unbounded manner. This degradation is due, in part, to errors in the initialization of the inertial measurement unit and to inertial sensor imperfections such as accelerometer biases and gyroscope drifts. The growth can be mitigated and the errors bounded by periodically updating the inertial navigation system with external position (and/or velocity, attitude) fixes. The synergistic effect is obtained by updating the inertial navigation system with external measurements using a Kalman filter algorithm, and it is a natural requirement that the inertial data and the data from external aids be combined in an optimal and efficient manner. In this paper, an efficient method for integrating a Strapdown Inertial Navigation System (SINS), the Global Positioning System (GPS), and Doppler radar is presented using a centralized linear Kalman filter that treats vector measurements with uncorrelated errors as scalars. This improved scheme offers two main advantages. The first is reduced computation time, as the number of arithmetic operations required to process a vector as successive scalar measurements is significantly less than the corresponding number for vector measurement processing. The second is improved numerical accuracy: avoiding matrix inversion in the implementation of the covariance equations improves the robustness of the covariance computations against round-off errors.
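The scalar-sequential update at the heart of this scheme can be sketched directly. Assuming a diagonal measurement covariance (uncorrelated errors, as the paper requires), each scalar row needs only a scalar division where the batch update needs a matrix inversion:

```python
import numpy as np

def sequential_scalar_update(x, P, z, H, R_diag):
    """Kalman measurement update processing a vector measurement as a
    sequence of scalars. Valid when the measurement errors are
    uncorrelated (diagonal R); avoids the batch update's matrix inversion."""
    x, P = x.copy(), P.copy()
    for j in range(len(z)):
        h = H[j]                          # j-th measurement row (1 x n)
        s = h @ P @ h + R_diag[j]         # scalar innovation variance
        K = P @ h / s                     # gain via scalar division only
        x = x + K * (z[j] - h @ x)        # state update
        P = P - np.outer(K, h @ P)        # covariance update, (I - K h) P
    return x, P
```

For uncorrelated errors this yields exactly the same posterior as the batch vector update, one scalar measurement at a time.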


2021 ◽  
Vol 29 (3) ◽  
pp. 52-68
Author(s):  
N.B. Vavilova ◽  
◽  
A.A. Golovan ◽  
A.V. Kozlov ◽  
I.A. Papusha ◽  
...  

We examine two aspects specific to complex data fusion algorithms in integrated strapdown inertial navigation systems aided by global positioning systems, with their inherent spatial separation between the GNSS antenna phase center and the inertial measurement unit, as well as with the timing skew between their measurements. The first aspect refers to modifications of mathematical models used in INS/GNSS integration. The second one relates to our experience in their application in onboard airborne navigation algorithms developed by Moscow Institute of Electromechanics and Automatics.
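The first aspect, the spatial separation and timing skew between the antenna phase center and the IMU, reduces to a standard lever-arm and time-skew correction of the position measurement model. A minimal sketch with illustrative names (not the institute's onboard algorithm):

```python
import numpy as np

def lever_arm_correction(pos_imu_n, C_bn, lever_b, vel_n=None, dt=0.0):
    """Translate the INS-indicated position to the GNSS antenna phase
    center via the body-frame lever arm, and optionally compensate a
    known timing skew dt by first-order extrapolation with the velocity.

    pos_imu_n : IMU position in the navigation frame
    C_bn      : body-to-navigation direction cosine matrix
    lever_b   : antenna lever arm in the body frame
    """
    pos_antenna = pos_imu_n + C_bn @ lever_b     # spatial separation
    if vel_n is not None:
        pos_antenna = pos_antenna + vel_n * dt   # timing-skew compensation
    return pos_antenna
```

The corrected position is what should be differenced against the GNSS fix when forming the integration filter's measurement.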


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3270 ◽  
Author(s):  
Hao Cai ◽  
Zhaozheng Hu ◽  
Gang Huang ◽  
Dunyao Zhu ◽  
Xiaocong Su

Self-localization is a crucial task for intelligent vehicles. Existing localization methods usually require a high-cost IMU (inertial measurement unit) or expensive LiDAR sensors (e.g., Velodyne HDL-64E). In this paper, we propose a low-cost yet accurate localization solution that uses a consumer-level GPS receiver and a low-cost camera with the support of an HD map. Unlike existing HD-map-based methods, which usually require unique landmarks within the sensed range, the proposed method utilizes common lane lines for vehicle localization, using a Kalman filter to fuse the GPS, monocular vision, and HD map for more accurate positioning. In the Kalman filter framework, the observations consist of two parts: the raw GPS coordinate, and the lateral distance between the vehicle and the lane, computed from the monocular camera. The HD map provides reference position information and correlates the local lateral distance from vision with the GPS coordinates, so that a linear Kalman filter can be formulated. In the prediction step, we propose a data-driven motion model rather than a kinematic model, which is more adaptive and flexible. The proposed method has been tested with both simulation data and real data collected in the field. The results demonstrate that the localization errors of the proposed method are less than half, or even one-third, of the original GPS positioning errors, using low-cost sensors with HD map support. Experimental results also demonstrate that integrating the proposed method into existing ones can greatly enhance localization results.
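The two-part observation model can be sketched as a single linear Kalman update. This is an illustrative reconstruction with a bare 2-D position state and hypothetical names, not the authors' implementation: the lane line is summarized by a reference point and unit normal taken from the HD map, so the lateral distance becomes a linear function of position.

```python
import numpy as np

def fuse_gps_and_lane(p_pred, P_pred, gps_xy, r_gps, d_lat, r_lat,
                      lane_pt, lane_normal):
    """One linear Kalman update fusing (a) a raw GPS fix and (b) the
    vision-measured lateral distance to a lane line described in the
    HD map by a reference point and unit normal. State: 2-D position."""
    # GPS observes p directly; the lane observes n . (p - lane_pt).
    H = np.vstack([np.eye(2), lane_normal])
    z = np.concatenate([gps_xy, [d_lat + lane_normal @ lane_pt]])
    R = np.diag([r_gps, r_gps, r_lat])
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    p_new = p_pred + K @ (z - H @ p_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return p_new, P_new
```

A precise lateral measurement tightens the cross-lane position while the GPS fix constrains the along-lane direction, which is the mechanism behind the reported error reduction.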


2016 ◽  
Vol 62 ◽  
pp. 24-44 ◽  
Author(s):  
Amir H. Alavi ◽  
Hassene Hasni ◽  
Nizar Lajnef ◽  
Karim Chatti ◽  
Fred Faridazar

2015 ◽  
Vol 2015 ◽  
pp. 1-10
Author(s):  
Vadym Avrutov

A scalar method for fault diagnosis of the inertial measurement unit (IMU), a component of every inertial navigation system, is described. The scalar calibration method forms the basis of this scalar method for quality monitoring and diagnostics, and the fault diagnosis algorithms are developed in accordance with it. Verification of the quality monitoring algorithm is implemented as a working-capacity check of the IMU; a failed element is determined by verification of the diagnostics algorithm, after which the reason for the failure is identified. The verification process consists of comparing the calculated estimates of the sensors' biases, scale factor errors, and misalignment angles with their data sheet values kept in the internal memory of the computer. From this comparison, a conclusion about the working capacity of each IMU sensor can be drawn, and a failed sensor can be identified.
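The data-sheet comparison step can be sketched as a simple threshold check; the parameter names and tolerances here are illustrative, not taken from the paper:

```python
def check_imu_sensor(estimates, datasheet_limits):
    """Working-capacity check (sketch): compare calibration estimates of a
    sensor's bias, scale-factor error, and misalignment angles against its
    data-sheet tolerances; report the first out-of-tolerance parameter."""
    for param, value in estimates.items():
        if abs(value) > datasheet_limits[param]:
            return False, param          # failed parameter identified
    return True, None                    # sensor within tolerances
```

Running the check per sensor localizes a failure to a specific IMU element, which is the diagnostics step the abstract describes.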


Data ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 4 ◽  
Author(s):  
Viacheslav Moskalenko ◽  
Alona Moskalenko ◽  
Artem Korobov ◽  
Viktor Semashko

Trainable visual navigation systems based on deep learning demonstrate potential robustness to onboard camera parameters and challenging environments. However, a deep model requires substantial computational resources and large labelled training sets for successful training, and implementing autonomous navigation with training-based fast adaptation to new environments on a compact drone is a complicated task. The article describes an original model and training algorithms adapted to a limited volume of labelled training data and constrained computational resources. The model consists of a convolutional neural network for visual feature extraction, an extreme learning machine for estimating the position displacement, and a boosted information-extreme classifier for obstacle prediction. Unsupervised training of the convolutional filters with a growing sparse-coding neural gas algorithm is proposed, together with supervised learning algorithms that construct the decision rules, using a simulated annealing search algorithm for fine-tuning. The use of a complex criterion for parameter optimization of the feature extractor model is considered. The resulting approach reconstructs trajectories better than the well-known ORB-SLAM: for sequence 7 of the KITTI dataset, the translation error is reduced by nearly 65.6% at a frame rate of 10 frames per second. Testing on an independent TUM sequence shot outdoors yields a translation error not exceeding 6% and a rotation error not exceeding 3.68 degrees per 100 m. Testing was carried out on a Raspberry Pi 3+ single-board computer.

