High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 899 ◽  
Author(s):  
Veli Ilci ◽  
Charles Toth

Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), radar, and cameras have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Aircraft Systems) swarms. These technologies necessitate the use of accurate object space information about the physical environment around the platform. This information can generally be provided by suitable selection of the sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate or mitigate errors and obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in terms of being able to reconstruct the environment around the platform; however, they differ considerably in operation and objectives. Mobile mapping vehicles use professional-grade sensors, such as geodetic-grade GNSS, tactical-grade IMU, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple, inexpensive sensors, require real-time operation, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems for obtaining high-definition maps, using only Velodyne sensor data to create accurate point clouds; no other sensor data were considered in this investigation. The results confirm that cm-level accuracy can be achieved.
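The core georeferencing step implied above, rotating each LiDAR return by an IMU-derived attitude and translating by the GNSS-derived position, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' pipeline; all function names and conventions are assumptions.

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Build a body-to-world rotation matrix from yaw/pitch/roll (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx   # yaw-pitch-roll composition (one common convention)

def georeference(points_sensor, position_world, yaw, pitch, roll):
    """Transform an Nx3 LiDAR scan from the sensor frame to the world frame."""
    R = rotation_from_ypr(yaw, pitch, roll)
    return points_sensor @ R.T + position_world
```

Accumulating the georeferenced scans over the trajectory yields the point cloud; the accuracy of the result is then bounded by the quality of the pose estimates.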

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6536
Author(s):  
Cheng-Wei Peng ◽  
Chen-Chien Hsu ◽  
Wei-Yen Wang

Survey-grade Lidar brands have commercialized Lidar-based mobile mapping systems (MMSs) for several years now. With this high-end equipment, high accuracy of the point clouds can be ensured, but their high cost has made practical implementation in autonomous driving unaffordable. To address this problem, we present a cost-effective MMS that generates an accurate 3D color point cloud for autonomous vehicles. Among the major processes for color point cloud reconstruction, we first synchronize the timestamps of each sensor. A calibration process between the camera and Lidar is developed to obtain the translation and rotation matrices, based on which color attributes can be assigned to the corresponding Lidar points. We also employ control points to adjust the point cloud, fine-tuning its absolute position. To overcome the limitations of the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) positioning system, we utilize Normal Distribution Transform (NDT) localization to refine the trajectory and solve the multi-scan dispersion issue. Experimental results show that the color point cloud reconstructed by the proposed MMS has centimeter-level position accuracy, meeting the requirement of high-definition (HD) maps for autonomous driving.
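The color-composition step described above, projecting Lidar points through the calibrated rotation and translation and the camera intrinsics to pick up pixel colors, can be sketched roughly as follows. The function name and matrix conventions are assumptions, not the authors' code.

```python
import numpy as np

def colorize_points(points_lidar, image, K, R, t):
    """Project Nx3 Lidar points into the camera image and attach RGB.

    K: 3x3 camera intrinsics; R, t: Lidar-to-camera rotation and translation
    (the outputs of the extrinsic calibration step). Returns an Nx6 array of
    [x, y, z, r, g, b] for points that land inside the image.
    """
    pts_cam = points_lidar @ R.T + t            # Lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0                # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                         # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image[uv[inside, 1], uv[inside, 0]]
    return np.hstack([points_lidar[in_front][inside], colors])
```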


2016 ◽  
Author(s):  
Georg Tanzmeister

This dissertation is focused on the environment model for automated vehicles. A reliable model of the local environment available in real-time is a prerequisite to enable almost any useful activity performed by a robot, such as planning motions to fulfill tasks. It is particularly important in safety critical applications, such as for autonomous vehicles in regular traffic. In this thesis, novel concepts for local mapping, tracking, the detection of principal moving directions, cost evaluations in motion planning, and road course estimation have been developed. An object- and sensor-independent grid representation forms the basis of all presented methods enabling a generic and robust estimation of the environment. All approaches have been evaluated with sensor data from real road scenarios, and their performance has been experimentally demonstrated with a test vehicle. ...
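A common instance of such an object- and sensor-independent grid representation is an occupancy grid updated with log-odds. The following minimal sketch illustrates that general idea, not the dissertation's actual model; the update weights are illustrative.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid, a classic grid-based environment model."""

    def __init__(self, shape, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros(shape)   # 0.0 log-odds == probability 0.5
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, occupied_cells, free_cells):
        """Fold one sensor observation into the grid."""
        for i, j in occupied_cells:
            self.logodds[i, j] += self.l_occ
        for i, j in free_cells:
            self.logodds[i, j] += self.l_free

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 / (1.0 + np.exp(-self.logodds))
```

Because updates are simple additions per cell, the representation is independent of object models and works with any sensor whose returns can be cast onto the grid.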


2020 ◽  
Vol 31 (1) ◽  
pp. 33-41

One of the most widely used techniques for autonomous vehicles is scan matching, in which a high-definition 3D map created from LiDAR sensor data plays a critically important role in localization and path planning. In this manuscript, the effect of the Inertial Measurement Unit (IMU) on creating a high-definition 3D map from the LiDAR sensor was investigated. The data collection system and dataset are the first of their kind created and collected in Vietnam. The results show that the Normal Distributions Transform performs very well for creating the HD 3D map when the IMU sensor is used. Without the IMU, the accuracy and robustness of the created map are reduced, especially in non-flat areas. This manuscript begins the preparation for autonomous vehicles in Vietnam and contributes to the autonomous vehicle research community worldwide.


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2952 ◽  
Author(s):  
Xingxing Guang ◽  
Yanbin Gao ◽  
Henry Leung ◽  
Pan Liu ◽  
Guangchun Li

The strapdown inertial navigation system (SINS) is widely used in autonomous vehicles. However, the random drift error of the gyroscope leads to serious accumulated navigation errors during long continuous operation of the SINS alone. In this paper, we propose combining Inertial Measurement Unit (IMU) data with line feature parameters from a camera to improve navigation accuracy. The proposed method also maintains the autonomy of the navigation system. Experimental results show that the proposed inertial-visual navigation system can mitigate SINS drift and improve the accuracy, stability, and reliability of the navigation system.
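The drift problem motivating this work is easy to demonstrate: a small constant gyroscope bias, integrated by dead reckoning, grows into a large heading error. The sketch below is illustrative only; the bias value and noise model are assumptions, not the paper's sensor characterization.

```python
import numpy as np

def integrate_heading(rates, dt, bias=0.0, noise_std=0.0, rng=None):
    """Dead-reckon heading (rad) from gyro rates; bias and noise model IMU error."""
    rng = rng or np.random.default_rng(0)
    noisy = rates + bias + noise_std * rng.standard_normal(len(rates))
    return np.cumsum(noisy) * dt        # simple rectangular integration

# A stationary vehicle (true rate 0 rad/s) with a 0.1 deg/s constant gyro bias:
dt, n = 0.01, 6000                      # 60 s of data at 100 Hz
heading = integrate_heading(np.zeros(n), dt, bias=np.deg2rad(0.1))
drift_deg = np.rad2deg(heading[-1])     # heading error after one minute
```

The error grows without bound in time, which is why an external aid (here, camera line features) is needed to keep the solution stable.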


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3928 ◽  
Author(s):  
Weisong Wen ◽  
Li-Ta Hsu ◽  
Guohao Zhang

Robust and lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, Light Detection and Ranging (LiDAR) can provide continuous and high-frequency pose estimation by means of mapping, on condition that enough environment features are available. The mapping error can accumulate over time, so LiDAR is usually integrated with other sensors. In diverse urban scenarios, environment feature availability relies heavily on the traffic (moving and static objects) and the degree of urbanization. Common LiDAR-based simultaneous localization and mapping (SLAM) demonstrations tend to be studied in light traffic and less urbanized areas. However, performance can be severely challenged in densely urbanized cities, such as Hong Kong, Tokyo, and New York, with dense traffic and tall buildings. This paper analyzes the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios to evaluate the relationship between the performance of LiDAR-based SLAM and scenario conditions. The normal distribution transform (NDT) is employed to calculate the transformation between frames of point clouds. LiDAR odometry is then performed based on the calculated continuous transformation, and state-of-the-art graph-based optimization is used to integrate the LiDAR odometry measurements. 3D building models are generated, and a definition of the degree of urbanization based on the Skyplot is proposed. Experiments are conducted in scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of LiDAR-based SLAM using NDT is strongly related to the traffic condition and degree of urbanization: the best performance is achieved in the sparse area with normal traffic and the worst in the dense urban area, with 3D positioning error (summation of horizontal and vertical) gradients of 0.024 m/s and 0.189 m/s, respectively. The analyzed results can serve as a comprehensive benchmark for evaluating the performance of standalone NDT-based graph SLAM in diverse scenarios, which is important for multi-sensor fusion in autonomous vehicles.
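The NDT matching at the heart of this pipeline fits a Gaussian to the reference points in each voxel cell and scores a candidate scan against those Gaussians; the transform maximizing the score aligns the frames. A minimal sketch of that scoring (not the paper's implementation; cell size and regularization are illustrative):

```python
import numpy as np

def build_ndt_cells(points, cell_size):
    """Fit a Gaussian (mean, inverse covariance) to the points in each voxel."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= 3:                        # need a few points per cell
            cov = np.cov(pts.T) + 1e-6 * np.eye(points.shape[1])
            cells[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(points, cells, cell_size):
    """Sum of Gaussian likelihoods of the scan points under the NDT cells."""
    score = 0.0
    for p in points:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in cells:
            mu, icov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score
```

In a full NDT registration, this score is maximized over the candidate rigid transform (e.g. by Newton's method); here only the objective is shown.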


Automation ◽  
2020 ◽  
Vol 1 (1) ◽  
pp. 17-32
Author(s):  
Thomas Kent ◽  
Anthony Pipe ◽  
Arthur Richards ◽  
Jim Hutchinson ◽  
Wolfgang Schuster

VENTURER was one of the first three UK government-funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper describes the VENTURER Project trials and their objectives, and details some of the key technologies used. Importantly, we introduce some informative challenges that were overcome, and the subsequent project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a decision-making system using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing, and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while the sensor-to-perception capabilities are critical, the process of verification and validation is extremely time-consuming. Additionally, the added capabilities that can be leveraged from inter-CAV communications show the potential for improved road safety.
Importantly, to effectively conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs to define a scripted and stable set of scenarios that uses reliable equipment and a controllable environmental setting. This requirement can often be at odds with making significant technology developments, and if both are part of a project’s goals then they may need to be separated from each other.
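The behaviour-tree pattern used for high-level decision making can be sketched with two classic composite nodes, a Sequence and a Selector. This is a generic illustration, not VENTURER's actual system; the node names, statuses, and toy scenario are assumptions.

```python
class Condition:
    """Leaf node that checks a predicate."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return "SUCCESS" if self.fn() else "FAILURE"

class Action:
    """Leaf node that performs an action and reports success."""
    def __init__(self, fn): self.fn = fn
    def tick(self): self.fn(); return "SUCCESS"

class Sequence:
    """Composite: run children in order, fail as soon as one fails."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class Selector:
    """Composite: try children in order, succeed as soon as one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status == "SUCCESS":
                return status
        return "FAILURE"

# Toy high-level decision: brake if an obstacle is ahead, otherwise cruise.
state = {"obstacle": True}
log = []
tree = Selector(
    Sequence(Condition(lambda: state["obstacle"]),
             Action(lambda: log.append("brake"))),
    Action(lambda: log.append("cruise")),
)
tree.tick()                 # obstacle present -> brake branch fires
state["obstacle"] = False
tree.tick()                 # no obstacle -> fallback cruise action fires
```

The appeal of the structure is that priorities are explicit in the tree shape, which makes the scripted, repeatable scenarios the trials required easier to audit.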


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine the Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near real-time/real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data, and provides services for vehicle driving, is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated from an AV. The driving vehicle edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform-Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to the coordinate information of the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using coordinate information.
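The frame conversion and integration described above (sensor frame to global frame, then merging across edge nodes) can be sketched with 4x4 homogeneous transforms. A minimal illustration with hypothetical names, not the authors' server code:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_global(points, T_sensor_to_global):
    """Apply a 4x4 transform to an Nx3 point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T_sensor_to_global.T)[:, :3]

def merge_maps(submaps):
    """Merge per-edge-node submaps, given as (points, transform) pairs,
    into one cloud in the global frame -- the cloud server's integration step."""
    return np.vstack([to_global(p, T) for p, T in submaps])
```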


2020 ◽  
Vol 5 (7) ◽  
pp. 55 ◽  
Author(s):  
Luigi Barazzetti ◽  
Mattia Previtali ◽  
Marco Scaioni

Building Information Modeling (BIM) has a crucial role in smart road applications, not only limited to the design and construction stages, but also to traffic monitoring, autonomous vehicle navigation, road condition assessment, and real-time data delivery to drivers, among others. Point clouds collected through LiDAR are a powerful solution to capture as-built conditions, notwithstanding the lack of commercial tools able to automatically reconstruct road geometry in a BIM environment. This paper illustrates a two-step procedure in which roads are automatically detected and classified, providing GIS layers with basic road geometry that are turned into parametric BIM objects. The proposed system is an integrated BIM-GIS with a structure based on multiple proposals, in which a single project file can handle different versions of the model using a variable level of detail. The model is also refined by adding parametric elements for buildings and vegetation. Input data for the integrated BIM-GIS can also be existing cartographic layers or outputs generated with algorithms able to handle LiDAR data. This makes the generation of the BIM-GIS more flexible and not limited to the use of specific algorithms for point cloud processing.


2013 ◽  
Vol 390 ◽  
pp. 506-511
Author(s):  
Rashid Iqbal ◽  
Zhong Jian Li ◽  
Khan Badshah

The inertial measurement unit (IMU) has been widely used for autonomous vehicle navigation. The accuracy of the IMU determines the performance of the inertial navigation system (INS). The errors in the INS are mainly due to IMU inaccuracies, initial alignment, computational errors, and approximations in the system equations. These errors are further integrated over time due to the dead-reckoning nature of the INS, which leads to unacceptable results, so they require accurate estimation for high-precision navigation. The INS is integrated with the Global Positioning System (GPS) to estimate the errors and enhance its navigation capability. A Linearized Kalman Filter (LKF) is proposed for estimating the errors in the low-cost INS using a loosely coupled integration approach, chosen for its simplicity and robustness. The prediction part of the LKF is used for error estimation during GPS outages, which is found to be very effective for low-cost sensors. The resulting GPS-INS integration algorithm is evaluated on a simulated autonomous vehicle trajectory generated from a 6-DOF model. The integrated system limits the attitude errors to less than 0.1 deg and the velocity errors to the order of 0.003 m per second. Furthermore, it provides a better navigation solution than can be achieved by either system alone.
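The loosely coupled prediction/update cycle described above can be illustrated with a deliberately simplified one-dimensional error-state filter: predict the INS position error as a random walk, and correct it whenever a GPS fix arrives. This is a sketch of the general scheme, not the paper's 6-DOF filter; all parameter values are illustrative.

```python
import numpy as np

def loosely_coupled_kf(ins_pos, gps_pos, dt, q=1e-2, r=4.0):
    """1D loosely coupled error-state filter sketch.

    State: the INS position error. gps_pos[k] is None during GPS outages,
    in which case only the prediction step runs (as in the paper's scheme).
    """
    x, p = 0.0, 1.0                    # error estimate and its variance
    corrected = []
    for k in range(len(ins_pos)):
        p += q * dt                    # prediction: error grows as a random walk
        z = gps_pos[k]
        if z is not None:              # update: measurement is INS minus GPS
            innov = (ins_pos[k] - z) - x
            kgain = p / (p + r)
            x += kgain * innov
            p *= (1.0 - kgain)
        corrected.append(ins_pos[k] - x)   # feed the error back into the INS
    return np.array(corrected)
```

Because the filter operates on the error state rather than the full navigation state, it stays nearly linear and remains useful through GPS outages, when the prediction step alone propagates the error estimate.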


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6586
Author(s):  
Andrzej Stateczny ◽  
Marta Wlodarczyk-Sielicka ◽  
Pawel Burdziakowski

Autonomous vehicle navigation has been at the center of several major developments, both in civilian and defense applications [...]

