A Rigorous Observation Model for the Risley Prism-Based Livox Mid-40 Lidar Sensor

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4722
Author(s):  
Ryan G. Brazeal ◽  
Benjamin E. Wilkinson ◽  
Hartwig H. Hochmair

Modern lidar sensors are continuing to decrease in size, weight, and cost, but the demand for fast, abundant, and high-accuracy lidar observations is only increasing. The Livox Mid-40 lidar sensor was designed for use within sense-and-avoid navigation systems for autonomous vehicles, but has also found adoption within aerial mapping systems. In order to characterize the overall quality of the point clouds from the Mid-40 sensor and enable sensor calibration, a rigorous model of the sensor’s raw observations is needed. This paper presents the development of an angular observation model for the Mid-40 sensor, and its application within an extended Kalman filter that uses the sensor’s data to estimate the model’s operating parameters, systematic errors, and the instantaneous prism rotation angles for the Risley prism optical steering mechanism. The analysis suggests that the Mid-40’s angular observations are more accurate than the specifications provided by the manufacturer. Additionally, it is shown that the prism rotation angles can be used within a planar constrained least-squares adjustment to theoretically improve the accuracy of the angular observations of the Mid-40 sensor.
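The angular observation model described above builds on the geometry of a two-prism Risley beam steerer. As a hedged illustration only (the paper develops a rigorous model, not this simplification), the common thin-prism first-order approximation treats each wedge as deflecting the beam by a fixed angle in the direction of its rotation angle, with the two deflections adding vectorially; the deflection magnitudes `delta1` and `delta2` below are illustrative parameters:

```python
import math

def risley_direction(theta1, theta2, delta1, delta2):
    """First-order (thin-prism) Risley model: each prism deflects the beam
    by a fixed angle toward its rotation angle; the two deflections add as
    2D vectors in the plane normal to the optical axis."""
    phi_x = delta1 * math.cos(theta1) + delta2 * math.cos(theta2)
    phi_y = delta1 * math.sin(theta1) + delta2 * math.sin(theta2)
    off_axis = math.hypot(phi_x, phi_y)  # total angular deflection from boresight
    azimuth = math.atan2(phi_y, phi_x)   # direction of the deflection
    return off_axis, azimuth
```

With the prisms aligned (equal rotation angles) the deflections add to their maximum; with the prisms opposed they cancel, which is why two independently rotating prisms can scan the full circular field of view.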

Author(s):  
C. L. Glennie ◽  
P. J. Hartzell

Abstract. A number of low-cost, small form factor, high resolution lidar sensors have recently been commercialized in an effort to fill the growing need for lidar sensors on autonomous vehicles. These lidar sensors often report performance as range precision and angular accuracy, which are insufficient to characterize the overall quality of the point clouds returned by these sensors. Herein, a detailed geometric accuracy analysis of two representative autonomous sensors, the Ouster OS1-64 and the Livox Mid-40, is presented. The scanners were analyzed through a rigorous least squares adjustment of data from the two sensors using planar surface constraints. The analysis attempts to elucidate the overall point cloud accuracy and the presence of systematic errors for the sensors over medium (< 40 m) ranges. The Livox Mid-40 sensor performance appears to conform with the product specifications, with a ranging accuracy of approximately 2 cm. No significant systematic geometric errors were found in the acquired Mid-40 point clouds. The Ouster OS1-64 did not perform to the manufacturer specifications, with a ranging accuracy of 5.6 cm, nearly twice that stated by the manufacturer. Several of the individual lasers within the OS1-64’s bank of 64 lasers exhibited higher range noise than their counterparts, and examination of the residuals indicates a possible systematic error correlated with the horizontal encoder angle. This suggests that the Ouster sensor may benefit from additional geometric calibration. Finally, both sensors suffered from an inability to accurately resolve edges and smaller features such as posts due to their large laser beam divergences.
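A minimal version of the planar constraint at the heart of such an adjustment can be sketched as follows: fit a best-fit plane to a patch of points and examine the signed point-to-plane residuals. This is a sketch of the constraint only, not the paper's full least-squares adjustment with sensor error parameters:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point patch: the plane normal is
    the singular vector of the centered points with the smallest singular
    value (direction of least variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                            # unit normal of the fitted plane
    residuals = (points - centroid) @ normal   # signed point-to-plane distances
    return normal, centroid, residuals
```

The RMS of `residuals` over many planar patches is one way the ranging precision figures quoted above can be estimated, and structure in the residuals (e.g. versus encoder angle) hints at systematic errors.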


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural and engineering structures and high-resolution terrain mapping. The noise of the observations cannot be assumed to be white noise: besides being heteroscedastic, correlations between observations are likely to appear due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled based on physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of both is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the noise of the observations. We estimate the correlation parameters using the Whittle maximum likelihood and use simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
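As a much simpler stand-in for the Whittle maximum-likelihood estimator used above, the lag-1 sample autocorrelation of the surface-fit residuals already reveals whether an AR(1)-style correlation model is warranted. This is an illustrative shortcut, not the paper's spectral-domain estimator:

```python
import numpy as np

def ar1_from_residuals(res):
    """Estimate a first-order autoregressive correlation coefficient from a
    1D sequence of surface-fit residuals via the lag-1 sample autocorrelation
    (a simple proxy for a full Whittle maximum-likelihood fit)."""
    r = res - res.mean()
    return (r[:-1] @ r[1:]) / (r @ r)
```

A value near zero supports the white-noise assumption; a clearly positive value indicates the correlated noise the contribution warns about, in which case precision estimates based on independent observations are too optimistic.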


2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As difficult vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, a convolutional neural network that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network’s ability to adapt, through supervised learning, to new scenes with different types of obstacles.
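The quantity the dynamic data extractor learns to predict can be illustrated with the constant-velocity baseline it effectively refines: distance divided by closing speed over the stacked frames. This is an analytic illustration of the target quantity, not the paper's network:

```python
def time_to_collision(distances, dt):
    """Constant-velocity Time-to-Collision from a short stack of obstacle
    distances sampled every dt seconds: TTC = current distance / closing speed."""
    closing_speed = (distances[0] - distances[-1]) / (dt * (len(distances) - 1))
    if closing_speed <= 0:
        return float("inf")  # obstacle is not approaching
    return distances[-1] / closing_speed
```

A learned predictor improves on this baseline mainly when the closing speed is not constant or the per-frame distance estimates are noisy.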


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3928 ◽  
Author(s):  
Weisong Wen ◽  
Li-Ta Hsu ◽  
Guohao Zhang

Robust, lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, light detection and ranging (LiDAR) can provide continuous and high-frequency pose estimation by means of mapping, on condition that enough environmental features are available. The error of mapping can accumulate over time; therefore, LiDAR is usually integrated with other sensors. In diverse urban scenarios, environmental feature availability relies heavily on traffic (moving and static objects) and on the degree of urbanization. Common LiDAR-based simultaneous localization and mapping (SLAM) demonstrations tend to be studied in light traffic and less urbanized areas. However, performance can be severely challenged in highly urbanized cities, such as Hong Kong, Tokyo, and New York, with dense traffic and tall buildings. This paper proposes to analyze the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios to further evaluate the relationship between the performance of LiDAR-based SLAM and scenario conditions. The normal distribution transform (NDT) is employed to calculate the transformation between frames of point clouds, and LiDAR odometry is performed based on the calculated continuous transformation. State-of-the-art graph-based optimization is used to integrate the LiDAR odometry measurements. 3D building models are generated, and a definition of the degree of urbanization based on the skyplot is proposed. Experiments are conducted in scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of LiDAR-based SLAM using NDT is strongly related to the traffic condition and the degree of urbanization: the best performance is achieved in a sparse area with normal traffic and the worst in a dense urban area, with 3D positioning error (summation of horizontal and vertical) gradients of 0.024 m/s and 0.189 m/s, respectively. The analyzed results can serve as a comprehensive benchmark for evaluating the performance of standalone NDT-based graph SLAM in diverse scenarios, which is significant for multi-sensor fusion in autonomous vehicles.
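The NDT matching score that drives the frame-to-frame transformation can be sketched in a few lines: the reference scan is voxelized, each occupied cell is summarized by a Gaussian, and a candidate alignment of a new scan accumulates likelihood from the cells its points fall into. This is a minimal sketch of the scoring step only; production NDT additionally optimizes the transform over this score:

```python
import numpy as np

def build_ndt(points, cell=1.0):
    """Voxelize a reference scan and fit a Gaussian (mean, inverse covariance)
    to every cell that holds at least 3 points."""
    buckets = {}
    for p in points:
        buckets.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    cells = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-3 * np.eye(3)  # regularize degenerate cells
            cells[key] = (mu, np.linalg.inv(cov))
    return cells

def ndt_score(cells, points, cell=1.0):
    """Sum of per-point Gaussian likelihood terms; higher means better alignment."""
    score = 0.0
    for p in points:
        entry = cells.get(tuple(np.floor(p / cell).astype(int)))
        if entry is not None:
            mu, cov_inv = entry
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score
```

Because the score depends on how well the environment populates the voxel Gaussians, sparse or feature-poor urban canyons degrade it directly, which is consistent with the scenario dependence reported above.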


Author(s):  
Y. Yang ◽  
S. Song ◽  
C. Toth

Abstract. Place recognition, or loop closure, is a technique to recognize landmarks and/or scenes previously visited by a mobile sensing platform in an area. The technique is a key capability for robust Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS)-denied environments, because it enables global optimization to compensate for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be impacted by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN)-based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
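Once a global descriptor such as PointNetVLAD is available, the retrieval step itself reduces to nearest-neighbour search over normalized descriptor vectors. The brute-force search and the acceptance threshold below are illustrative choices; real systems typically use approximate nearest-neighbour indices:

```python
import numpy as np

def recognize_place(query, database, threshold=0.8):
    """Match a query descriptor against a database of place descriptors by
    cosine similarity; return (best index, similarity), or (None, similarity)
    when no place clears the acceptance threshold."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = db @ q                 # cosine similarity to every mapped place
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] >= threshold else (None, sims[best])
```

The classification strategies compared in the work above essentially differ in how this raw similarity ranking is turned into an accept/reject decision.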


Author(s):  
J. Schachtschneider ◽  
C. Brenner

Abstract. The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects which change over time. Since permanent observation of the environment is impossible and there will always be a first visit to an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany with a Mobile Mapping System over a period of one year in bi-weekly measurements. The data set covers a variety of urban objects and areas, weather conditions, and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results.


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1111 ◽  
Author(s):  
Juho Lee ◽  
Sungkwon Park

Recently, large amounts of data traffic from the various sensors, image systems, and navigation systems within vehicles are generated for autonomous driving, and broadband communication networks within vehicles have become necessary. Automotive Ethernet networks are being considered as alternatives. The Ethernet-based in-vehicle network has been standardized in the IEEE 802.1 time-sensitive networking (TSN) group since 2006. The TSN standards are being revised and integrated into IEEE 802.1Q-2018, published in 2018, while various new TSN-related standards continue to be revised and published. A TSN integrated-environment simulator is developed in this paper to implement the main functions of the TSN standards under development. This effort minimizes the performance gaps that can occur when the functions of these standards operate in an integrated environment. To this end, we used the simulator to verify that the traffic for autonomous driving satisfies the TSN transmission requirements in the in-vehicle network (IVN), and that preemption, one of the main TSN functions, reduces the overall end-to-end delay. An optimal guard band size for preemption was also found for autonomous vehicles in our work. Finally, an IVN model for autonomous vehicles was designed, and a performance test was conducted by configuring the traffic to be used by various sensors and electronic control units (ECUs).
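The guard-band trade-off that preemption relieves can be quantified directly: without preemption, the guard band before a time-critical window must cover the worst-case transmission time of one full interfering frame, while with preemption only a minimum non-preemptable fragment can block. The 155-byte fragment size below is an assumed illustrative value, not the optimum derived in the paper:

```python
def guard_band_us(frame_bytes, link_mbps):
    """Guard band in microseconds: wire time of one interfering frame,
    including preamble+SFD (8 B) and interframe gap (12 B), at the given
    link rate (Mbit/s equals bits per microsecond)."""
    wire_bytes = frame_bytes + 8 + 12
    return wire_bytes * 8 / link_mbps

# Without preemption: a full 1522-byte frame must fit in the guard band.
full = guard_band_us(1522, 1000)  # gigabit Ethernet
# With preemption: only a minimum fragment (assumed 155 bytes) can block.
frag = guard_band_us(155, 1000)
```

On gigabit Ethernet this shrinks the guard band from roughly 12.3 µs to about 1.4 µs, which is the mechanism behind the end-to-end delay reduction reported above.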


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine the Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near real-time/real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data, and provides services for vehicle driving, is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated from an AV. The driving vehicle edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform-Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to the coordinate information of the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using coordinate information.
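The sensor-frame to global-frame conversion performed before upload to the cloud server amounts to a rigid-body transform applied to each point cloud. A minimal sketch, using an illustrative yaw-only rotation (a real system would carry a full 6-DOF pose from SLAM):

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation matrix about the vertical (z) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sensor_to_global(points, R, t):
    """Apply a rigid-body transform (rotation R, translation t) to an Nx3
    point cloud expressed in the sensor frame, returning global coordinates."""
    return points @ R.T + t
```

Because every edge node shares the same global frame after this step, the cloud server can fuse point clouds from different vehicles by simple accumulation.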


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1573 ◽  
Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, a camera's higher frequency (around 20 Hz) has to be reduced to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and motion relationships, and a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.
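The naive baseline that such a coarse-to-fine network improves on is a simple temporal midpoint of two consecutive sparse depth maps, which ignores scene motion entirely. An illustrative sketch, with zeros marking pixels that have no LiDAR measurement:

```python
import numpy as np

def midpoint_depth(d0, d1):
    """Naive temporal interpolation of two sparse depth maps: average depth
    where both frames have a measurement, zero (no data) elsewhere."""
    valid = (d0 > 0) & (d1 > 0)
    mid = np.zeros_like(d0)
    mid[valid] = 0.5 * (d0[valid] + d1[valid])
    return mid
```

Motion guidance matters precisely because this average is wrong wherever the scene or the sensor moved between the two frames; the learned interpolation stages replace exactly this step.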

