Real-Time Mine Road Boundary Detection and Tracking for Autonomous Truck

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1121
Author(s):  
Xiaowei Lu ◽  
Yunfeng Ai ◽  
Bin Tian

Road boundary detection is an important part of perception for autonomous driving. Detecting the boundaries of unstructured roads is difficult because there are no curbs; on mine roads there are no clear boundary lines to distinguish the area inside the road boundary from the area outside it. This paper proposes a real-time road boundary detection and tracking method using a 3D LiDAR sensor. Road boundary points are extracted from the elevated point clouds detected above the ground point cloud, according to spatial distance characteristics and angular features. Road tracking predicts and updates the boundary point information in real time in order to prevent false and missed detections. Experimental verification on mine road data shows the accuracy and robustness of the proposed algorithm.
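As a rough illustration of the extraction step described above (a sketch, not the authors' code), elevated points can be screened by height above ground, by spatial distance to their neighbours, and by the turning angle along the candidate chain. All thresholds here are hypothetical.

```python
import math

# Hypothetical thresholds for a sketch of boundary-point extraction.
MIN_ELEVATION = 0.3                 # metres above ground plane (assumed)
MAX_NEIGHBOR_DIST = 0.5             # spatial-continuity threshold (assumed)
MAX_TURN_ANGLE = math.radians(30)   # angular-feature threshold (assumed)

def extract_boundary_points(points):
    """points: list of (x, y, z). Returns candidate boundary points.

    Keeps elevated points that form a spatially continuous chain whose
    direction changes slowly, mimicking the distance/angle features."""
    elevated = [p for p in points if p[2] >= MIN_ELEVATION]
    elevated.sort(key=lambda p: (p[0], p[1]))
    boundary = []
    for prev, cur, nxt in zip(elevated, elevated[1:], elevated[2:]):
        d = math.dist(cur[:2], nxt[:2])
        v1 = (cur[0] - prev[0], cur[1] - prev[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0 or d > MAX_NEIGHBOR_DIST:
            continue
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle <= MAX_TURN_ANGLE:
            boundary.append(cur)
    return boundary
```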

2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Jun Liu ◽  
Rui Zhang

Vehicle detection is a crucial task for autonomous driving and demands high accuracy and real-time speed. Because current deep-learning object detection models are too large to be deployed on a vehicle, this paper introduces a lightweight network that modifies the feature extraction layer of YOLOv3 and improves the remaining convolution structure; the resulting Lightweight YOLO network reduces the number of parameters to a quarter. The license plate is then detected to calculate the actual vehicle width, and the distance between vehicles is estimated from that width. To address the difficult detection and low accuracy caused by the small apparent size of a license plate at long range, the paper proposes a detection and ranging fusion method based on two cameras with different focal lengths. The experimental results show that the average precision and recall of Lightweight YOLO trained on the self-built dataset are 4.43% and 3.54% lower than YOLOv3, respectively, but inference time decreases by 49 ms per frame. Road experiments in different scenes also show that the long and short focal length camera fusion method dramatically improves the accuracy and stability of ranging: the mean ranging error is less than 4%, and stable ranging reaches 100 m. The proposed method realizes real-time vehicle detection and ranging on the onboard embedded platform Jetson Xavier, satisfying the requirements of autonomous driving environment perception.
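The width-based ranging idea can be sketched with a pinhole camera model. This is an illustration of the principle, not the paper's implementation; the focal length, plate width, and switching threshold below are assumed values.

```python
FOCAL_PX = 1400.0      # camera focal length in pixels (assumed)
PLATE_WIDTH_M = 0.44   # physical license plate width in metres (assumed)

def estimate_distance(plate_width_px: float) -> float:
    """Pinhole model: distance = f * real_width / pixel_width."""
    if plate_width_px <= 0:
        raise ValueError("plate width in pixels must be positive")
    return FOCAL_PX * PLATE_WIDTH_M / plate_width_px

def fuse_ranges(short_focal_m: float, long_focal_m: float,
                switch_at_m: float = 30.0) -> float:
    """Hypothetical fusion rule: prefer the long-focal estimate at long
    range, where the plate appears too small in the short-focal image."""
    return long_focal_m if short_focal_m > switch_at_m else short_focal_m
```

A plate imaged at 61.6 px width under these assumptions would be estimated at 10 m.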


Author(s):  
Wael Farag ◽  

In this paper, a real-time road-Object Detection and Tracking (LR_ODT) method for autonomous driving is proposed. The method is based on the fusion of lidar and radar measurement data, where both sensors are installed on the ego car, and a customized Unscented Kalman Filter (UKF) is employed for their data fusion. The merits of both devices are combined using the proposed fusion approach to precisely provide both pose and velocity information for objects moving on roads around the ego car. Unlike other detection and tracking approaches, the main contribution of this work is the balanced treatment of both pose estimation accuracy and real-time performance. The proposed technique is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. Simulation studies have been carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians. Moreover, the performance of the UKF fusion is compared to that of the Extended Kalman Filter (EKF) fusion, showing its superiority: the UKF outperformed the EKF on all test cases and all state variables (24% lower average RMSE). The employed fusion technique also shows an outstanding improvement in tracking performance compared to the use of a single device (29% lower RMSE than lidar alone and 38% lower RMSE than radar alone).


2021 ◽  
pp. 1-14
Author(s):  
Wael Farag

In this paper, based on the fusion of lidar and radar measurement data, a real-time road-Object Detection and Tracking (LR_ODT) method for autonomous driving is proposed. The lidar and radar devices are installed on the ego car, and a customized Unscented Kalman Filter (UKF) is used for their data fusion. Lidars are accurate in determining objects' positions but significantly less accurate in measuring their velocities. Radars, by contrast, are more accurate in measuring objects' velocities but less accurate in determining their positions, as they have lower spatial resolution. Therefore, the merits of both sensors are combined using the proposed fusion approach to precisely provide both pose and velocity data for objects moving on roads. The Grid-Based Density-Based Spatial Clustering of Applications with Noise (GB-DBSCAN) clustering algorithm is used to detect objects and estimate their centroids from the lidar and radar raw data. Then, the object's velocity is estimated and its corresponding geometrical shape determined by the RANdom SAmple Consensus (RANSAC) algorithm. The proposed technique is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. The performance of the UKF fusion is compared to that of the Extended Kalman Filter (EKF) fusion, showing its superiority. Simulation studies have been carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians.
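At the heart of any UKF lies the unscented transform: the state distribution is represented by deterministically chosen sigma points that are propagated through the (possibly nonlinear) model. A minimal sketch of the sigma-point step is shown below; it is a textbook formulation with standard scaling parameters, not the paper's customized filter.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points plus mean/covariance weights
    for an n-dimensional Gaussian state (mean, cov)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Columns of the Cholesky factor of (n + lam) * cov are the spreads.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = w_m[0] + (1 - alpha**2 + beta)
    return pts, w_m, w_c
```

After pushing the points through the process or measurement model, the predicted mean is simply the weighted sum `w_m @ pts`.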


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Márton Pál ◽  
Fanni Vörös ◽  
István Elek ◽  
Béla Kovács

Abstract. A self-driving car is a vehicle that is able to perceive its surroundings and navigate in them without human action. Radar sensors, lasers, computer vision and GPS technologies help it drive on its own (Figure 1). They interpret the sensed information to calculate routes and navigate between obstacles and traffic elements.

Sufficiently accurate navigation and information about the current position of the vehicle are indispensable for transport. A human driver fulfils these expectations: knowledge of traffic rules and signs makes it possible to navigate through even difficult situations. Self-driving systems substitute for humans by monitoring and evaluating the surrounding environment and its objects without the driver's background knowledge. This analysis process is vulnerable: sudden or unexpected situations may occur, but high-precision navigation and background GPS databases can complement sensor-detected data.

The assistance of global navigation has been used in cars for decades. Drivers can easily plan their routes and reach their destination using car GPS units. However, these devices do not provide accurate positioning: there may be a difference of several metres from the real location. Self-driving cars also use navigation to complement sensor data. Although autonomous systems are already being tested on motorways and countryside roads, in densely built-up areas this technology faces complications due to accuracy problems. Dilution of precision (DOP) values can be extremely high in larger settlements because tall buildings may hide the southern sky (from which satellite signals are received at our latitude).

With geodetic RTK (real-time kinematic) GPS systems we can achieve centimetre-level accuracy under ideal conditions. This high-precision position data is derived from satellite-based positioning systems: measurements of the phase of the signal's carrier wave are corrected in real time by a single reference station or an interpolated virtual station.

In this research we use RTK GPS technology to build a spatial database. These measurements can also be less precise in dense cities, but during fieldwork there is time to try to eliminate inaccuracy. We chose a sample area in the inner city of Budapest, Hungary, where we located all traffic signs, pedestrian crossings and other important elements. As self-driving cars need precise position data for these terrain objects, we aimed for a maximum error of a few decimetres.

We examined whether online map providers have a feasible data structure and some base data. The implemented structure is similar to the OpenStreetMap database, which already contains some traffic lights at important crossings. With this preliminary test database we would like to filter out dangerous situations. If the camera of the car does not see a traffic sign because of a tree or a truck, information about it will be available from the database. If a pedestrian crossing is hardly visible and the sensor does not recognize it, the background GIS data will warn the car that there may be inattentive people on the road.

A test application has also been developed (Figure 2), into which our Postgres/PostGIS database records have been inserted. In the next phase of the project we will test our database in traffic: we plan to drive through the sample area and observe the GPS accuracy in the recognition of the located signs.

This research aims to achieve higher safety in the field of autonomous driving. With a refreshable cartographic GIS database in the memory of a self-driving car, there is a smaller chance of risking human life. However, maintenance demands a large amount of work, so we should concentrate only on the most important signs. The cars themselves may be able to supervise the content of the database if there are enough of them on the road. Frequent production and analysis of point clouds is also an option for getting nearer to safe automated traffic.
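As a toy illustration of the kind of lookup such a background database enables (hypothetical schema and coordinates, not the project's actual PostGIS queries), a car could be warned about signs within a radius of its RTK-corrected position:

```python
import math

# Toy sign table: (name, latitude, longitude). Coordinates are invented.
SIGNS = [
    ("pedestrian crossing", 47.4979, 19.0402),
    ("stop sign", 47.4990, 19.0450),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))

def signs_near(lat, lon, radius_m=50.0):
    """Names of signs within radius_m of the vehicle's position."""
    return [name for name, slat, slon in SIGNS
            if haversine_m(lat, lon, slat, slon) <= radius_m]
```

A production system would push this distance filter into the database as a spatial index query rather than scan all rows in application code.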


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 276 ◽  
Author(s):  
Jiyoung Jung ◽  
Sung-Ho Bae

The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps.
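The expectation-maximization idea for parallel lines can be caricatured in 2D (the paper fits parallel 3D lines and updates them incrementally): alternately assign each point to the nearer line hypothesis, then jointly refit a shared slope and per-line intercepts. This is a hedged sketch with invented initial values, not the authors' algorithm.

```python
import numpy as np

def fit_parallel_lines(xs, ys, b_init=(0.0, 1.0), iters=20):
    """Toy EM for two parallel lines y = a*x + b1 and y = a*x + b2."""
    a, (b1, b2) = 0.0, b_init
    for _ in range(iters):
        # E-step: assign each point to the closer of the two lines.
        to_first = np.abs(ys - (a * xs + b1)) < np.abs(ys - (a * xs + b2))
        # M-step: refit shared slope and both intercepts by least squares.
        A = np.column_stack([xs, to_first.astype(float),
                             (~to_first).astype(float)])
        a, b1, b2 = np.linalg.lstsq(A, ys, rcond=None)[0]
    return a, b1, b2
```

Sharing the slope across both lines is what encodes the parallelism constraint; in the paper's setting the analogous constraint keeps the detected lane lines parallel in 3D.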


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7933
Author(s):  
António Silva ◽  
Duarte Fernandes ◽  
Rafael Névoa ◽  
João Monteiro ◽  
Paulo Novais ◽  
...  

Research on deep learning applied to object detection in LiDAR data has become massively widespread in recent years, achieving notable developments, namely in precision and inference speed. These improvements have been facilitated by powerful GPU servers, which train the networks in reasonable time and whose parallel architecture allows high-performance, real-time inference. However, these features are limited in autonomous driving due to space, power, and inference-time constraints, and onboard devices are not as powerful as their counterparts used for training. This paper investigates the use of a deep learning-based method on edge devices for onboard real-time inference that is power-effective and suited to space-constrained deployments. A methodology is proposed for deploying high-end GPU-specific models on edge devices for onboard inference, consisting of a two-fold flow: studying the implications of model hyperparameters for meeting application requirements, and compressing the network to meet the board's resource limitations. A hybrid FPGA-CPU board is proposed as an effective onboard inference solution by comparing its performance on the KITTI dataset with computer performance. The achieved accuracy is comparable to the PC-based deep learning method, with the advantage of being more effective for real-time, power-limited, and space-constrained purposes.
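One common compression step for such edge deployments is post-training weight quantization. The sketch below shows symmetric linear int8 quantization as an illustration of the general idea; the paper's exact compression pipeline for the FPGA-CPU board is not reproduced here.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    The scale maps the largest absolute weight to 127, so every
    reconstructed weight is within half a quantization step of the original.
    """
    scale = float(np.abs(weights).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale reconstructs exactly
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale
```

Storing int8 codes plus one scale per tensor cuts weight memory to roughly a quarter of float32, which is the kind of saving that makes onboard inference feasible.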


2021 ◽  
Vol 13 (24) ◽  
pp. 5066
Author(s):  
Mohammad Aldibaja ◽  
Naoki Suganuma

This paper proposes a unique Graph SLAM framework to generate precise 2.5D LIDAR maps in XYZ space. A node strategy was devised to divide the road into a set of nodes, and the LIDAR point clouds are smoothly accumulated into intensity and elevation images in each node. The optimization process is decomposed: Graph SLAM is first applied to the nodes' intensity images to eliminate ghosting effects on the road surface in the XY plane. This step ensures true loop-closure events between nodes and precise estimation of common areas in the real world. Accordingly, another Graph SLAM framework was designed to bring the nodes' elevation images to the same Z level by making the altitudinal errors in the common areas as small as possible. A robust cost function is detailed to properly constitute the relationships between nodes and generate the map in the absolute coordinate system. The framework is tested against an accurate GNSS/INS-RTK system in a very challenging environment of high buildings, dense trees and longitudinal railway bridges. The experimental results verified the robustness, reliability and efficiency of the proposed framework in generating accurate 2.5D maps while eliminating the relative and global position errors in the XY and Z planes. The generated maps therefore contribute significantly to the safety of autonomous driving regardless of road structure and environmental factors.
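The elevation-alignment step can be caricatured as a small least-squares problem: given measured altitude differences between overlapping nodes, solve for a per-node Z offset, with one node anchored as the reference. This hedged sketch uses a plain least-squares cost, not the paper's robust cost function.

```python
import numpy as np

def align_elevations(n_nodes, diffs):
    """diffs: list of (i, j, dz) meaning node j's elevation image sits dz
    above node i's in their common area. Returns per-node Z offsets,
    with node 0 fixed at 0 as the reference."""
    rows, rhs = [], []
    # Anchor node 0 so the linear system has a unique solution.
    anchor = np.zeros(n_nodes)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    for i, j, dz in diffs:
        r = np.zeros(n_nodes)
        r[j], r[i] = 1.0, -1.0   # constraint: offset[j] - offset[i] ≈ dz
        rows.append(r)
        rhs.append(dz)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

With inconsistent (noisy) difference measurements, the same solve distributes the error across nodes instead of letting it accumulate along the trajectory, which is the point of formulating the alignment as a graph optimization.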


2020 ◽  
Vol 13 (2) ◽  
pp. 265-274 ◽  
Author(s):  
Wael Farag

Background: Enabling fast and reliable lane-line detection and tracking for advanced driver assistance systems (ADAS) and self-driving cars. Methods: The proposed technique is mainly a pipeline of computer vision algorithms that augment each other, taking in raw RGB images and producing the lane-line segments that represent the boundary of the road for the car. The main emphasis of the proposed technique is on simplicity and fast computation, so that it can be embedded in the affordable CPUs employed by ADAS systems. Results: Each algorithm used is described in detail, implemented, and evaluated using actual road images and videos captured by the front-mounted camera of the car. The performance of the whole pipeline is also tested and evaluated on real videos. Conclusion: The evaluation of the proposed technique shows that it reliably detects and tracks road boundaries under various conditions.

