Multitarget Vehicle Tracking and Motion State Estimation Using a Novel Driving Environment Perception System of Intelligent Vehicles

2021, Vol 2021, pp. 1-16
Author(s): Yuren Chen, Xinyi Xie, Bo Yu, Yi Li, Kunhui Lin

Multitarget vehicle tracking and motion state estimation are crucial for accurately controlling the host vehicle and preventing collisions. However, current multitarget tracking methods struggle with multivehicle scenarios because of the dynamically complex driving environment. Driving environment perception systems, as an indispensable component of intelligent vehicles, have the potential to solve this problem from the perspective of image processing. Thus, this study proposes a novel driving environment perception system for intelligent vehicles that uses deep learning methods to track multiple target vehicles and estimate their motion states. First, a panoramic segmentation neural network supporting end-to-end training is designed and implemented, composed of semantic segmentation and instance segmentation. A depth calculation model of the driving environment is established by adding a depth estimation branch to the feature extraction and fusion module of the panoramic segmentation network. These deep neural networks are trained and tested on the Mapillary Vistas and Cityscapes datasets, and the results show that they perform well, with high recognition accuracy. Then, Kalman filtering and the Hungarian algorithm are used for multitarget vehicle tracking and motion state estimation. The effectiveness of this method is tested in a simulation experiment, and the results show that the relative relations (i.e., relative speed and distance) between multiple vehicles can be estimated accurately. The findings of this study can contribute to the development of intelligent vehicles that alert drivers to possible danger, assist drivers' decision-making, and improve traffic safety.
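As a rough sketch of the tracking stage described above, the following Python pairs a constant-velocity Kalman filter with the Hungarian algorithm (via scipy.optimize.linear_sum_assignment) to associate detections with existing tracks. The state layout, noise levels, and gating distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class VehicleTrack:
    """Constant-velocity Kalman filter over state [x, y, vx, vy] (assumed layout)."""
    def __init__(self, x, y, dt=0.1):
        self.x = np.array([x, y, 0.0, 0.0])                    # state estimate
        self.P = np.eye(4) * 10.0                              # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt   # constant-velocity model
        self.H = np.eye(2, 4)                                  # we observe position only
        self.Q = np.eye(4) * 0.01                              # process noise (assumed)
        self.R = np.eye(2) * 1.0                               # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = z - self.H @ self.x                                # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(tracks, detections, gate=5.0):
    """Hungarian assignment on Euclidean distance between predictions and detections."""
    if not tracks or len(detections) == 0:
        return []
    preds = np.array([t.predict() for t in tracks])            # (T, 2)
    cost = np.linalg.norm(preds[:, None] - detections[None, :], axis=2)  # (T, D)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```

With tracks maintained this way, the relative distance and speed between any two vehicles follow directly from differences of their state vectors.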

2021, Vol 13 (15), pp. 3021
Author(s): Bufan Zhao, Xianghong Hua, Kegen Yu, Xiaoxing He, Weixing Xue, ...

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles, and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides points into multi-scale supervoxels and groups them using the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes; instead, it partitions supervoxels by judging the connection state of the edges between them. The method reaches the minimum global energy by graph cutting, obtains structural segments as completely as possible, and retains boundaries at the same time. A random forest classifier is then used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point cloud dataset and a terrestrial laser scanning (TLS) point cloud dataset, yielding overall accuracies of 97.57% and 96.39%, respectively. Object boundaries were retained well, and the method achieved good results in classifying cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and demonstrated its practicability and versatility.
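For the supervised stage, a minimal sketch of segment-level classification with scikit-learn's RandomForestClassifier appears below; the per-segment feature vectors and class set are placeholders, since the paper's actual descriptors are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-segment descriptors (e.g., mean height, planarity, point count).
# X_train: (n_segments, n_features); y_train: one object class per segment.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))        # placeholder segment features
y_train = rng.integers(0, 4, size=500)     # e.g., ground / building / car / motorcycle

clf = RandomForestClassifier(n_estimators=200, max_depth=None, n_jobs=-1)
clf.fit(X_train, y_train)

# Per-segment class probabilities would feed the unary potentials of the
# higher-order CRF refinement mentioned in the abstract.
proba = clf.predict_proba(rng.normal(size=(10, 6)))
```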



2021, Vol 69 (6), pp. 511-523
Author(s): Henrietta Lengyel, Viktor Remeli, Zsolt Szalay

The emergence of new autonomous driving systems and functions – in particular, systems that base their decisions on the output of machine learning subsystems responsible for environment perception – brings a significant change in the risks to the safety and security of transportation. These kinds of Advanced Driver Assistance Systems are vulnerable to new types of malicious attacks, and their properties are often not well understood. This paper demonstrates the theoretical and practical possibility of deliberate physical adversarial attacks against deep learning perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification. Our newly developed traffic sign stickers differ from similar methods insofar as they require no special knowledge or precision in their creation and deployment; they therefore present a realistic and severe threat to traffic safety and security. In this paper we preemptively point out the dangers and easily exploitable weaknesses that current and future systems are bound to face.
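The attack described here is physical (printed stickers), but the underlying vulnerability is easy to illustrate digitally. The sketch below is a standard one-step FGSM perturbation against an arbitrary differentiable classifier, not the authors' sticker method; the model, label, and epsilon are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """One-step FGSM: nudge the input along the sign of the loss gradient.
    `model` is any differentiable classifier; `eps` bounds the perturbation."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)    # loss w.r.t. the true class
    loss.backward()
    adv = image + eps * image.grad.sign()          # move away from the true class
    return adv.clamp(0.0, 1.0).detach()            # keep a valid pixel range
```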


Sensors, 2020, Vol 20 (20), pp. 5765
Author(s): Seiya Ito, Naoshi Kaneko, Kazuhiko Sumi

This paper proposes a novel 3D representation, namely a latent 3D volume, for joint depth estimation and semantic segmentation. Most previous studies encode an input scene (typically given as a 2D image) into a set of feature vectors arranged over a 2D plane. However, considering that the real world is three-dimensional, this 2D arrangement discards one dimension and may limit the capacity of the feature representation. In contrast, we examine the idea of arranging the feature vectors in 3D space rather than on a 2D plane. We refer to this 3D volumetric arrangement as a latent 3D volume. We show that the latent 3D volume is beneficial to depth estimation and semantic segmentation because these tasks require an understanding of the 3D structure of the scene. Our network first constructs an initial 3D volume from image features and then generates the latent 3D volume by passing the initial volume through several 3D convolutional layers. Depth regression and semantic segmentation are performed by projecting the latent 3D volume onto a 2D plane. The evaluation results show that our method outperforms previous approaches on the NYU Depth v2 dataset.
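A minimal PyTorch sketch of the pipeline, assuming a toy backbone: 2D features are lifted into an initial 3D volume, refined by 3D convolutions into the latent 3D volume, and projected back to a 2D plane for the depth and segmentation heads. Channel counts and the lifting/projection operators are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentVolumeNet(nn.Module):
    """Toy version: 2D features -> initial 3D volume -> 3D convs -> 2D task heads."""
    def __init__(self, feat_ch=32, depth_bins=16, num_classes=13):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_ch * depth_bins, 3, padding=1)  # placeholder backbone
        self.volume = nn.Sequential(                                     # latent 3D volume
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(feat_ch * depth_bins, 1, 1)          # depth regression
        self.seg_head = nn.Conv2d(feat_ch * depth_bins, num_classes, 1)  # semantic logits
        self.feat_ch, self.depth_bins = feat_ch, depth_bins

    def forward(self, img):
        b, _, h, w = img.shape
        x = self.encoder(img).view(b, self.feat_ch, self.depth_bins, h, w)  # lift to 3D
        x = self.volume(x)                                                  # refine in 3D
        flat = x.view(b, self.feat_ch * self.depth_bins, h, w)              # project to 2D
        return self.depth_head(flat), self.seg_head(flat)

depth, seg = LatentVolumeNet()(torch.randn(1, 3, 64, 64))
```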


Sensors, 2020, Vol 20 (8), pp. 2251
Author(s): Jikai Liu, Pengfei Wang, Fusheng Zha, Wei Guo, Zhenyu Jiang, ...

The motion state of a quadruped robot in operation changes constantly. Drift caused by accumulated error limits the usefulness of the inertial measurement unit (IMU). Even when multi-sensor fusion technology is adopted, the quadruped robot loses its ability to respond to state changes after a while because the filter gain tends toward a constant. To solve this problem, this paper proposes a strong tracking mixed-degree cubature Kalman filter (STMCKF) method. Based on the system characteristics of the quadruped robot, this method fuses estimates from forward kinematics and the IMU. The combination of the traditional strong tracking cubature Kalman filter (TSTCKF) and strong tracking is improved through demonstration. A new method for calculating the fading factor matrix is proposed, which reduces the number of sampling passes from three to one and saves significant computation time. At the same time, the state estimation accuracy is improved from third-degree to fifth-degree Taylor series accuracy. The proposed algorithm automatically switches its working mode according to real-time supervision of the motion state and greatly improves the state estimation performance of the quadruped robot system, exhibiting strong robustness and excellent real-time performance. Finally, a comparative study of STMCKF and the extended Kalman filter (EKF) commonly used in quadruped robot systems is carried out. The results show that STMCKF offers high estimation accuracy and a reliable ability to cope with sudden changes without significantly increasing computation time, indicating the correctness of the algorithm and its great application value for quadruped robot systems.
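The STMCKF itself involves cubature-point sampling and a fading factor matrix; as a much-simplified linear analogue of the strong tracking idea, the sketch below inflates the predicted covariance with a scalar fading factor computed from the innovation sequence, so the gain keeps responding to abrupt state changes. The recursion follows the generic strong-tracking literature, not the paper's new fading factor calculation.

```python
import numpy as np

def strong_tracking_kf_step(x, P, z, F, H, Q, R, V_prev=None, rho=0.95, beta=1.0):
    """One predict/update step of a linear Kalman filter with a scalar fading
    factor -- a simplified analogue of strong tracking filtering, not the
    paper's STMCKF (which uses cubature points and a fading factor matrix)."""
    x_pred = F @ x                                   # state prediction
    gamma = z - H @ x_pred                           # innovation

    # Smoothed innovation covariance estimate (standard strong-tracking recursion).
    if V_prev is None:
        V = np.outer(gamma, gamma)
    else:
        V = (rho * V_prev + np.outer(gamma, gamma)) / (1.0 + rho)

    # Fading factor: lambda > 1 inflates the predicted covariance so the filter
    # gain does not settle to a constant when the state changes abruptly.
    N = V - beta * R - H @ Q @ H.T
    M = H @ F @ P @ F.T @ H.T
    lam = max(1.0, np.trace(N) / np.trace(M))

    P_pred = lam * (F @ P @ F.T) + Q                 # faded covariance prediction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)              # gain recomputed every step
    x_new = x_pred + K @ gamma
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, V
```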


Sensors, 2019, Vol 19 (19), pp. 4350
Author(s): Julie Foucault, Suzanne Lesecq, Gabriela Dudnik, Marc Correvon, Rosemary O’Keeffe, ...

Environment perception is crucial for the safe navigation of vehicles and robots that must detect obstacles in their surroundings. It is also of paramount interest for the navigation of human beings in reduced-visibility conditions. Obstacle avoidance systems typically combine multiple sensing technologies (e.g., LiDAR, radar, ultrasound, and vision) to detect various types of obstacles under different lighting and weather conditions, with the drawbacks of a given technology offset by the others. These systems require powerful computational capability to fuse the mass of data, which limits their use to high-end vehicles and robots. INSPEX delivers a low-power, small-size, and lightweight environment perception system compatible with portable and/or wearable applications. This requires miniaturizing and optimizing existing range sensors of different technologies to meet the user's requirements for obstacle detection. These sensors consist of a LiDAR, a time-of-flight sensor, an ultrasound sensor, and an ultra-wideband radar with measurement ranges of 10 m, 4 m, 2 m, and 10 m, respectively. Integration of a data fusion technique is also required to build a model of the user's surroundings and provide feedback about the localization of harmful obstacles. As a primary demonstrator, the INSPEX device will be mounted on a white cane.
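One plausible shape for the fusion step is sketched below: each sensor reports ranges only within its own envelope (the envelopes come from the abstract), and overlapping returns are combined by inverse-variance weighting. The noise figures are invented placeholders, not INSPEX specifications.

```python
import numpy as np

# (max range in m, assumed range noise std in m) per sensor; envelopes from the
# abstract, noise figures are placeholders.
SENSORS = {"lidar": (10.0, 0.05), "tof": (4.0, 0.02),
           "ultrasound": (2.0, 0.03), "uwb": (10.0, 0.10)}

def fuse_ranges(readings):
    """Inverse-variance fusion of per-sensor range readings to one obstacle distance.
    `readings` maps sensor name -> measured range in metres (None if no return)."""
    weights, values = [], []
    for name, r in readings.items():
        max_range, sigma = SENSORS[name]
        if r is not None and r <= max_range:       # discard out-of-envelope returns
            weights.append(1.0 / sigma ** 2)
            values.append(r)
    if not values:
        return None                                # no obstacle detected
    return float(np.average(values, weights=weights))

print(fuse_ranges({"lidar": 3.1, "tof": 3.0, "ultrasound": None, "uwb": 3.3}))
```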



Author(s): Ran Duan, Shuangyue Yu, Guang Yue, Richard Foulds, Chen Feng, ...

Wearable environment perception systems have great potential for improving the autonomous control of mobility aids [1]. A visual perception system can provide abundant information about the surroundings to assist task-oriented control such as navigation, obstacle avoidance, and object detection, which are essential functions for wearers who are visually impaired or blind [2, 3, 4]. Moreover, vision-based terrain sensing is a critical input to decision-making in the intelligent control system, especially for users who find it difficult to achieve a seamless control-mode transition manually.
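As a toy illustration of how a vision-based terrain label could gate control-mode transitions for a mobility aid, the sketch below holds the current mode unless the classifier is confident; the terrain labels, modes, and threshold are invented for illustration.

```python
from enum import Enum

class Mode(Enum):
    LEVEL_WALK = 1
    STAIR_ASCEND = 2
    STAIR_DESCEND = 3
    RAMP = 4

# Hypothetical mapping from a vision-based terrain label to a control mode.
TERRAIN_TO_MODE = {
    "flat": Mode.LEVEL_WALK,
    "stairs_up": Mode.STAIR_ASCEND,
    "stairs_down": Mode.STAIR_DESCEND,
    "slope": Mode.RAMP,
}

def select_mode(terrain_label, confidence, current, threshold=0.8):
    """Switch modes only on a confident terrain prediction; otherwise hold the
    current mode -- one way to keep transitions seamless for the wearer."""
    if confidence >= threshold:
        return TERRAIN_TO_MODE.get(terrain_label, current)
    return current
```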

