Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

Sensors ◽  
2012 ◽  
Vol 12 (9) ◽  
pp. 12386-12404 ◽  
Author(s):  
Long Chen ◽  
Qingquan Li ◽  
Ming Li ◽  
Liang Zhang ◽  
Qingzhou Mao


2021 ◽  
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS systems to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but also formal verification to account for all possible traffic scenarios. A new verification approach, which combines two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment to verify its decision-making, and (3) feedback control systems for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM is used to provide the RA with the probability of success as it decides which action to take at run time. This allows the RA to select the movement with the highest probability of success from several generated alternatives. The framework has been tested on a new AV software platform built using the Robot Operating System (ROS) and the virtual reality (VR) Gazebo simulator, including a parking lot scenario to test the feasibility of the approach in a realistic environment. A practical implementation of the AV system was also carried out on an experimental testbed.
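The abstract describes a run-time loop in which PRISM supplies the RA with success probabilities for several generated alternatives. As a minimal sketch of that selection step only (all names are illustrative, and a stub dictionary stands in for an actual PRISM query):

```python
# Hypothetical sketch of the run-time selection step: the rational agent
# queries a probabilistic checker (a stub here, standing in for PRISM)
# for each candidate manoeuvre's probability of success and commits to
# the best one. All names are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Manoeuvre:
    name: str
    waypoints: list  # self-planned path handed to the feedback controller

def select_manoeuvre(candidates: Sequence[Manoeuvre],
                     p_success: Callable[[Manoeuvre], float]) -> Manoeuvre:
    """Pick the candidate whose verified success probability is highest."""
    return max(candidates, key=p_success)

# Usage with a stub in place of a real PRISM query:
options = [Manoeuvre("keep_lane", []), Manoeuvre("overtake", [])]
stub_probs = {"keep_lane": 0.97, "overtake": 0.83}
print(select_manoeuvre(options, lambda m: stub_probs[m.name]).name)  # keep_lane
```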


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4350 ◽  
Author(s):  
Julie Foucault ◽  
Suzanne Lesecq ◽  
Gabriela Dudnik ◽  
Marc Correvon ◽  
Rosemary O’Keeffe ◽  
...  

Environment perception is crucial for the safe navigation of vehicles and robots, which must detect obstacles in their surroundings. It is also of paramount interest for the navigation of human beings in reduced-visibility conditions. Obstacle avoidance systems typically combine multiple sensing technologies (i.e., LiDAR, radar, ultrasound, and vision) to detect various types of obstacles under different lighting and weather conditions, with the drawbacks of a given technology being offset by the others. These systems require powerful computational capability to fuse the mass of data, which limits their use to high-end vehicles and robots. INSPEX delivers a low-power, small-size, and lightweight environment perception system that is compatible with portable and/or wearable applications. This requires miniaturizing and optimizing existing range sensors of different technologies to meet the user's requirements in terms of obstacle detection capabilities. These sensors consist of a LiDAR, a time-of-flight sensor, an ultrasound sensor, and an ultra-wideband radar, with measurement ranges of 10 m, 4 m, 2 m, and 10 m, respectively. Integration of a data fusion technique is also required to build a model of the user's surroundings and provide feedback about the localization of harmful obstacles. As a primary demonstrator, the INSPEX device will be mounted on a white cane.
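One fusion constraint implied by those figures is that each sensor's reading is only trustworthy within its stated maximum range. A minimal sketch (not INSPEX code; sensor names are assumptions) that discards out-of-range readings before taking the nearest obstacle might look like this:

```python
# Illustrative sketch of combining heterogeneous range sensors into one
# nearest-obstacle estimate. Each reading is ignored beyond its sensor's
# maximum range, as listed in the text above. Sensor names are assumed.
MAX_RANGE_M = {"lidar": 10.0, "tof": 4.0, "ultrasound": 2.0, "uwb_radar": 10.0}

def nearest_obstacle(readings: dict[str, float]) -> float | None:
    """Return the closest valid range (m) across sensors, or None."""
    valid = [
        r for sensor, r in readings.items()
        if 0.0 < r <= MAX_RANGE_M.get(sensor, 0.0)
    ]
    return min(valid) if valid else None

# Example: the ultrasound reading (5.1 m) exceeds its 2 m range and is dropped.
print(nearest_obstacle({"lidar": 3.2, "tof": 2.9, "ultrasound": 5.1, "uwb_radar": 3.4}))
# -> 2.9
```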


Author(s):  
Ran Duan ◽  
Shuangyue Yu ◽  
Guang Yue ◽  
Richard Foulds ◽  
Chen Feng ◽  
...  

Wearable environment perception systems have great potential for improving the autonomous control of mobility aids [1]. A visual perception system can provide abundant information about the surroundings to assist task-oriented control such as navigation, obstacle avoidance, and object detection, which are essential functions for wearers who are visually impaired or blind [2, 3, 4]. Moreover, vision-based terrain sensing is a critical input to decision-making in the intelligent control system, especially for users who find it difficult to achieve a seamless control mode transition manually.
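A minimal sketch of the kind of vision-driven control mode transition this paragraph argues for, assuming a terrain classifier's label as input; the terrain labels, mode names, and mapping below are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of vision-driven control mode transitions for a
# mobility aid: a terrain classifier's label selects the control mode,
# sparing the wearer a manual switch. Labels and modes are assumptions.
TERRAIN_TO_MODE = {
    "flat_ground": "level_walking",
    "up_stairs": "stair_ascent",
    "down_stairs": "stair_descent",
    "ramp": "slope_walking",
}

def next_mode(terrain_label: str, current_mode: str) -> str:
    """Switch modes on a recognized terrain; otherwise hold the current mode."""
    return TERRAIN_TO_MODE.get(terrain_label, current_mode)

print(next_mode("up_stairs", "level_walking"))  # -> stair_ascent
print(next_mode("unknown", "level_walking"))    # -> level_walking (hold)
```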


2021 ◽  
Author(s):  
Rio Ariesta Sasmono ◽  
Muhammad Iqbal Anggoro Agung ◽  
Yul Yunazwin Nazaruddin ◽  
Joshua Abel Oktavianus ◽  
Gilbert Tjahjono

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Yuren Chen ◽  
Xinyi Xie ◽  
Bo Yu ◽  
Yi Li ◽  
Kunhui Lin

Multitarget vehicle tracking and motion state estimation are crucial for controlling the host vehicle accurately and preventing collisions. However, current multitarget tracking methods struggle with multivehicle scenarios in dynamically complex driving environments. Driving environment perception systems, an indispensable component of intelligent vehicles, have the potential to solve this problem from the perspective of image processing. Thus, this study proposes a novel driving environment perception system for intelligent vehicles that uses deep learning methods to track multitarget vehicles and estimate their motion states. First, a panoptic segmentation neural network that supports end-to-end training is designed and implemented, composed of semantic segmentation and instance segmentation branches. A depth calculation model of the driving environment is established by adding a depth estimation branch to the feature extraction and fusion module of the panoptic segmentation network. These deep neural networks were trained and tested on the Mapillary Vistas and Cityscapes datasets, and the results showed that the methods performed well, with high recognition accuracy. Then, Kalman filtering and the Hungarian algorithm are used for multitarget vehicle tracking and motion state estimation. The effectiveness of this method was tested in a simulation experiment, and the results showed that the relative relations (i.e., relative speed and distance) between multiple vehicles can be estimated accurately. The findings of this study can contribute to the development of intelligent vehicles that alert drivers to possible danger, assist drivers' decision-making, and improve traffic safety.
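The tracking stage named here (Kalman filtering plus the Hungarian algorithm) is standard; a minimal sketch under a constant-velocity motion model, using SciPy's linear_sum_assignment for the Hungarian step, could look as follows. The matrices and numbers are illustrative, not the paper's.

```python
# Illustrative sketch (not the paper's code) of the tracking stage:
# constant-velocity Kalman prediction per track, then Hungarian
# assignment of detections to predicted tracks by Euclidean distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

F = np.array([[1, 0, 1, 0],    # state transition for [x, y, vx, vy]
              [0, 1, 0, 1],    # with a unit time step
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # measurement model: position only
              [0, 1, 0, 0]], dtype=float)

def predict(states):
    """Kalman prediction step (mean only) for each track."""
    return [F @ s for s in states]

def associate(states, detections):
    """Match detections to predicted tracks with the Hungarian algorithm."""
    cost = np.array([[np.linalg.norm(H @ s - d) for d in detections]
                     for s in states])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

tracks = [np.array([0.0, 0.0, 1.0, 0.0]), np.array([10.0, 5.0, -1.0, 0.0])]
detections = [np.array([9.1, 5.0]), np.array([1.0, 0.1])]
print(associate(predict(tracks), detections))  # -> [(0, 1), (1, 0)]
```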


Integration ◽  
2017 ◽  
Vol 59 ◽  
pp. 148-156 ◽  
Author(s):  
Weijing Shi ◽  
Mohamed Baker Alawieh ◽  
Xin Li ◽  
Huafeng Yu

2021 ◽  
Vol 10 (4) ◽  
pp. 66
Author(s):  
Abderraouf Khezaz ◽  
Manolo Dulva Hina ◽  
Hongyu Guan  ◽  
Amar Ramdane-Cherif 

An autonomous vehicle relies on sensors to perceive its surroundings. However, multiple factors can hinder a sensor's proper functioning, such as bad weather or poor lighting conditions. Studies have shown that rainfall and fog lead to reduced visibility, which is one of the main causes of accidents. This work proposes the use of a drone to enhance the vehicle's perception, making use of both its embedded sensors and its advantageous 3D positioning. The environment perception and vehicle/Unmanned Aerial Vehicle (UAV) interactions are managed by a knowledge base in the form of an ontology, and logical rules are used to detect and infer the environmental context and to manage the UAV. The model was tested and validated in a simulation built in Unity.
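As a plain-Python stand-in for the ontology-plus-rules layer described here (the paper uses an actual ontology; the facts, predicates, and rule conclusions below are invented for illustration):

```python
# Illustrative stand-in (plain Python, not the paper's ontology) for the
# rule layer described above: facts about the environment trigger rules
# that infer context and a UAV action. All predicates are assumptions.
facts = {"rainfall": "heavy", "visibility_m": 40, "uav_battery_pct": 78}

RULES = [
    (lambda f: f["visibility_m"] < 50 and f["uav_battery_pct"] > 30,
     "deploy_uav_ahead_of_vehicle"),
    (lambda f: f["rainfall"] == "heavy",
     "context:reduced_visibility"),
]

def infer(facts: dict) -> list[str]:
    """Fire every rule whose condition holds and collect its conclusion."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(infer(facts))
# -> ['deploy_uav_ahead_of_vehicle', 'context:reduced_visibility']
```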

