Simulation of a multispectral, multicamera, off-road autonomous vehicle perception system with Virtual Autonomous Navigation Environment (VANE)

2015 ◽  
Author(s):  
David R. Chambers ◽  
Jason Gassaway ◽  
Christopher Goodin ◽  
Phillip J. Durst


2021 ◽ 
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but also formal verification to account for all possible traffic scenarios. A new verification approach, combining two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment for verification of decision-making, and (3) a feedback control system for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM provides the RA with the probability of success as it decides to take an action at run time, allowing the RA to select the movement with the highest probability of success from several generated alternatives. The framework has been tested on a new AV software platform built using the Robot Operating System (ROS) and the virtual reality (VR) Gazebo simulator, including a parking-lot scenario to test the feasibility of the approach in a realistic environment. A practical implementation of the AV system was also carried out on an experimental testbed.
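The run-time step described above, choosing among generated alternatives by probability of success, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the candidate names are hypothetical, and the probabilities, which the paper obtains from PRISM queries, are stubbed with fixed values.

```python
# Hypothetical sketch of run-time manoeuvre selection. In the paper, the
# probability of success for each candidate comes from a PRISM query;
# here those probabilities are hard-coded for illustration.

def select_manoeuvre(candidates):
    """Return the candidate with the highest probability of success."""
    return max(candidates, key=lambda c: c["p_success"])

candidates = [
    {"name": "overtake",    "p_success": 0.72},
    {"name": "follow",      "p_success": 0.95},
    {"name": "change_lane", "p_success": 0.81},
]

best = select_manoeuvre(candidates)
print(best["name"])  # prints: follow
```

In the paper's framework, each candidate would correspond to a planned movement whose success probability is computed by model checking at run time.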


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 297
Author(s):  
Ali Marzoughi ◽  
Andrey V. Savkin

We study problems of intercepting single and multiple invasive intruders on the boundary of a planar region by employing a team of autonomous unmanned surface vehicles. First, the problem of intercepting a single intruder is studied; the proposed strategy is then extended to intercepting multiple intruders on the region boundary. Based on the proposed decentralised motion control algorithm and decision-making strategy, each autonomous vehicle intercepts any intruder that attempts to leave the region by detecting the most vulnerable point of the boundary. An efficient and simple rule-based control algorithm is developed for navigating the autonomous vehicles along the boundary of the sea region. The proposed algorithm is computationally simple and easily implementable in real-life intruder-interception applications. In this paper, we obtain necessary and sufficient conditions for the existence of a real-time solution to the considered intruder-interception problem. The effectiveness of the proposed method is confirmed by computer simulations with both single and multiple intruders.
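The notion of a "most vulnerable point of the boundary" can be illustrated with a simplified geometric sketch. Assuming, purely for illustration, a circular region centred at the origin and an intruder moving in a straight line (the paper treats a general planar region and does not prescribe this model), the predicted exit point is where the intruder's ray crosses the boundary:

```python
import math

def exit_point(p, v, R):
    """Predicted boundary crossing for an intruder at position p moving
    with constant velocity v inside a circle of radius R centred at the
    origin: solve |p + t*v|^2 = R^2 for the positive root t."""
    a = v[0] ** 2 + v[1] ** 2
    b = 2.0 * (p[0] * v[0] + p[1] * v[1])
    c = p[0] ** 2 + p[1] ** 2 - R ** 2
    t = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return (p[0] + t * v[0], p[1] + t * v[1])

# Intruder at the centre heading due east inside a circle of radius 5
print(exit_point((0.0, 0.0), (1.0, 0.0), 5.0))  # prints: (5.0, 0.0)
```

A defending vehicle constrained to the boundary could then steer toward this predicted crossing; the paper's decentralised algorithm decides which vehicle does so for each intruder.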


2021 ◽  
Vol 2 ◽  
Author(s):  
Jeffery Petit ◽  
Camilo Charron ◽  
Franck Mars

Autonomous navigation becomes complex when it is performed in an environment that lacks road signs and includes a variety of users, including vulnerable pedestrians. This article deals with the perception of collision risk from the viewpoint of a passenger sitting in the driver's seat who has delegated total control of their vehicle to an autonomous system. The proposed study is based on an experiment that used a fixed-base driving simulator, conducted with a group of 20 volunteer participants. Scenarios were developed to simulate avoidance manoeuvres involving pedestrians walking at 4.5 kph and an autonomous vehicle that was otherwise driving in a straight line at 30 kph. The main objective was to compare two indicators of risk perception: subjective risk assessments obtained with an analogue handset provided to the participants, and electrodermal activity (EDA) measured using skin conductance sensors. The relationship between these two types of measures, which possibly relates to two distinct systems of risk perception, is not unequivocally described in the literature. This experiment addresses this relationship by manipulating two factors: the time-to-collision (TTC) at the initiation of a pedestrian avoidance manoeuvre, and the lateral offset left between the vehicle and the pedestrian. These manipulations of vehicle dynamics made it possible to simulate different safety margins with respect to pedestrians during avoidance manoeuvres. The conditional dependencies between the two systems and the manipulated factors were studied using hybrid Bayesian networks, selecting the best network structure according to the Bayesian information criterion. The results demonstrate that reducing safety margins increases risk perception according to both types of indicators; however, the increase in subjective risk is more pronounced than the physiological response. While the indicators cannot be considered redundant, data modeling suggests that the two risk perception systems are not independent.
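The structure-selection step above can be sketched in a few lines. This is a generic illustration of scoring candidate models with the Bayesian information criterion (lower is better) and keeping the best one; the log-likelihoods, parameter counts, and structure names below are invented for the example and are not taken from the paper.

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k * ln(n) - 2 * ln(L).
    Lower values indicate a better trade-off between fit and complexity."""
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical scores for two candidate network structures fitted to
# data from n = 20 participants (values are illustrative only).
n = 20
scores = {
    "independent_indicators": bic(log_likelihood=-55.0, k=4, n=n),
    "dependent_indicators":   bic(log_likelihood=-48.0, k=6, n=n),
}
best = min(scores, key=scores.get)
print(best)  # prints: dependent_indicators
```

In this toy comparison, the richer structure wins despite its extra parameters because its fit improvement outweighs the BIC complexity penalty, mirroring the paper's finding that the two perception systems are not independent.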


2021 ◽  
Author(s):  
Sven Gastauer ◽  
Jeffrey S. Ellen ◽  
Mark D. Ohman

<p><em>Zooglider</em> is an autonomous buoyancy-driven ocean glider designed and built by the Instrument Development Group at Scripps. <em>Zooglider</em> includes a low power camera with a telecentric lens for shadowgraph imaging and two custom active acoustics echosounders (operated at 200/1000 kHz).  A passive acoustic hydrophone records vocalizations from marine mammals, fishes, and ambient noise.  The imaging system (<em>Zoocam</em>) quantifies zooplankton and ‘marine snow’ as they flow through a sampling tunnel within a well-defined sampling volume. Other sensors include a pumped Conductivity-Temperature-Depth probe and Chl-<em>a</em> fluorometer.  An acoustic altimeter permits autonomous navigation across regions of abrupt seafloor topography, including submarine canyons and seamounts.  Vertical sampling resolution is typically 5 cm, maximum operating depth is ~500 m, and mission duration is up to 50 days.  Adaptive sampling is enabled by telemetry of measurements at each surfacing.  Our post-deployment processing methodology classifies the optical images using advanced Deep Learning methods that utilize context metadata.  <em>Zooglider</em> permits in situ measurements of mesozooplankton and marine snow - and their natural, three-dimensional orientation - in relation to other biotic and physical properties of the ocean water column.  <em>Zooglider</em> resolves micro-scale patches, which are important for predator-prey interactions and biogeochemical cycling. </p>


2021 ◽  
Author(s):  
Rio Ariesta Sasmono ◽  
Muhammad Iqbal Anggoro Agung ◽  
Yul Yunazwin Nazaruddin ◽  
Joshua Abel Oktavianus ◽  
Gilbert Tjahjono

2021 ◽  
Vol 40 ◽  
pp. 03011
Author(s):  
Vighnesh Devane ◽  
Ganesh Sahane ◽  
Hritish Khairmode ◽  
Gaurav Datkhile

Lane detection is a developing technology implemented in vehicles to enable autonomous navigation. Most lane detection systems are designed for properly structured roads and rely on the existence of markings. The main shortcoming of these approaches is that they may give inaccurate results, or fail entirely, when markings are unclear or absent. In this study, one such approach for detecting lanes on an unmarked road is reviewed, followed by an improved approach. Both approaches are based on digital image processing techniques and rely purely on vision (camera) data. The main aim is to obtain a real-time curve value that assists the driver or autonomous vehicle in taking the required turns and staying on the road.
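The idea of a "curve value" derived from camera data can be sketched as follows. This is a simplified, hypothetical illustration, not the paper's pipeline: a real vision pipeline would first grayscale, threshold, and perspective-warp the frame; here that preprocessing is assumed done, and the input is already a binarised road mask.

```python
# Minimal sketch: derive a signed steering offset from a binarised
# road mask (1 = road pixel, row 0 = top of the warped frame).
# The preprocessing (thresholding, perspective warp) is assumed.

def curve_value(mask):
    """Average signed offset of the road centre from the image centre,
    in pixels; positive means the road lies to the right of centre."""
    width = len(mask[0])
    offsets = []
    for row in mask:
        cols = [x for x, v in enumerate(row) if v]
        if cols:
            centre = sum(cols) / len(cols)
            offsets.append(centre - (width - 1) / 2)
    return sum(offsets) / len(offsets) if offsets else 0.0

# Toy 4x7 mask whose road region sits slightly right of centre
mask = [
    [0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0, 0],
]
print(curve_value(mask))  # prints: 0.5
```

A controller could feed this offset (or its change across rows, which indicates curvature) into the steering command that keeps the vehicle on the road.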


Author(s):  
Subbulakshmi T. ◽  
Balaji N.

This article presents a platform for autonomous vehicle architecture, navigation optimization and mobility services. The basic approach is to develop an intelligent agent that creates a safe journey and redefines the world of transportation. The goal is to eliminate human driving errors and save human lives in accidents. AI robots are a concept of future transportation with full automation and self-learning. Velodyne laser sensors are used for obstacle detection and autonomous navigation of ground vehicles, and to create 3D images of the surroundings so that navigation and control are optimized. In this article, the accessibility of the existing system is optimized through multiple features: agent accessibility is improved, and users can access the vehicles in different ways, such as mobile apps, speech recognition and gestures. This article concentrates on the mobility services of autonomous vehicles.
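Obstacle detection from a Velodyne-style point cloud can be illustrated with one of its simplest forms, a ground-plane height filter. This is a generic sketch, not the article's method: the sensor frame is assumed z-up with the ground near z = 0, and the thresholds are illustrative.

```python
# Hedged sketch: split LiDAR returns into ground vs obstacle points by a
# simple height threshold. Points are (x, y, z) tuples in metres in the
# sensor frame; ground_max_z and max_range are illustrative values.

def detect_obstacles(points, ground_max_z=0.2, max_range=30.0):
    """Return points above the ground threshold and within sensor range."""
    obstacles = []
    for x, y, z in points:
        in_range = (x * x + y * y) ** 0.5 <= max_range
        if z > ground_max_z and in_range:
            obstacles.append((x, y, z))
    return obstacles

cloud = [
    (5.0, 0.0, 0.05),   # ground return
    (8.0, 1.0, 1.4),    # obstacle (e.g. another vehicle)
    (40.0, 2.0, 2.0),   # beyond sensor range, discarded
    (3.0, -2.0, 0.9),   # nearby obstacle (e.g. pedestrian)
]
print(detect_obstacles(cloud))
```

Production systems instead fit the ground plane (e.g. with RANSAC) and cluster the remaining points into objects, but the height filter conveys the basic idea of separating drivable ground from obstacles.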


Integration ◽  
2017 ◽  
Vol 59 ◽  
pp. 148-156 ◽  
Author(s):  
Weijing Shi ◽  
Mohamed Baker Alawieh ◽  
Xin Li ◽  
Huafeng Yu
