Development of Automatic Steering System by Modeling Human Behavior Based on Optical Flow

2015 ◽  
Vol 27 (2) ◽  
pp. 136-145 ◽  
Author(s):  
Yuki Okafuji ◽  
Takanori Fukao ◽  
Hiroshi Inou

[Figure: Manipulated optical flow field]

Recently, various driving support systems have been developed to improve safety. However, because drivers occasionally feel that an automated maneuver is unnatural, such systems need to be designed around the information that drivers actually perceive. We therefore focused on optical flow, one type of visual information that humans use and that can improve driving feel. Humans are thought to perceive the direction of self-motion from optical flow and to use it while driving. By applying an optical flow model to automatic steering systems, a more human-oriented system may be developed. In this paper, we derive the focus of expansion (FOE) in the camera frame, which corresponds to the direction of self-motion in the optical flow field, and propose a nonlinear control method based on the FOE. The effectiveness of the proposed method was verified through a vehicle simulation, and the results showed that the proposed method reproduces human steering behavior. These results suggest that this approach may serve as a foundation for human-oriented system designs.
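As a rough illustration of the geometry the abstract relies on (not the paper's actual controller), the FOE of a purely translating camera is the projection of the translation direction onto the image plane; every flow vector radiates from this point. The function name and focal-length parameter below are assumptions for the sketch:

```python
import numpy as np

def focus_of_expansion(T, f=1.0):
    """Project the camera translation T = (Tx, Ty, Tz) onto the image
    plane with focal length f. For pure translation, all optical flow
    vectors radiate from this point, so it marks the perceived
    direction of self-motion."""
    Tx, Ty, Tz = T
    if abs(Tz) < 1e-9:
        # No forward motion: flow is uniform and the FOE is at infinity.
        raise ValueError("FOE undefined for purely lateral motion")
    return np.array([f * Tx / Tz, f * Ty / Tz])

# Driving mostly straight ahead with a slight lateral drift:
foe = focus_of_expansion([0.2, 0.0, 10.0], f=1.0)
```

A steering law of the kind the abstract describes could then drive the FOE toward a desired point on the road ahead, though the paper's specific nonlinear control design is not reproduced here.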

Author(s):  
Kerstan S. Mork ◽  
Patricia R. DeLucia

Head-on collisions result in a substantial number of fatalities. To detect head-on collisions, drivers must effectively judge the direction, or heading, of their own vehicle in relation to the heading of oncoming vehicles. In our previous study, we used computer simulations of self-motion through a traffic scene to measure judgments about whether a head-on collision was imminent. Results suggested that judgments about head-on collisions are affected by both the optical flow information provided by the centerline and the optical flow information provided by the oncoming car. The objective of the current study was to further examine the effect of different components of the optical flow pattern on judgments of head-on collisions. We measured judgments about head-on collisions while manipulating local optical flow from the oncoming car and global optical flow from the background scenery. Our results suggest that visual information about the oncoming car's motion was more effective than visual information about self-motion. The implication is that it may be beneficial for drivers to focus greater attention on information about the oncoming car's motion in order to improve judgments about head-on collisions. Further research is needed to evaluate this possibility.


Author(s):  
Huiran Wang ◽  
Qidong Wang ◽  
Wuwei Chen ◽  
Linfeng Zhao ◽  
Dongkui Tan

To reduce the adverse effect of the functional insufficiency of the steering system on the accuracy of path tracking, a path tracking approach considering safety of the intended functionality is proposed in this paper by coordinating automatic steering and differential braking. The proposed method adopts a hierarchical architecture consisting of a coordinated control layer and an execution control layer. In the coordinated control layer, an extension controller considering the functional insufficiency of the steering system, tire force characteristics, and vehicle driving stability is proposed to determine the weight coefficients of automatic steering and differential braking, and a model predictive controller is designed to calculate the desired front wheel angle and additional yaw moment. In the execution control layer, an H∞ steering angle controller considering external disturbances and parameter uncertainty is designed to track the desired front wheel angle, and a braking force distribution module is used to determine the wheel cylinder pressure of the controlled wheels. Both simulation and experimental results show that the proposed method can overcome the functional insufficiency of the steering system and improve the accuracy of path tracking while maintaining the stability of the autonomous vehicle.
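A minimal sketch of the coordination idea in the hierarchical layer: when the steering actuator cannot deliver the requested front-wheel angle, the shortfall is shifted to differential braking via weight coefficients. All names and the simple saturation-based weighting rule are illustrative assumptions; the paper's actual method uses an extension controller and model predictive control, which are not reproduced here.

```python
def coordinate_actuators(delta_req, delta_max, M_total):
    """Split a requested front-wheel angle (rad) and a total yaw-moment
    demand (N*m) between automatic steering and differential braking.
    When the steering request exceeds the actuator limit (functional
    insufficiency), the unserved fraction is routed to braking.
    Illustrative only, not the paper's extension/MPC design."""
    if delta_req == 0.0:
        w_steer = 1.0                       # nothing to saturate
    else:
        # Fraction of the request the steering actuator can serve.
        w_steer = min(1.0, delta_max / abs(delta_req))
    w_brake = 1.0 - w_steer
    delta_cmd = w_steer * delta_req         # command sent to steering
    M_brake = w_brake * M_total             # yaw moment via differential braking
    return delta_cmd, M_brake

# Steering can deliver only 0.05 rad of a 0.08 rad request;
# the remaining 37.5% of the yaw-moment demand goes to braking.
delta_cmd, M_brake = coordinate_actuators(0.08, 0.05, 2000.0)
```

The real controller also accounts for tire force limits and driving stability when choosing the weights; this sketch only captures the actuator-saturation case.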


2010 ◽  
Vol 5 (8) ◽  
pp. 386-386
Author(s):  
W. B. Thompson ◽  
B. J. Mohler ◽  
S. H. Creem-Regehr
2017 ◽  
Vol 30 (1) ◽  
pp. 65-90 ◽  
Author(s):  
Séamas Weech ◽  
Nikolaus F. Troje

Studies of the illusory sense of self-motion elicited by a moving visual surround ('vection') have revealed key insights about how sensory information is integrated. Vection usually occurs after a delay of several seconds following visual motion onset, whereas self-motion in the natural environment is perceived immediately. It has been suggested that this latency relates to the sensory mismatch between visual and vestibular signals at motion onset. Here, we tested three techniques with the potential to reduce sensory mismatch and thereby shorten vection onset latency: noisy galvanic vestibular stimulation (GVS) and bone-conducted vibration (BCV) at the mastoid processes, and body vibration applied to the lower back. In Experiment 1, we examined vection latency for wide-field visual rotations about the roll axis and applied a burst of stimulation at the start of visual motion. Both GVS and BCV reduced vection latency by two seconds compared to the control condition, whereas body vibration had no effect on latency. In Experiment 2, the visual stimulus rotated about the pitch, roll, or yaw axis, and we found a similar facilitation of vection by both BCV and GVS in each case. In a control experiment, we confirmed that air-conducted sound administered through headphones was not sufficient to reduce vection onset latency. Together, the results suggest that noisy vestibular stimulation facilitates vection, likely due to an upweighting of visual information caused by a reduction in vestibular sensory reliability.

