Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface
2018, Vol. 2 (4), pp. 71
Author(s): Patrick Lindemann, Tae-Young Lee, Gerhard Rigoll

Broad access to automated cars (ACs) that can reliably and unconditionally drive in all environments is still some years away. Urban areas pose a particular challenge to ACs, since even perfectly reliable systems may be forced to execute sudden reactive driving maneuvers in hard-to-predict hazardous situations. This may negatively surprise the driver, possibly causing discomfort, anxiety, or loss of trust, which might put the acceptance of the technology at risk in general. To counter this, we suggest an explanatory windshield display interface with augmented reality (AR) elements to support driver situation awareness (SA). It provides the driver with information about the car's perceptive capabilities and driving decisions. We created a prototype following a human-centered approach and implemented the interface in a mixed-reality driving simulation. We conducted a user study to assess its influence on driver SA. We collected objective SA scores and self-ratings, both of which yielded a significant improvement with our interface in good (medium effect) and in bad (large effect) visibility conditions. We conclude that explanatory AR interfaces could be a viable measure against unwarranted driver discomfort and loss of trust in critical urban situations by elevating SA.
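The reported medium and large effects can be made concrete with a standard pooled-variance Cohen's d; a minimal Python sketch, with all SA scores invented purely for illustration (the study's actual data are not reproduced here):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical SA scores (0-100) without and with an explanatory AR interface
baseline = [55, 60, 52, 58, 61, 57]
ar_ui    = [68, 72, 65, 70, 74, 69]
d = cohens_d(ar_ui, baseline)
```

By common convention, d ≈ 0.5 is read as a medium effect and d ≥ 0.8 as a large one, which is the scale the abstract's "medium effect" and "large effect" labels refer to.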

Information, 2021, Vol. 12 (4), pp. 162
Author(s): Soyeon Kim, René van Egmond, Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces (UIs) in automated driving. Our main selection criterion was how the UI affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and SAE Level 4). We categorized UI factors from an automated-vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or through augmented reality do not affect take-over time. Additional explanations of the take-over situation, information on the surroundings and vehicle while driving, and take-over guidance information were found to improve situation awareness. Hence, we conclude that advanced UIs can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts across studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on these findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.


Author(s): Nayara de Oliveira Faria, Coleman Merenda, Richard Greatbatch, Kyle Tanous, Chihiro Suga, ...

In this paper, we present a user study of an advanced driver-assistance system (ADAS) that uses augmented reality (AR) cues to highlight pedestrians and vehicles when approaching intersections of varying complexity. Our major goal is to understand the relationship between the presence or absence of AR cues, driver-initiated take-over rates, and glance behavior when using an SAE Level 2 autonomous vehicle. To this end, a user study with eight participants was carried out on a medium-fidelity driving simulator. Overall, we found that AR cues are a promising means to increase system transparency, drivers' situation awareness, and trust in the system. Yet, the dynamic allocation of visual attention during partially automated driving remains challenging for researchers, as much remains to be understood about when AR cues become a distractor instead of an attention guide.
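Glance behavior of the kind analyzed in such studies is often summarized as the proportion of fixed-rate eye-tracker samples falling on each area of interest (AOI); a minimal sketch, where the AOI labels and sample log are hypothetical, not taken from the study:

```python
from collections import Counter

def glance_proportions(aoi_samples):
    """Proportion of gaze samples per area of interest (AOI).

    Assumes a fixed sampling rate, so sample counts are
    proportional to dwell time.
    """
    counts = Counter(aoi_samples)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}

# Hypothetical eye-tracker log: one AOI label per sample
samples = ["road", "road", "ar_cue", "road", "ndrt", "road", "ar_cue", "road"]
props = glance_proportions(samples)  # e.g. props["road"] == 0.625
```

Comparing such proportions between AR and no-AR conditions is one straightforward way to quantify whether a cue guides attention to the road or draws it away.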


Author(s): Franck Techer, Luciano Ojeda, David Barat, Jean-Yves Marteau, Félicie Rampillon, ...

2021, Vol. 18 (1), pp. 172988142097854
Author(s): Eduardo Jose Fabris, Vicenzo Abichequer Sangalli, Leonardo Pavanatto Soares, Márcio Sarroglia Pinho

Unmanned ground vehicles are usually deployed in situations where it is too dangerous or not feasible to have an operator onboard. One challenge faced when such vehicles are teleoperated is maintaining high situation awareness, due to factors such as camera limitations, the characteristics of network transmission, and the lack of other sensory information such as sounds and vibrations. Situation awareness refers to the understanding of the information, events, and actions that will impact the execution and objectives of the tasks in the present and near future of the vehicle's operation. This work investigates how the simultaneous use of immersive telepresence and mixed reality impacts the operator's situation awareness and navigation performance. A user study was performed to compare our proposed approach with a traditional unmanned vehicle control station. Quantitative data obtained from the vehicle's behavior and the situation awareness global assessment technique (SAGAT) were used to analyze these impacts. Results provide evidence that our approach is relevant when the task requires detailed observation of the surroundings, leading to higher situation awareness and navigation performance.
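The situation awareness global assessment technique mentioned above freezes the simulation and scores the operator on how many probe queries about the current situation they answer correctly; a minimal sketch of that scoring step, with query names and answers invented for illustration:

```python
def sagat_score(responses, ground_truth):
    """Fraction of SA queries answered correctly at one simulation freeze.

    Unanswered queries count as incorrect.
    """
    correct = sum(1 for query, truth in ground_truth.items()
                  if responses.get(query) == truth)
    return correct / len(ground_truth)

# Hypothetical freeze-probe queries and the simulator's ground truth
ground_truth = {"obstacle_ahead": True,
                "vehicle_heading": "north",
                "nearest_threat": "left"}
responses = {"obstacle_ahead": True,
             "vehicle_heading": "north",
             "nearest_threat": "right"}
score = sagat_score(responses, ground_truth)  # 2 of 3 correct
```

Averaging such scores over several freezes per participant yields the kind of quantitative SA measure the study compares between interface conditions.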


2021, Vol. 11 (1)
Author(s): Jordan Navarro, Otto Lappi, François Osiurak, Emma Hernout, Catherine Gabaude, ...

Active visual scanning of the scene is a key task element in all forms of human locomotion. In the field of driving, models of steering (lateral control) and speed adjustment (longitudinal control) are largely based on drivers' visual inputs. Despite the knowledge gained on gaze behaviour behind the wheel, our understanding of the sequential aspects of the gaze strategies that actively sample that input remains restricted. Here, we apply scan path analysis to investigate sequences of visual scanning in manual and highly automated simulated driving. Five stereotypical visual sequences were identified under manual driving: forward polling (i.e. far-road explorations), guidance, backwards polling (i.e. near-road explorations), scenery, and speed-monitoring scan paths. The previously undocumented backwards polling scan paths were the most frequent. Under highly automated driving, the relative frequency of backwards polling scan paths decreased, the relative frequency of guidance scan paths increased, and automation-supervision-specific scan paths appeared. The results shed new light on the gaze patterns engaged while driving. Methodological and empirical questions for future studies are discussed.
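Scan path analysis of this kind typically starts from counts of transitions between areas of interest (AOIs), with consecutive fixations on the same AOI collapsed into a single visit; a minimal sketch using a hypothetical fixation sequence (the AOI names are illustrative, not the paper's coding scheme):

```python
from collections import Counter

def transition_counts(scanpath):
    """Count ordered AOI-to-AOI transitions in a fixation sequence.

    Consecutive fixations on the same AOI are collapsed, so a long
    dwell counts as a single visit.
    """
    collapsed = [scanpath[0]] + [aoi for prev, aoi in zip(scanpath, scanpath[1:])
                                 if aoi != prev]
    return Counter(zip(collapsed, collapsed[1:]))

# Hypothetical fixation sequence over road-scene AOIs
path = ["far_road", "near_road", "near_road", "mirror", "far_road", "near_road"]
counts = transition_counts(path)  # e.g. ("far_road", "near_road") occurs twice
```

Recurring high-frequency transition patterns in such counts are the raw material from which stereotypical sequences like "forward polling" or "backwards polling" can be identified.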

