Effects of User Interfaces on Take-Over Performance: A Review of the Empirical Evidence

Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162
Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and SAE Level 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. In contrast, take-over requests displayed directly on non-driving-related task devices and through augmented reality do not affect take-over time. Additional explanations of the take-over situation, information on the surroundings and the vehicle while driving, and take-over guidance were found to improve situation awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number of studies showed no significant benefit, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.

Author(s):  
Hanna Poranen ◽  
Giancarlo Marafioti ◽  
Gorm Johansen ◽  
Eivind Sæter

A user interface (UI) is the platform that enables interaction between a human and a machine: the visual part of an information device, such as a computer or a piece of software, with which the user interacts. A good user interface design makes operating a machine efficient, safe, and user-friendly in a way that produces the desired result. This paper describes a set of guidelines defined for marine autonomous operations, in which many actors, devices, and sensors interact. The UI should manage and present a large amount of data in a user-friendly manner, ensuring situation awareness for the operator/user. The design guidelines for the user interface consist of both a work-process part and a content part, also called user experience design (UX). The work process consists of four sections: manage, plan, operate, and evaluate, while the content part focuses on how to present the information. Both parts are detailed and discussed and can be taken as a reference for designing user interfaces, in particular for marine autonomous operations.


2018 ◽  
Vol 2 (4) ◽  
pp. 71 ◽  
Author(s):  
Patrick Lindemann ◽  
Tae-Young Lee ◽  
Gerhard Rigoll

Broad access to automated cars (ACs) that can reliably and unconditionally drive in all environments is still some years away. Urban areas pose a particular challenge to ACs, since even perfectly reliable systems may be forced to execute sudden reactive driving maneuvers in hard-to-predict hazardous situations. This may negatively surprise the driver, possibly causing discomfort, anxiety or loss of trust, which might be a risk for the acceptance of the technology in general. To counter this, we suggest an explanatory windshield display interface with augmented reality (AR) elements to support driver situation awareness (SA). It provides the driver with information about the car’s perceptive capabilities and driving decisions. We created a prototype in a human-centered approach and implemented the interface in a mixed-reality driving simulation. We conducted a user study to assess its influence on driver SA. We collected objective SA scores and self-ratings, both of which yielded a significant improvement with our interface in good (medium effect) and in bad (large effect) visibility conditions. We conclude that explanatory AR interfaces could be a viable measure against unwarranted driver discomfort and loss of trust in critical urban situations by elevating SA.


Author(s):  
Xiaomei Tan ◽  
Yiqi Zhang

Conditionally automated vehicles require the out-of-the-loop driver to intervene when the system is unable to handle forthcoming situations, such as exiting a freeway. The takeover request (ToR) for exiting a freeway can be scheduled in advance. Upon a ToR, the driver needs to regain situation awareness (SA) and resume manual control. This study examined how ToR lead time affects driver SA for resuming control and when it is most appropriate to send the ToR for freeway exiting. A web-based, supervised experiment was conducted with 31 participants. Each participant experienced 12 levels of ToR lead time (6, 8, 10, 12, 14, 16, 18, 20, 25, 30, 45, and 60 s). The results showed positive effects of longer ToR lead times (16–60 s) on driver SA for resuming control to exit a freeway in comparison to shorter lead times (6–14 s), and the effects leveled off at 16–30 s.


Information ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 21
Author(s):  
Johannes Ossig ◽  
Stephanie Cramer ◽  
Klaus Bengler

In human-centered research on automated driving, it is common practice to describe vehicle behavior by means of terms and definitions related to non-automated driving. However, some of these definitions are not suitable for this purpose. This paper presents an ontology for automated vehicle behavior which takes into account a large number of existing definitions and previous studies. This ontology is characterized by its applicability to various levels of automated driving and by a clear conceptual distinction between characteristics of vehicle occupants, the automation system, and the conventional characteristics of a vehicle. In this context, the terms ‘driveability’, ‘driving behavior’, ‘driving experience’, and especially ‘driving style’, which are commonly associated with non-automated driving, play an important role. In order to clarify the relationships between these terms, the ontology is integrated into a driver-vehicle system. Finally, the ontology developed here is used to derive recommendations for the future design of automated driving styles and, in general, for further human-centered research on automated driving.


2021 ◽  
Vol 11 (16) ◽  
pp. 7197
Author(s):  
Yourui Tong ◽  
Bochen Jia ◽  
Shan Bao

Warning pedestrians of oncoming vehicles is critical to improving pedestrian safety. Because pedestrians can carry only limited equipment, it is crucial to find an effective solution for delivering warnings to pedestrians in real time. Few studies have focused on warning pedestrians of oncoming vehicles, and fewer still on developing visual warning systems for pedestrians through wearable devices. In this study, various real-time projection algorithms were developed to provide accurate warning information in a timely way. A pilot study was completed to test the algorithms and the user interface design. The projection algorithms can update the warning information and fit it correctly into an easy-to-understand interface. With this system, timely warnings can be sent to pedestrians who have low situational awareness or an obstructed view, protecting them from potential collisions; it works well even when the sightline is blocked by obstructions.


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters’ feedback and reactions to the VR scenario and the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


Author(s):  
HyunJoo Park ◽  
HyunJae Park ◽  
Sang-Hwan Kim

In conditional automated driving, drivers may be required to resume manual driving from automated driving mode after a take-over request (TOR). The objective of the study was to investigate different TOR features that help drivers engage in manual driving effectively, in terms of reaction time, preference, and situation awareness (SA). Five TOR features, including four using a countdown, were designed and evaluated, consisting of combinations of different modalities and codes. Results revealed that a non-verbal sound cue (beep) yielded shorter reaction times, while participants preferred a verbal sound cue (speech). Drivers' SA did not differ across TOR features, but the level of SA was affected by its different component aspects. The results may provide insight into designing multimodal TORs, along with drivers' behavior during take-over tasks.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-29
Author(s):  
Arthur Sluÿters ◽  
Jean Vanderdonckt ◽  
Radu-Daniel Vatavu

Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
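The abstract-to-concrete mapping described in this abstract can be illustrated with a minimal sketch. The rule names, widget choices, and display-configuration labels below are illustrative assumptions, not the authors' actual E3Screen implementation; the point is only how widget-selection rules keyed on the current display configuration turn abstract interaction units into concrete widgets.

```javascript
// Abstract user interface: interaction units following a master-detail
// design pattern, as derived from a domain and a task model.
const abstractUI = [
  { unit: "master", task: "select record", dataType: "list" },
  { unit: "detail", task: "edit record", dataType: "form" },
];

// Hypothetical widget-selection rules, keyed on display configuration.
const widgetRules = {
  "laptop-only": { list: "dropdown", form: "tabbed-form" },
  "lateral-extended": { list: "sidebar-list", form: "full-form" },
};

// Map each abstract interaction unit to a concrete widget for the
// configuration currently in effect.
function toConcreteUI(units, configuration) {
  const rules = widgetRules[configuration];
  return units.map((u) => ({ unit: u.unit, widget: rules[u.dataType] }));
}

console.log(toConcreteUI(abstractUI, "lateral-extended"));
// [ { unit: "master", widget: "sidebar-list" },
//   { unit: "detail", widget: "full-form" } ]
```

When the display is reconfigured (e.g. a lateral panel folded away), the same abstract units are re-mapped under a different rule set, which is what makes the concrete interface "plastic" while the task model stays fixed.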


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when dynamically creating and adjusting nodes within a visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
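The contrast the abstract draws, declarative descriptions versus imperative node creation, can be sketched briefly. The node shape and `render` function below are illustrative assumptions and do not reproduce the actual Guix.js API; they only show how a UI described as plain data can be turned into markup by a single traversal.

```javascript
// Declarative description: the UI is plain data, not a sequence of
// imperative DOM calls.
const view = {
  tag: "div",
  children: [
    { tag: "h1", text: "Sign up" },
    { tag: "input", attrs: { type: "email", placeholder: "Email" } },
    { tag: "button", text: "Submit" },
  ],
};

// Render the description to an HTML string. A real library would build
// live DOM nodes and keep them responsive to layout changes.
function render(node) {
  const attrs = Object.entries(node.attrs || {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = (node.children || []).map(render).join("") + (node.text || "");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

console.log(render(view));
```

Because the description is data, the framework, not the application author, decides how and when nodes are created and adjusted, which is the ergonomic gain over hand-written DOM manipulation that the paper argues for.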

