Human factors issues for automated vehicles

2012 ◽  
Vol 1 (1) ◽  
Author(s):  
Catherine Harvey

2020 ◽  
Author(s):  
Mizanur Rahman ◽  
Ankur Sarker ◽  
Haiying Shen ◽  
Mashrur Chowdhury ◽  
Kakan Dey ◽  
...  

Information-aware connected and automated vehicles (CAVs) have drawn great attention in recent years due to their potentially significant positive impacts on roadway safety and operational efficiency. In this paper, we conduct an in-depth review of three basic and interrelated aspects of a CAV: sensing and communication technologies, human factors, and information-aware controller design. First, we thoroughly discuss the different vehicular sensing and communication technologies, and their protocol stacks, that provide reliable information to the information-aware CAV controller. We also discuss the diverse human factors, such as user comfort, preferences, and reliability, that must inform CAV system design for mass adoption. Then, we review the different layers of a CAV controller (route planning, driving mode execution, and driving mode selection) that account for human factors and for information obtained through connectivity. In addition, we identify the critical challenges in sensing and communication technologies, human factors, and information-aware controller design that must be addressed to support a safe and efficient CAV system while considering user acceptance and comfort. Finally, we discuss promising future research directions for these three aspects to overcome existing challenges and realize a safe and operationally efficient CAV.


Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future work space, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, and it leads to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle's actions decreased drivers' anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, the studies mentioned above largely conveyed simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency. In the present study, we propose an option-centric explanation approach, inspired by research on design rationale. 
Design rationale is an area of design science focusing on the “representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact” (MacLean et al., 1991). The theoretical underpinning of design rationale is that what matters to designers is not just the specific artifact itself but also its other possibilities: why the artifact is designed in a particular way rather than how it might otherwise have been. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance. We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant's ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subject design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. Participants' trust in the intelligent assistant, their confidence in completing the task without the intelligent assistant, and their workload for the whole session were collected, along with their scores for each map. The results showed that, by conveying the intelligent assistant's ability, intent, and decision-making rationale in the option-centric rationale display, participants achieved higher task performance. With all the options displayed, participants had a better understanding and overview of the system; they could therefore use the intelligent assistant more appropriately and earned higher scores. Notably, each participant played only 10 maps during the whole session. 
The advantages of the option-centric rationale display might be more apparent if more rounds were played in the experiment session. Although not significant at the .05 level, there appears to be a trend toward lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by considering the important role of the explanation display, which can help human operators build appropriate trust and improve human-autonomy team performance.
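A between-subject comparison like the one described above is typically analyzed by comparing the two conditions' per-participant scores. The sketch below is not the authors' analysis code; the score values are hypothetical and purely illustrative, with 17 participants per condition to match the 34-participant design, and it computes Welch's t statistic for two independent samples.

```python
# Hedged sketch of a two-condition, between-subjects score comparison.
# NOTE: all data values here are hypothetical, not from the study.
import math
import statistics

# Hypothetical per-participant mean map scores (17 per condition).
with_rationale = [78, 82, 75, 88, 91, 70, 84, 79, 86, 73, 90, 77, 83, 81, 74, 85, 80]
without_rationale = [65, 72, 60, 70, 74, 58, 69, 66, 71, 63, 75, 61, 68, 67, 59, 73, 64]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(with_rationale, without_rationale)
print(f"mean(with) = {statistics.mean(with_rationale):.1f}, "
      f"mean(without) = {statistics.mean(without_rationale):.1f}, t = {t:.2f}")
```

A positive t here would indicate higher scores in the rationale-display condition; in practice the statistic would be compared against the t distribution (e.g., with SciPy) to obtain the p-value judged at the .05 level.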


Author(s):  
Amudha V. Kamaraj ◽  
Joshua E. Domeyer ◽  
John D. Lee

One way to compensate for the limitations of automated vehicles is to use a remote operator as a fallback controller; indeed, this has been proposed for fleet management and intermittent vehicle control. However, existing remote operation applications have demonstrated control challenges, such as latency and limited bandwidth, that inhibit the effectiveness of human operators. Additionally, the human factors challenges that arise when multiple remote operators manage multiple vehicles further complicate these interventions. This paper uses the Systems-Theoretic Process Analysis (STPA) hazard analysis technique to identify system-level issues in the remote operation of automated vehicles. Human factors challenges are identified through the lens of two control loops that link remote drivers, dispatchers, and vehicle automation. These control loops reveal familiar challenges, such as situation awareness and mental model mismatches, as well as novel challenges, such as poorly synchronized and misaligned control.


2020 ◽  
Vol 21 (1) ◽  
pp. 7-29 ◽  
Author(s):  
Ankur Sarker ◽  
Haiying Shen ◽  
Mizanur Rahman ◽  
Mashrur Chowdhury ◽  
Kakan Dey ◽  
...  

2019 ◽  
Vol 30 (2) ◽  
pp. 37-44
Author(s):  
Nebojsa Tomasevic ◽  
Tim Horberry ◽  
Brian Fildes

This study evaluated the behavioural validity of the Monash University Accident Research Centre automation driving simulator for research into the human factors issues associated with automated driving. The study involved both on-road and simulated driving. Twenty participants rated their willingness to resume control of an automated vehicle and their perception of safety in a variety of situations along the drives. Each situation was individually categorised and the ratings were processed. Statistical analysis of the ratings confirmed the behavioural validity of the simulator, in terms of the similarity of the on-road and simulator data.


2020 ◽  
pp. 1-5
Author(s):  
Ganesh Pai ◽  
Sarah Widrow ◽  
Jaydeep Radadiya ◽  
Cole D. Fitzpatrick ◽  
Michael Knodler ◽  
...  
