On the Dual Nature of Transparency and Reliability: Rethinking Factors that Shape Trust in Automation

Author(s):  
Elizabeth Kaltenbach ◽  
Igor Dolgov

Prior literature has found that increasing system reliability and transparency can positively affect operators’ trust in automated systems; however, these factors are typically confounded. In the present study, we separated them by manipulating different stages of automation. Participants engaged in a simulated coffee-manufacturing task using an interface with differing levels of reliability (65% or 95%) and transparency (one line or multiple lines of system display). The Human-Computer Trust Scale (HCTS) and the Trust in Automated Systems Scale (TAS) were used to measure trust. Examining scores on positive-valence TAS items, we observed a novel interaction between transparency and reliability: the combination of high transparency and low reliability negatively impacted trust in the system. In contrast, trust was not harmed by poor reliability when transparency was low, because corrective behaviors were trivial in cost and compensated for the poor reliability, and operators lacked an understanding of the system’s history.
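
As a purely illustrative aside, the reliability-by-transparency interaction reported above is the kind of effect a 2x2 factorial analysis would test. The sketch below uses fabricated trust scores and assumed factor codings (none of it from the study) to show how such an analysis could look in Python with statsmodels.

```python
# Illustrative only: hypothetical per-participant trust scores on the
# positive-valence TAS items, crossed by reliability and transparency.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "reliability":  ["65%"] * 4 + ["95%"] * 4,
    "transparency": ["low", "low", "high", "high"] * 2,
    "trust":        [5.1, 4.8, 3.2, 3.5, 5.3, 5.0, 5.6, 5.4],  # fabricated scores
})

# Two-way ANOVA including the reliability x transparency interaction term
model = ols("trust ~ C(reliability) * C(transparency)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```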

Author(s):  
Igor Dolgov ◽  
Elizabeth K. Kaltenbach

There are numerous ways to measure trust in automation, and each has its advantages and disadvantages. The current experiment evaluated and compared the Trust in Automated Systems Scale (TASS) and the Human-Computer Trust Scale (HCTS). Both the HCTS and TASS showed high internal consistency. While participants’ scores on the HCTS and TASS were highly correlated, the relationship was stronger between the positive-valence items of the TASS and HCTS than between the negative-valence items. Additionally, principal components analyses showed that the TASS had two underlying factors whereas the HCTS had four. Thus, while these trust-in-automation survey instruments are similar, they are also fundamentally different.
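
For readers unfamiliar with the analyses named above, the following hedged sketch shows how internal consistency (Cronbach's alpha) and a principal-components factor count might be computed for a trust questionnaire. The simulated item responses and the Kaiser (eigenvalue > 1) retention rule are assumptions for illustration, not the authors' procedure.

```python
# Simulated survey data: a single latent "trust" trait plus item noise,
# so the 12 items are correlated (stand-ins for HCTS/TASS items).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                 # 100 respondents
items = latent + 0.5 * rng.normal(size=(100, 12))  # 12 correlated items

def cronbach_alpha(x):
    """k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Standardize items, then count components with eigenvalues > 1 (Kaiser criterion)
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
eigenvalues = PCA().fit(z).explained_variance_
print("components retained:", int((eigenvalues > 1).sum()))
```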


Author(s):  
Kristopher Korbelak ◽  
Jeffrey Dressel ◽  
Donald Tweedie ◽  
Whitney Wilson ◽  
Simone Erchov ◽  
...  

Automated systems are not only commonplace but a necessity for completing highly specialized tasks in many operational environments. Problems arise, however, when the automation is used injudiciously. Trust is known to influence how workers use and rely on automated systems, especially when the operational environment is highly complex for the user. The environment in which most Transportation Security Administration (TSA) workers operate is characterized by complexity that often demands the use of automation to complete required tasks. The TSA aims to better understand the influence of trust in automation on operational performance in order to better support its mission and workforce. This paper discusses the methods, findings, and practical implications gleaned from an examination of the role trust plays in human-automation interactions in the operational environment at TSA.


Author(s):  
Chenlan Wang ◽  
Chongjie Zhang ◽  
X. Jessie Yang

Research shows that, over repeated interactions with automation, human operators are able to learn how reliable the automation is and update their trust in it. The goal of the present study is to investigate whether this learning and inference process approximately follows the principle of Bayesian probabilistic inference. First, we applied Bayesian inference to estimate human operators’ perceived system reliability and found high correlations between the Bayesian estimates and the perceived reliability for the majority of participants. We then correlated the Bayesian estimates with human operators’ reported trust and found moderate correlations for a large portion of participants. Our results suggest that human operators’ learning and inference process for automation reliability can be approximated by Bayesian inference.
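
A minimal sketch of the kind of Bayesian updating described above, assuming a Beta-Bernoulli model in which each automation success or failure updates the posterior over reliability; the uniform prior, trial count, and 70% true reliability are illustrative assumptions, not the study's parameters.

```python
# Beta-Bernoulli sketch: each outcome (aid correct = 1, wrong = 0) updates
# the posterior over the aid's reliability; the posterior mean after each
# trial serves as the model's estimate of perceived reliability.
import numpy as np

def bayesian_reliability_estimates(outcomes, alpha=1.0, beta=1.0):
    """Posterior mean reliability after each outcome, from a Beta(alpha, beta) prior."""
    estimates = []
    for outcome in outcomes:
        alpha += outcome        # count successes
        beta += 1 - outcome     # count failures
        estimates.append(alpha / (alpha + beta))
    return estimates

# Example: an aid that is actually 70% reliable, observed over 20 trials
rng = np.random.default_rng(1)
outcomes = rng.binomial(1, 0.7, size=20)
print([round(e, 2) for e in bayesian_reliability_estimates(outcomes)])
```

Per the abstract, trial-by-trial estimates of this sort would then be correlated with participants' reported reliability and trust ratings.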


Author(s):  
Scott Mishler ◽  
Jing Chen ◽  
Edin Sabic ◽  
Bin Hu ◽  
Ninghui Li ◽  
...  

Human trust in automation is widely studied because the level of trust influences the effectiveness of the system (Muir, 1994). It is vital to examine the role that people play and how they interact with the system (Hoff & Bashir, 2015). In the decision-making literature, an interesting phenomenon is the description-experience gap, with the typical finding that experience-based choices underweight small probabilities, whereas description-based choices overweight small probabilities (Hertwig, Barron, Weber, & Erev, 2004; Hertwig & Erev, 2009; Jessup, Bishara, & Busemeyer, 2008). We applied this description-experience gap concept to the study of human-automation interaction and had Amazon Mechanical Turk workers evaluate emails as legitimate or phishing. An anti-phishing warning system provided recommendations to the user with a reliability level of 60%, 70%, 80%, or 90%. Additionally, the way in which reliability information was conveyed was manipulated with two factors: (1) whether the reliability level of the system was stated explicitly (i.e., description); and (2) whether feedback was provided after the user made each decision (i.e., experience). Our results showed that as the reliability of the warning system increased, so did decision accuracy, agreement rate, self-reported trust, and perceived system reliability, consistent with prior research (Lee & See, 2004; Rice, 2009; Sanchez, Fisk, & Rogers, 2004). The increase in performance and trust with increasing reliability indicates that participants were paying attention to and using the automation to make decisions. Feedback was also highly influential in improving performance and establishing trust, but description only affected self-reported trust. The effect of feedback strengthened at higher levels of reliability, showing that individuals benefited most from feedback when the automated warning system was more reliable. Additionally, unlike prior studies that manipulated description and experience/feedback separately (Hertwig, 2012), we varied the description and feedback conditions systematically and discovered an interaction between the two factors. Our results show that feedback is more helpful in situations that do not provide an explicit description of the system reliability than in those that do. An implication of the current results for system design is that feedback should be provided whenever possible. This recommendation is based on the finding that providing feedback benefited both users’ performance and their trust in the system, and on the expectation that systems in use are mostly of high reliability (e.g., > .80). A note for researchers in the field of human trust in automation is that, if only subjective measures of trust are used in a study, providing a description of the system reliability will likely inflate the trust measures.
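
To make the reported measures concrete, the sketch below simulates a warning aid of a given reliability and computes an agreement rate and decision accuracy for a simulated user. The compliance and unaided-accuracy parameters are invented for illustration and are not drawn from the study.

```python
# Hypothetical simulation: an anti-phishing aid with a given reliability flags
# emails; the user follows the aid with some probability and otherwise relies
# on their own (imperfect) judgment. All parameters are invented.
import numpy as np

def simulate_block(n_emails=100, aid_reliability=0.8, compliance=0.8,
                   unaided_accuracy=0.65, seed=0):
    rng = np.random.default_rng(seed)
    truth = rng.binomial(1, 0.5, n_emails)  # 1 = phishing, 0 = legitimate
    aid_says = np.where(rng.random(n_emails) < aid_reliability, truth, 1 - truth)
    own_says = np.where(rng.random(n_emails) < unaided_accuracy, truth, 1 - truth)
    follows = rng.random(n_emails) < compliance
    user_says = np.where(follows, aid_says, own_says)
    return {"agreement_rate": float((user_says == aid_says).mean()),
            "accuracy": float((user_says == truth).mean())}

# Compare the four reliability levels used in the study's design
for reliability in (0.6, 0.7, 0.8, 0.9):
    print(reliability, simulate_block(aid_reliability=reliability))
```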


Author(s):  
Benjamin Noah ◽  
Arathi Sethumadhavan

Human trust in automation has been studied extensively within safety-critical domains (military, aviation, process control, etc.) because harmful consequences are associated with the improper calibration of trust in automated systems in these domains (Parasuraman & Riley, 1997). As such, researchers have worked to identify important factors that help humans build trust in such systems (Hoff & Bashir, 2015). With the explosion of AI in consumer technologies, it is becoming equally critical to understand how humans interact with everyday devices. This study investigated how factors that have been identified as impacting trust in automation in safety-critical domains influence the trust and use of popular digital assistants (Siri, Cortana, Bixby, or Google Now). We conducted an online survey with 278 regular users of digital assistants across three generations (GenX, GenY, and GenZ). The results demonstrate that, even after controlling for dispositional factors (i.e., individual characteristics such as age, culture, gender), GenZ exhibited higher trust in digital assistants than GenX. More interestingly, linear regression analyses revealed a different set of predictors of trust for each generation. Results from this survey have implications for the design of digital assistants.
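
As a loose illustration of the per-generation regression approach mentioned above, the sketch below fits a separate ordinary-least-squares trust model for each generation on fabricated data; the predictors (familiarity, privacy_concern) are hypothetical stand-ins, not the survey's actual measures.

```python
# Fabricated data: generation labels plus two hypothetical predictors of trust.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 278
df = pd.DataFrame({
    "generation": rng.choice(["GenX", "GenY", "GenZ"], size=n),
    "familiarity": rng.normal(size=n),       # assumed predictor
    "privacy_concern": rng.normal(size=n),   # assumed predictor
})
df["trust"] = 0.4 * df["familiarity"] - 0.3 * df["privacy_concern"] + rng.normal(size=n)

# Fit one regression per generation and compare the coefficient patterns
for gen, group in df.groupby("generation"):
    fit = smf.ols("trust ~ familiarity + privacy_concern", data=group).fit()
    print(gen, fit.params.round(2).to_dict())
```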


Author(s):  
James P. Bliss ◽  
Sonya M. Jeans ◽  
Heidi J. Prioux

Researchers and designers have investigated ways to mitigate the consequences of alarm mistrust, including using redundant information sources to ensure response consistency. Research concerning the benefit of this practice has not considered the operator's prior alarm system knowledge or the division of attention among multiple tasks. We investigated the influence of real-time individual alarm validity information and prior alarm system reliability information on primary and alarm task responses. One hundred undergraduate students performed a continuous compensatory tracking task while responding to microcomputer-based alarms. Dependent measures included alarm response frequency, speed, accuracy, and appropriateness, as well as primary task tracking error. Results indicated that participants with real-time alarm validity information responded less frequently, but more appropriately, than those without such information. Participants with prior access to alarm system reliability information responded more frequently than those without such information. The results are discussed as they apply to prior literature and alarm system design.


2017 ◽  
Vol 27 (1) ◽  
pp. 52-68 ◽  
Author(s):  
Olivier Berthod ◽  
Gordon Müller-Seitz

A brief failure of one item on the display of the information system (IS) on Flight AF 447 wrought havoc in the coordination between the pilots and the aircraft, leading to the loss of all 228 lives on board. In this essay, we ask the following question: How can the very instruments supposed to ensure our safety and make organizations more reliable lead a team to destruction? We propose that the imbrication of material and human agencies in such highly automated systems drives an attitude of “mindful indifference” (i.e., the capacity for experienced operators to distinguish problems that could turn into critical ones from problems that can be tolerated on account of the overall system reliability). An abrupt change in this imbrication provoked emotional distress and focused the pilots’ attention toward the machine, instead of triggering an organizational process of sensemaking. We highlight the role of leadership in such situations.

