Collaborative automated systems: Older adults' mental model acquisition and trust in automation

2009 ◽  
Author(s):  
Katherine E. Olson ◽  
Arthur D. Fisk ◽  
Wendy A. Rogers


Author(s):  
Elizabeth Kaltenbach ◽  
Igor Dolgov

Prior literature has found that increasing system reliability and transparency can positively affect operators’ trust in automated systems; however, these factors are typically confounded. In the present study, we separated them by manipulating different stages of automation. Participants engaged in a simulated coffee manufacturing task using an interface with differing levels of reliability (65% or 95%) and transparency (one line or multiple lines of system display). The Human Computer Trust Scale (HCTS) and the Trust in Automated Systems Scale (TAS) were used to measure trust. Examining scores on the positive-valence TAS items, we observed a novel interaction between transparency and reliability: the combination of high transparency and low reliability negatively affected trust in the system. In contrast, poor reliability did not reduce trust when transparency was low, because the cost of the corrective behaviors that compensated for poor reliability was trivial and operators lacked an understanding of the system’s history.


Author(s):  
Kristopher Korbelak ◽  
Jeffrey Dressel ◽  
Donald Tweedie ◽  
Whitney Wilson ◽  
Simone Erchov ◽  
...  

Automated systems are not only commonplace but a necessity for completing highly specialized tasks in many operational environments. Problems arise, however, when the automation is used injudiciously. Trust is known to influence how workers use and rely on automated systems, especially when the operational environment poses a great amount of complexity for the user. The environment in which most Transportation Security Administration (TSA) workers operate is characterized by complexity that often demands the use of automation to complete required tasks. The TSA aims to better understand the influence of trust in automation on operational performance to better support its mission and workforce. This paper will discuss the methods, findings, and practical implications gleaned from an examination of the role trust plays in human-automation interactions in the operational environment at TSA.


2004 ◽  
Vol 30 (2) ◽  
pp. 217-224 ◽  
Author(s):  
D. Kristen Gilbert ◽  
Wendy A. Rogers ◽  
Mary E. Samuelson


Author(s):  
Benjamin Noah ◽  
Arathi Sethumadhavan

Human trust in automation has been studied extensively within safety-critical domains (military, aviation, process control, etc.) because harmful consequences are associated with the improper calibration of trust in automated systems in these domains (Parasuraman & Riley, 1997). As such, researchers have worked to identify important factors that help humans build trust in such systems (Hoff & Bashir, 2015). With the explosion of AI in consumer technologies, it is becoming equally critical to understand how humans interact with everyday devices. This study investigated how factors identified as impacting trust in automation in safety-critical domains influence trust in and use of popular digital assistants (Siri, Cortana, Bixby, or Google Now). We conducted an online survey with 278 regular users of digital assistants across three generations (GenX, GenY, and GenZ). The results demonstrate that, even after controlling for dispositional factors (i.e., individual characteristics such as age, culture, gender), GenZ exhibited higher trust in digital assistants than GenX. More interestingly, linear regression analyses revealed a different set of predictors of trust for each generation. Results from this survey have implications for the design of digital assistants.
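The per-generation regression approach described above can be sketched as separate ordinary-least-squares fits, one per cohort. Everything below is an illustrative assumption, not the survey's data or variables: the predictors (e.g., perceived reliability, familiarity), sample sizes, and coefficients are invented.

```python
import numpy as np

def fit_ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return OLS coefficients [intercept, b1, b2, ...] via least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(42)

# Fabricated stand-in data: two hypothetical predictors of trust per cohort.
coefs = {}
for gen in ("GenX", "GenY", "GenZ"):
    X = rng.normal(size=(50, 2))
    y = 3.0 + 0.8 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=50)
    coefs[gen] = fit_ols(X, y)  # fit a separate trust model per generation
```

Comparing the fitted coefficient vectors across cohorts is one simple way to operationalize "a different set of predictors of trust for each generation."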


Author(s):  
Jieun Lee ◽  
Yusuke Yamani ◽  
Makoto Itoh

Automated technologies have brought a number of benefits to professional domains, expanding the range of complex work environments in which humans can perform optimally. Human–automation trust has become an important consideration when designing automated systems that are acceptable to general users who have no comprehensive knowledge of the systems. Muir and Moray (1996) proposed a model of human–machine trust incorporating predictability, dependability, and faith as predictors of overall trust in machines. Although Muir and Moray (1996) predicted that trust in machines grows from predictability, then dependability, and finally faith, their results suggested the opposite. This study will reexamine their theoretical framework and test which of the three dimensions governs initial trust in automation. Participants will be trained to operate a simulated pasteurization plant, as in Muir and Moray (1996), and will be asked to maximize system performance in the pasteurizing task. We hypothesized that faith governs overall trust early in the interaction with the automated system, then dependability, and finally predictability as lay automation users become more familiar with the system. We attempt to replicate the results of Muir and Moray (1996) and argue that their model should be revised to describe trust development in general automation users.


Author(s):  
Igor Dolgov ◽  
Elizabeth K. Kaltenbach

There are numerous ways to measure trust in automation, and each has its advantages and disadvantages. The current experiment evaluated and compared the Trust in Automated Systems Scale (TASS) and the Human-Computer Trust Scale (HCTS). Both the HCTS and the TASS showed high internal consistency. While participants’ scores on the HCTS and TASS were highly correlated, the relationship was stronger between the positive-valence items of the two scales than between their negative-valence items. Additionally, principal components analyses showed that the TASS had two underlying factors whereas the HCTS had four. Thus, while these trust-in-automation survey instruments are similar, they are also fundamentally different.
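The two analyses this abstract reports, internal consistency and a count of underlying factors, can be sketched with Cronbach's alpha and a Kaiser-criterion (eigenvalue > 1) component count. This is a minimal illustration on synthetic responses driven by a single latent factor, not the actual TASS or HCTS data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def kaiser_components(items: np.ndarray) -> int:
    """Number of principal components with eigenvalue > 1 (Kaiser criterion)."""
    corr = np.corrcoef(items, rowvar=False)
    return int((np.linalg.eigvalsh(corr) > 1.0).sum())

# Synthetic responses: 100 respondents, 4 items sharing one latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = latent + 0.3 * rng.normal(size=(100, 4))

alpha = cronbach_alpha(scores)         # high: items are internally consistent
n_factors = kaiser_components(scores)  # one underlying component here
```

A real scale comparison would run the same two computations on each instrument's item responses; the abstract's "two factors versus four" contrast corresponds to the component count differing between scales.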

