Integrated jerk as an indicator of affinity for artificial agent kinematics: laptop and virtual reality experiments involving index finger motion during two-digit grasping

PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9843
Author(s):  
James Hirose ◽  
Atsushi Nishikawa ◽  
Yosuke Horiba ◽  
Shigeru Inui ◽  
Todd C. Pataky

Uncanny valley research has shown that human likeness is an important consideration when designing artificial agents. It has separately been shown that artificial agents exhibiting human-like kinematics can elicit positive perceptual responses. However, the kinematic characteristics underlying that perception have not been elucidated. This paper proposes kinematic jerk amplitude as a candidate metric for kinematic human likeness, and aims to determine whether a perceptual optimum exists over a range of jerk values. We created minimum-jerk two-digit grasp kinematics in a prosthetic hand model, then added different amplitudes of temporally smooth noise to yield a variety of animations involving different total jerk levels, ranging from maximally smooth to highly jerky. Subjects indicated their perceptual affinity for these animations by simultaneously viewing two different animations side by side, first using a laptop, then separately within a virtual reality (VR) environment. Results suggest that (a) subjects generally preferred smoother kinematics, (b) subjects exhibited a small preference for rougher-than-minimum-jerk kinematics in the laptop experiment, and (c) the preference for rougher-than-minimum-jerk kinematics was amplified in the VR experiment. These results suggest that non-maximally smooth kinematics may be perceptually optimal in robots and other artificial agents.
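
As a rough illustration of the jerk manipulation this abstract describes, the sketch below (not the authors' code; all names and parameters are illustrative) generates a minimum-jerk fingertip trajectory, perturbs it with temporally smooth noise of increasing amplitude, and scores each variant by its integrated squared jerk:

```python
# A minimal sketch, assuming a 1-D fingertip closure and the classic
# Flash & Hogan (1985) minimum-jerk profile; the noise model and the
# parameter values below are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def minimum_jerk(x0, xf, T, n=500):
    """Minimum-jerk position profile from x0 to xf over duration T seconds."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

def integrated_squared_jerk(t, x):
    """Numerically integrate the squared third derivative of position."""
    dt = t[1] - t[0]
    jerk = np.gradient(np.gradient(np.gradient(x, dt), dt), dt)
    return float(np.sum(jerk**2) * dt)

t, x = minimum_jerk(x0=0.0, xf=0.08, T=1.0)   # 8 cm closure in 1 s
rng = np.random.default_rng(0)
for amplitude in (0.0, 0.002, 0.005, 0.01):   # noise amplitude in metres
    noise = gaussian_filter1d(rng.standard_normal(t.size), sigma=15)
    x_noisy = x + amplitude * noise            # temporally smooth roughening
    print(f"amp={amplitude:.3f}  ISJ={integrated_squared_jerk(t, x_noisy):.3g}")
```

Larger noise amplitudes yield larger integrated-squared-jerk values, giving a single scalar along which animations can be ordered from maximally smooth to highly jerky.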

2020 ◽  
Author(s):  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Cesco Willemse ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

Designing artificial agents that can closely imitate human behavior might influence humans to perceive them as intentional agents. Nonetheless, the factors that are crucial for an artificial agent to be perceived as an animated and anthropomorphic being still need to be addressed. In the current study, we investigated some of the factors that might affect the perception of a robot's behavior as human-like or intentional. To this end, seventy-nine participants were exposed to two different behaviors of a humanoid robot under two different instructions. Before the experiment, participants' biases towards robotics as well as their personality traits were assessed. Our results suggest that participants' sensitivity to human-likeness relies more on their expectations than on perceptual cues.


2017 ◽  
Author(s):  
Ruud Hortensius ◽  
Felix Hekele ◽  
Emily S. Cross

Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.


Author(s):  
Daniel Hepperle ◽  
Christian Felix Purps ◽  
Jonas Deuchler ◽  
Matthias Wölfel

The visual representation of human-like entities in virtual worlds is becoming an increasingly important aspect as virtual reality becomes more and more "social". The resemblance of a character's visual representation to a real person, the emotional response to it, and the expectations it raises have been discussed for several decades by scientists from different disciplines. But as with any new technology, the findings may need to be re-evaluated and adapted to new modalities. In this context, we make two contributions that may have implications for how avatars should be represented in social virtual reality applications. First, we determine how default and customized characters of current social virtual reality platforms are rated in terms of human likeness, eeriness, and likability, and whether there is a clear resemblance to a given person. We conclude that the investigated platforms vary strongly in how they represent avatars; common to all is that a clear resemblance is not achieved. Second, we show that the uncanny valley effect is also present in head-mounted displays and, compared to 2D monitors, is even more pronounced.


2021 ◽  
Vol 12 (1) ◽  
pp. 310-335
Author(s):  
Selmer Bringsjord ◽  
Naveen Sundar Govindarajulu ◽  
Michael Giancola

Suppose an artificial agent $a_{\text{adj}}$, as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should $a_{\text{adj}}$ adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents $a_1, a_2, \ldots, a_n$ that sense and report their findings to "adjudicator" agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: $a_{\text{adj}}$ may, for instance, receive a report from $a_1$ that proposition $\phi$ holds, then from $a_2$ that $\neg\phi$ holds, and then from $a_3$ that neither $\phi$ nor $\neg\phi$ should be believed, but rather $\psi$ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
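
The adjudication problem can be made concrete with a toy sketch. The code below is emphatically not the authors' logicist calculus; it only illustrates, with invented names, one simple policy for resolving contradictory reports weighted by stated likelihood levels:

```python
# A toy sketch of report adjudication: the adjudicator sums the likelihood
# support for each polarity of a proposition and keeps the better-supported
# one, suspending judgement on ties. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Report:
    agent: str
    proposition: str   # e.g. "phi"
    negated: bool      # True means the agent reported "not proposition"
    likelihood: int    # higher = more strongly believed

def adjudicate(reports):
    """Return, per proposition, the polarity with the greatest total support."""
    support = {}  # (proposition, negated) -> summed likelihood
    for r in reports:
        key = (r.proposition, r.negated)
        support[key] = support.get(key, 0) + r.likelihood
    verdicts = {}
    for (prop, negated), s in support.items():
        rival = support.get((prop, not negated), 0)
        if s > rival:
            verdicts[prop] = ("not " if negated else "") + prop
        elif s == rival:
            verdicts[prop] = f"suspend judgement on {prop}"
    return verdicts

reports = [
    Report("a1", "phi", False, 2),   # a1: phi holds
    Report("a2", "phi", True, 1),    # a2: not-phi holds, more weakly
    Report("a3", "psi", False, 3),   # a3: psi holds
]
print(adjudicate(reports))  # {'phi': 'phi', 'psi': 'psi'}
```

A real adjudicator of the kind the paper proposes would reason over structured ethical principles through time rather than merely summing scores, but the input/output shape (many possibly inconsistent reports in, one defensible verdict out) is the same.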


2015 ◽  
Vol 24 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Himalaya Patel ◽  
Karl F. MacDorman

Just as physical appearance affects social influence in human communication, it may also affect the processing of advice conveyed through avatars, computer-animated characters, and other human-like interfaces. Although the most persuasive computer interfaces are often the most human-like, they have been predicted to incur the greatest risk of falling into the uncanny valley, the loss of empathy attributed to characters that appear eerily human. Previous studies compared interfaces on the left side of the uncanny valley, namely, those with low human likeness. To examine interfaces with higher human realism, a between-groups factorial experiment was conducted through the internet with 426 midwestern U.S. undergraduates. This experiment presented a hypothetical ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (digitally recorded or computer animated), motion quality (smooth or jerky), and advice (disclose or refrain from disclosing sensitive information). Of these, only the advice changed opinion about the ethical dilemma, even though the animated depiction was significantly eerier than the human depiction. These results indicate that compliance with an authority persists even when using an uncannily realistic computer-animated double.
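
For readers unfamiliar with the design, the sketch below simulates a 2 × 2 × 2 between-groups dataset (invented numbers, not the study's data; column names are hypothetical) and runs the corresponding three-way ANOVA, assuming pandas and statsmodels are available:

```python
# A minimal sketch of analysing a 2 x 2 x 2 between-groups factorial design
# like the one described above, using simulated data in which only the
# advice factor shifts opinion, mirroring the reported result.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 426  # participants, one condition each
df = pd.DataFrame({
    "depiction": rng.choice(["recorded", "animated"], n),
    "motion":    rng.choice(["smooth", "jerky"], n),
    "advice":    rng.choice(["disclose", "refrain"], n),
})
df["opinion"] = rng.normal(0.0, 1.0, n) \
    + np.where(df["advice"] == "disclose", 0.5, 0.0)

model = smf.ols("opinion ~ C(depiction) * C(motion) * C(advice)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # advice should be the only clear effect
```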


2022 ◽  
Author(s):  
Ivan Bouchardet da Fonseca Grebot ◽  
Pedro Henrique Pinheiro Cintra ◽  
Emilly Fátima Ferreira de Lima ◽  
Michella Vaz de Castro ◽  
Rui de Moraes

2018 ◽  
Author(s):  
Jari Kätsyri ◽  
Beatrice de Gelder ◽  
Tapio Takala

The uncanny valley (UV) hypothesis suggests that increasingly human-like robots or virtual characters elicit more familiarity in their observers (positive affinity), with the exception of near-human characters that elicit strong feelings of eeriness (negative affinity). We studied this hypothesis in three experiments with carefully matched images of virtual faces varying from artificial to realistic. We investigated both painted and computer-generated (CG) faces to tap a broad range of human-likeness and to test whether CG faces would be particularly sensitive to the UV effect. Overall, we observed a linear relationship with a slight upward curvature between human-likeness and affinity. In other words, less realistic faces triggered greater eeriness in an accelerating manner. We also observed a weak UV effect for CG faces; however, the least human-like faces elicited much more negative affinity in comparison. We conclude that although CG faces elicit a weak UV effect, this effect is not fully analogous to the original UV hypothesis. Instead, the subjective evaluation curve for face images resembles an uncanny slope more than an uncanny valley. Based on our results, we also argue that subjective affinity should be contrasted against subjective rather than objective measures of human-likeness when testing the UV hypothesis.
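
The reported "linear relationship with a slight upward curvature" amounts to comparing linear and quadratic fits of affinity on subjective human-likeness. A minimal sketch with illustrative numbers (not the study's data):

```python
# Fit degree-1 and degree-2 polynomials of affinity on human-likeness and
# compare residual error; a positive quadratic coefficient with lower SSE
# indicates upward curvature, i.e. an "uncanny slope" rather than a valley.
import numpy as np

humanlikeness = np.linspace(0.0, 1.0, 11)                      # subjective ratings
affinity = -2.0 + 2.5 * humanlikeness + 0.8 * humanlikeness**2  # toy pattern
affinity += np.random.default_rng(2).normal(0.0, 0.05, 11)      # rating noise

for degree in (1, 2):
    coefs = np.polyfit(humanlikeness, affinity, degree)
    resid = affinity - np.polyval(coefs, humanlikeness)
    print(f"degree {degree}: coefs={np.round(coefs, 2)}, "
          f"SSE={np.sum(resid**2):.4f}")
```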

