Pins & Needles: Towards Limb Disownership in Augmented Reality

2018 ◽  
Author(s):  
Oliver A Kannape ◽  
Ethan JT Smith ◽  
Peter Moseley ◽  
Mark P Roy ◽  
Bigna Lenggenhager

The seemingly stable construct of our bodily self depends on the continued, successful integration of multisensory feedback about our body, rather than on its purely physical composition. Accordingly, pathological disruption of such neural processing is linked to striking alterations of the bodily self, ranging from limb misidentification to disownership, and even to the desire to amputate a healthy limb. While previous embodiment research has relied on experimental setups using supernumerary limbs in variants of the Rubber Hand Illusion, we here used Augmented Reality to directly manipulate the feeling of ownership over one’s own, biological limb. Using a Head-Mounted Display, participants received visual feedback about their own arm from an embodied first-person perspective. In a series of three studies with independent cohorts, we altered embodiment by providing visuotactile feedback that was either synchronous (control condition) or asynchronous (400 ms delay; Real Hand Illusion). During the illusion, participants reported a significant decrease in ownership of their own limb, along with a lowered sense of agency. Consistent with a right-parietal body network, we found increased illusion strength for the left upper limb, as well as a modulation of the feeling of ownership during anodal transcranial direct current stimulation. Extending previous research, these findings demonstrate that a controlled visuotactile conflict about one’s own limb can be used to directly and systematically modulate ownership, without a proxy. This not only corroborates the malleability of body representation but also questions its permanence. These findings warrant further exploration of combined VR and neuromodulation therapies for disorders of the bodily self.
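The asynchronous condition above hinges on delaying the participant's video feed by a fixed 400 ms. A minimal sketch of such a fixed-lag frame buffer is shown below; the class name, frame rate, and warm-up behavior are illustrative assumptions, not details from the paper.

```python
from collections import deque

class DelayedFeed:
    """Delay video frames by a fixed lag (illustrative sketch).

    At 60 fps, a 400 ms delay corresponds to 24 frames
    (0.400 s * 60 frames/s = 24 frames).
    """
    def __init__(self, delay_s=0.400, fps=60):
        self.n = round(delay_s * fps)  # frames of lag
        self.buf = deque()

    def push(self, frame):
        """Feed a captured frame; return the frame to display.

        Until the buffer has filled to the target lag, keep showing
        the oldest frame so that output starts immediately.
        """
        self.buf.append(frame)
        if len(self.buf) > self.n:
            return self.buf.popleft()
        return self.buf[0]

feed = DelayedFeed(delay_s=0.4, fps=60)
shown = [feed.push(i) for i in range(30)]
# shown[25:] == [1, 2, 3, 4, 5]: each displayed frame trails
# the captured frame by the full 24-frame (400 ms) lag
```

The synchronous control condition is simply the same pipeline with `delay_s=0`, which keeps the code path (and any processing latency) identical across conditions.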

2020 ◽  
Author(s):  
Chenggui Fan ◽  
H. Henrik Ehrsson

A controversial and unresolved issue in cognitive neuroscience is whether humans can experience supernumerary limbs as part of their own body. Some previous experiments have claimed that it is possible to elicit supernumerary hand illusions based on modified versions of the rubber hand illusion. However, other studies have provided conflicting results that suggest that only one rubber hand can be perceived as one’s own. To address this issue, we developed a supernumerary hand illusion paradigm that allowed us to disambiguate ownership of individual rubber hands from simultaneous ownership of two fake hands. In our setup, the participant’s real right hand was hidden under a platform, while two identical right rubber hands were placed in parallel on top of the platform in direct view of the participant. We applied synchronous strokes to both rubber hands and the real hand (SS), synchronous strokes to one rubber hand and the real hand and asynchronous strokes to the other model hand (AS and SA) or asynchronous strokes to both fake hands in relation to the real hand (AA). Our results demonstrate that a genuine illusion of owning two rubber hands can be elicited and that such a supernumerary hand illusion can be isolated from the sense of ownership of a single rubber hand both in terms of questionnaire ratings and threat-evoked skin conductance responses (SCRs). These findings advance our knowledge about the dynamic flexibility and fundamental constraints of body representation and emphasize the importance of correlated afferent signals for causal inference in body ownership.
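The four stroking conditions form a 2 × 2 factorial design over the two rubber hands. The sketch below enumerates them; the mapping of each label's first and second letter to a particular rubber hand, and the asynchrony value, are assumptions for illustration and not taken from the paper.

```python
# Each label gives (rubber hand 1, rubber hand 2) stroking synchrony
# relative to the hidden real hand. Which physical hand is "1" vs "2"
# is an assumption here; the abstract does not specify.
CONDITIONS = {
    "SS": ("sync", "sync"),    # both rubber hands in sync with real hand
    "AS": ("async", "sync"),   # one rubber hand asynchronous
    "SA": ("sync", "async"),
    "AA": ("async", "async"),  # both asynchronous
}

def stroke_offset_ms(mode, async_delay_ms=500):
    """Visuotactile offset for one rubber hand: 0 ms when synchronous,
    a placeholder delay when asynchronous (the abstract does not give
    the exact asynchrony used)."""
    return 0 if mode == "sync" else async_delay_ms
```

The key contrast in the paper is SS versus AS/SA: only SS should elicit simultaneous ownership of both fake hands, so disambiguating it from single-hand ownership requires all four cells.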


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Doerte Kuhrt ◽  
Natalie R. St. John ◽  
Jacob L. S. Bellmund ◽  
Raphael Kaplan ◽  
Christian F. Doeller

Advances in virtual reality (VR) technology have greatly benefited spatial navigation research. By presenting space in a controlled manner, changing aspects of the environment one at a time or manipulating the gain from different sensory inputs, the mechanisms underlying spatial behaviour can be investigated. In parallel, a growing body of evidence suggests that the processes involved in spatial navigation extend to non-spatial domains. Here, we leverage VR technology advances to test whether participants can navigate abstract knowledge. We designed a two-dimensional quantity space—presented using a head-mounted display—to test if participants can navigate abstract knowledge using a first-person perspective navigation paradigm. To investigate the effect of physical movement, we divided participants into two groups: one walking and rotating on a motion platform, the other group using a gamepad to move through the abstract space. We found that both groups learned to navigate using a first-person perspective and formed accurate representations of the abstract space. Interestingly, navigation in the quantity space resembled behavioural patterns observed in navigation studies using environments with natural visuospatial cues. Notably, both groups demonstrated similar patterns of learning. Taken together, these results imply that both self-movement and remote exploration can be used to learn the relational mapping between abstract stimuli.


2020 ◽  
Vol 10 (1) ◽  
pp. 55
Author(s):  
Alexey Tumialis ◽  
Alexey Smirnov ◽  
Kirill Fadeev ◽  
Tatiana Alikovskaia ◽  
Pavel Khoroshikh ◽  
...  

The perspective from which one perceives one’s own action affects its speed and accuracy. In the present study, we investigated changes in accuracy and kinematics when subjects threw darts from a first-person perspective and from third-person perspectives with varying angles of view. To model the third-person perspective, subjects viewed themselves and the scene through a virtual reality head-mounted display (VR HMD). The scene was supplied by a video feed from a camera located above and behind the subjects, offset 0, 20, or 40 degrees to their right. The 28 subjects wore a motion-capture suit to register right-hand displacement, velocity, and acceleration, as well as torso rotation, during the dart throws. The results indicated that mean accuracy shifted in the direction opposite to the change of camera location along the vertical axis and in the congruent direction along the horizontal axis. Kinematic data revealed a smaller angle of torso rotation to the left in all third-person conditions before and during the throw. The amplitude, speed, and acceleration of the hand were lower in the third-person conditions than in the first-person condition, both before the peak velocity of the hand moving toward the target and after the peak velocity while lowering the hand. Moreover, the hand-movement angle was smaller in the third-person conditions with 20- and 40-degree viewing angles than in the first-person condition just preceding the time of peak velocity, and the difference between conditions predicted the changes in mean throwing accuracy. Thus, the results of this study reveal that the subject’s perceived viewpoint location contributes to the transformation of the motor program.
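The third-person camera placements described above can be sketched as a rotation about the subject's vertical axis. The geometry below is a minimal illustration assuming the subject faces +y; the camera distance and height are placeholders, since the abstract does not report them.

```python
import math

def camera_position(distance, angle_deg, height):
    """Place a third-person camera behind a subject facing +y,
    swung toward the subject's right by angle_deg around the
    vertical axis. Illustrative geometry only; distance and
    height are assumed, not taken from the paper."""
    a = math.radians(angle_deg)
    x = distance * math.sin(a)   # lateral offset to the right
    y = -distance * math.cos(a)  # behind the subject
    return (x, y, height)

# The three camera conditions: 0, 20, and 40 degrees to the right.
views = {deg: camera_position(2.0, deg, 1.7) for deg in (0, 20, 40)}
```

At 0 degrees the camera sits directly behind the subject; increasing the angle moves it rightward along an arc at constant distance, which is what shifts the apparent first-/third-person viewpoint conflict.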


10.2196/18888 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e18888
Author(s):  
Susanne M van der Veen ◽  
Alexander Stamenkovic ◽  
Megan E Applegate ◽  
Samuel T Leitkam ◽  
Christopher R France ◽  
...  

Background: Visual representation of oneself is likely to affect movement patterns. Prior work in virtual dodgeball showed that greater excursion of the ankles, knees, hips, spine, and shoulders occurs when the avatar is presented in the first-person perspective compared to the third-person perspective. However, the mode of presentation differed between the two conditions: a head-mounted display was used to present the avatar in the first-person perspective, while a 3D television (3DTV) display was used to present the avatar in the third person. Thus, it is unknown whether the changes in joint excursions were driven by the visual display (head-mounted display versus 3DTV) or by avatar perspective during virtual gameplay.
Objective: This study aimed to determine the influence of avatar perspective on joint excursion in healthy individuals playing virtual dodgeball using a head-mounted display.
Methods: Participants (n=29, 15 male, 14 female) performed full-body movements to intercept launched virtual targets in a game of virtual dodgeball using a head-mounted display. Two avatar perspectives were compared during each session of gameplay. A first-person perspective was created by placing the center of the displayed content at the bridge of the participant’s nose, while a third-person perspective was created by placing the camera view at the participant’s eye level but set 1 m behind the participant avatar. During gameplay, virtual dodgeballs were launched at a consistent velocity of 30 m/s to one of nine locations determined by a combination of three intended impact heights and three directions (left, center, or right) based on subject anthropometrics. Joint kinematics and angular excursions of the ankles, knees, hips, lumbar spine, elbows, and shoulders were assessed.
Results: The changes in joint excursion from initial posture to interception of the virtual dodgeball were averaged across trials. Separate repeated-measures ANOVAs revealed greater excursions of the ankle (P=.010), knee (P=.001), hip (P=.0014), spine (P=.001), and shoulder (P=.001) joints while playing virtual dodgeball in the first-person versus the third-person perspective. In line with expectations, there was a significant effect of impact height on joint excursions.
Conclusions: As clinicians develop treatment strategies in virtual reality to shape motion in orthopedic populations, it is important to be aware that changes in avatar perspective can significantly influence motor behavior. These data are important for the development of virtual reality assessment and treatment tools that are becoming increasingly practical for home- and clinic-based rehabilitation.
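The nine launch targets described in the methods are the full crossing of three impact heights and three directions, all at a fixed ball speed. A minimal sketch of that design grid follows; the height and direction labels are placeholders, since the actual values were scaled to each subject's anthropometrics.

```python
from itertools import product

# Illustrative 3 x 3 target grid. Labels are placeholders; in the
# study, impact heights were set from subject anthropometrics.
HEIGHTS = ("low", "middle", "high")
DIRECTIONS = ("left", "center", "right")
BALL_SPEED_MS = 30.0  # launch speed in m/s, as stated in the abstract

targets = [
    {"height": h, "direction": d, "speed": BALL_SPEED_MS}
    for h, d in product(HEIGHTS, DIRECTIONS)
]
# len(targets) == 9: one launch condition per height/direction pair
```

Holding speed constant across all nine cells means any difference in joint excursion can be attributed to target location and avatar perspective rather than ball kinematics.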


Author(s):  
Roland Pfister ◽  
Annika L. Klaffehn ◽  
Andreas Kalckert ◽  
Wilfried Kunde ◽  
David Dignath

Body representations are readily expanded based on sensorimotor experience. A dynamic view of body representations, however, holds that these representations can not only be expanded but also narrowed down, by disembodying elements of the body representation that are no longer warranted. Here we induced illusory ownership in terms of a moving rubber hand illusion and studied the maintenance of this illusion across different conditions. We observed ownership experience to decrease gradually unless participants continued to receive confirmatory multisensory input. Moreover, a single instance of multisensory mismatch – a hammer striking the rubber hand but not the real hand – triggered substantial and immediate disembodiment. Together, these findings support and extend previous theoretical efforts to model body representations through basic mechanisms of multisensory integration. They further support an updating model suggesting that embodied entities fade from the body representation if they are not refreshed continuously.


2019 ◽  
pp. 24-65
Author(s):  
R. Jay Wallace

This chapter looks at the issue of the normative significance of moral requirements in the first-person perspective of deliberation. Moral conclusions are customarily treated as considerations that matter within an agent's practical decision-making. That a course of action would be impermissible, for instance, or morally the right thing to do, are conclusions that appear to have direct relevance for practical deliberation, which agents who are reasoning correctly will take appropriately into account in planning their future activities. The philosophical problem in this area is often understood to be the problem of making sense of the reason-giving force of morality. That is, an account of moral rightness or permissibility should shed light on the standing of these considerations as reasons for action, which count for and against actions in the first-person perspective of agency. However, this conventional understanding seriously underdescribes the challenge that faces a philosophical account of morality.


2012 ◽  
Vol 25 (0) ◽  
pp. 197
Author(s):  
Adria E. N. Hoover ◽  
Laurence R. Harris

We have previously shown that people are more sensitive at detecting asynchrony between a self-generated movement and delayed visual feedback when the perspective of the movement matches the ‘natural view’, suggesting an internal, visual, canonical body representation (Hoover and Harris, 2011). Is there a similar variation in sensitivity for parts of the body that cannot be seen from a first-person perspective? To test this, participants made movements with their hands and head (viewing their face or the back of their head) under four viewing conditions: (1) the natural (or direct) view, (2) mirror-reversed, (3) inverted, and (4) inverted and mirror-reversed. Participants indicated which of two periods (one with a minimum delay, the other with an added delay of 33–264 ms) was delayed, and their sensitivity to delay was calculated. A significant linear trend was found when comparing sensitivity to detect cross-modal asynchrony in the ‘natural’ or ‘direct’ view condition across body parts: sensitivity was greatest when viewing body parts seen most often (hands), intermediate for body parts seen only indirectly (moving the head while viewing the face), and least for body parts that are never seen at all (moving the head while viewing the back of the head). Further, dependency on viewpoint was most evident for body parts that are seen most often or indirectly, but not for body parts that are never seen. Results are discussed in terms of a visual representation of the body.


2013 ◽  
Vol 10 (85) ◽  
pp. 20130300 ◽  
Author(s):  
Joan Llobera ◽  
M. V. Sanchez-Vives ◽  
Mel Slater

In the rubber hand illusion, tactile stimulation seen on a rubber hand, that is synchronous with tactile stimulation felt on the hidden real hand, can lead to an illusion of ownership over the rubber hand. This illusion has been shown to produce a temperature decrease in the hidden hand, suggesting that such illusory ownership produces disownership of the real hand. Here, we apply immersive virtual reality (VR) to experimentally investigate this with respect to sensitivity to temperature change. Forty participants experienced immersion in a VR with a virtual body (VB) seen from a first-person perspective. For half the participants, the VB was consistent in posture and movement with their own body, and in the other half, there was inconsistency. Temperature sensitivity on the palm of the hand was measured before and during the virtual experience. The results show that temperature sensitivity decreased in the consistent compared with the inconsistent condition. Moreover, the change in sensitivity was significantly correlated with the subjective illusion of virtual arm ownership but modulated by the illusion of ownership over the full VB. This suggests that a full body ownership illusion results in a unification of the virtual and real bodies into one overall entity—with proprioception and tactile sensations on the real body integrated with the visual presence of the VB. The results are interpreted in the framework of a ‘body matrix’ recently introduced into the literature.


2004 ◽  
Vol 16 (6) ◽  
pp. 988-999 ◽  
Author(s):  
Perrine Ruby ◽  
Jean Decety

Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first-person) perspective or the (third-person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar and somatosensory cortices and the right inferior parietal lobe are crucial in the process of self/other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.


2020 ◽  
Vol 8 (3) ◽  
pp. 137-146
Author(s):  
John V. Pavlik

Drones are shaping journalism in a variety of ways including in the production of immersive news content. This article identifies, describes and analyzes, or maps out, four areas in which drones are impacting immersive news content. These include: 1) enabling the possibility of providing aerial perspective for first-person perspective flight-based immersive journalism experiences; 2) providing geo-tagged audio and video for flight-based immersive news content; 3) providing the capacity for both volumetric and 360 video capture; and 4) generating novel content types or content based on data acquired from a broad range of sensors beyond the standard visible light captured via video cameras; these may be a central generator of unique experiential media content beyond visual flight-based news content.

