First-Person Perspective Virtual Body Posture Influences Stress: A Virtual Reality Body Ownership Study
PLoS ONE, 2016, Vol 11 (2), pp. e0148060
Author(s): Ilias Bergström, Konstantina Kilteni, Mel Slater
Perception, 2018, Vol 47 (5), pp. 477-491
Author(s): Barbara Caola, Martina Montalti, Alessandro Zanini, Antony Leadbetter, Matteo Martini

Classically, body ownership illusions are triggered by cross-modal synchronous stimulations and hampered by multisensory inconsistencies. Nonetheless, the boundaries of such illusions have proven to be highly plastic. In this immersive virtual reality study, we explored whether it is possible to induce a sense of body ownership over a virtual body part during visuomotor inconsistencies, with or without the aid of concomitant visuo-tactile stimulations. From a first-person perspective, participants watched a virtual tube or an avatar's arm moving, with or without concomitant synchronous visuo-tactile stimulations on their hand. Three different virtual arm/tube speeds were also investigated, while all participants kept their real arms still. The subjective reports show that synchronous visuo-tactile stimulations effectively counteract the effect of visuomotor inconsistencies, but that at slow arm movements a feeling of body ownership can be induced even without concomitant multisensory correspondences. Possible therapeutic implications of these findings are discussed.


2016, Vol 6 (1)
Author(s): Elena Kokkinara, Konstantina Kilteni, Kristopher J. Blom, Mel Slater

DOI: 10.2196/18888, 2020, Vol 8 (3), pp. e18888
Author(s): Susanne M van der Veen, Alexander Stamenkovic, Megan E Applegate, Samuel T Leitkam, Christopher R France, ...

Background: Visual representation of oneself is likely to affect movement patterns. Prior work in virtual dodgeball showed that greater excursion of the ankles, knees, hips, spine, and shoulders occurs when the avatar is presented in the first-person perspective than in the third-person perspective. However, the mode of presentation differed between the two conditions: a head-mounted display was used to present the avatar in the first-person perspective, whereas a 3D television (3DTV) display was used to present the avatar in the third-person perspective. It is therefore unknown whether the changes in joint excursions were driven by the visual display (head-mounted display versus 3DTV) or by avatar perspective during virtual gameplay.

Objective: This study aimed to determine the influence of avatar perspective on joint excursion in healthy individuals playing virtual dodgeball using a head-mounted display.

Methods: Participants (n=29; 15 male, 14 female) performed full-body movements to intercept launched virtual targets in a game of virtual dodgeball using a head-mounted display. Two avatar perspectives were compared during each session of gameplay. A first-person perspective was created by placing the center of the displayed content at the bridge of the participant's nose, while a third-person perspective was created by placing the camera view at the participant's eye level but set 1 m behind the participant avatar. During gameplay, virtual dodgeballs were launched at a constant velocity of 30 m/s toward one of nine locations determined by a combination of three intended impact heights and three directions (left, center, or right) based on subject anthropometrics. Joint kinematics and angular excursions of the ankles, knees, hips, lumbar spine, elbows, and shoulders were assessed.

Results: The changes in joint excursion from initial posture to interception of the virtual dodgeball were averaged across trials. Separate repeated-measures ANOVAs revealed greater excursions of the ankle (P=.010), knee (P=.001), hip (P=.0014), spine (P=.001), and shoulder (P=.001) joints when playing virtual dodgeball in the first-person rather than the third-person perspective. In line with expectations, there was also a significant effect of impact height on joint excursions.

Conclusions: As clinicians develop virtual reality treatment strategies to shape motion in orthopedic populations, it is important to be aware that changes in avatar perspective can significantly influence motor behavior. These data are important for the development of virtual reality assessment and treatment tools, which are becoming increasingly practical for home- and clinic-based rehabilitation.
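The repeated-measures ANOVAs reported above compare each joint's excursion across within-subject conditions. As an illustration only (the data and the function name `rm_anova_1way` are hypothetical, not the authors' analysis code), a one-way repeated-measures F statistic can be computed from a subjects-by-conditions matrix like so:

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA F statistic.

    data: 2-D array, rows = subjects, columns = within-subject conditions
    (e.g. a joint's excursion per participant in first- vs. third-person play).
    """
    n, k = data.shape
    grand = data.mean()
    # Sum of squares for the condition effect (e.g. avatar perspective)
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    # Between-subject variability, partitioned out of the error term
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)
```

The resulting F value, with df = (k-1) and (n-1)(k-1), would then be referred to the F distribution to obtain P values like those reported in the Results.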


2016, Vol 28 (11), pp. 1760-1771
Author(s): Giulia Bucchioni, Carlotta Fossataro, Andrea Cavallo, Harold Mouras, Marco Neppi-Modona, ...

Recent studies show that motor responses similar to those present during one's own pain (the freezing effect) occur when observing pain in others. This finding has been interpreted as the physiological basis of empathy. Alternatively, it may represent the physiological counterpart of an embodiment phenomenon related to the sense of body ownership. We compared the empathy and ownership hypotheses by manipulating the perspective of the observed hand model receiving pain, so that it appeared either in first-person perspective, the one in which embodiment occurs, or in third-person perspective, the one in which we usually perceive others. Motor-evoked potentials (MEPs) elicited by TMS over M1 were recorded from the first dorsal interosseous muscle while participants observed video clips showing (a) a needle penetrating or (b) a Q-tip touching a hand model, presented in either first-person or third-person perspective. We found that a pain-specific inhibition of MEP amplitude (a significantly greater MEP reduction in the "pain" than in the "touch" conditions) pertains only to the first-person perspective and is related to the strength of self-reported embodiment. We interpret this corticospinal modulation according to an "affective" conception of body ownership, suggesting that the body I feel as my own is the body I care more about.


2018, Vol 27 (4), pp. 410-425
Author(s): Kata Szita, Pierre Gander, David Wallstén

Cinematic virtual reality offers 360-degree moving-image experiences that engage the viewer's body, as its position defines the momentary perspective on the surrounding simulated space. While a 360-degree narrative space has been shown to provide highly immersive experiences, it may also affect information intake and the recollection of narrative events. The present study hypothesizes that the immersive quality of cinematic VR induces a first-person perspective in the viewer observing a narrative, in contrast to a camera perspective. A first-person perspective is associated with an increase in emotional engagement, a sensation of presence, and more vivid and accurate recollection of information. To determine these effects, we measured the viewing experiences, memory characteristics, and recollection accuracy of participants watching an animated movie either with a VR headset or on a stationary screen. The comparison revealed that VR viewers experience a higher level of presence in the displayed environment than screen viewers and that their memories of the movie are more vivid, evoke stronger emotions, and are more likely to be recalled from a first-person perspective. Yet VR participants recalled fewer details than screen viewers. Overall, these results show that while cinematic virtual reality viewing involves more immersive and intense experiences, the 360-degree composition can negatively affect comprehension and recollection.


2021, Vol 15
Author(s): Dalila Burin, Ryuta Kawashima

We previously showed that an illusory sense of ownership and agency over a moving virtual body in immersive virtual reality (displayed in a first-person perspective) can trigger subjective and physiological reactions in the participant's real body and, therefore, an acute improvement of cognitive functions after a single session of high-intensity intermittent exercise performed exclusively by one's own virtual body, similar to what happens when we actually do physical activity. Here, besides confirming those results, we aimed to determine whether older adults show a greater improvement after a longer virtual training with similar characteristics. Forty-two healthy older subjects (28 females, average age = 71.71 years) completed a parallel-group randomized controlled trial (RCT; UMIN000039843, umin.ac.jp) that included an adapted version of the virtual training used previously: while sitting, participants observed a virtual body in a first-person perspective (1PP) or a third-person perspective (3PP) performing 20 min of virtual high-intensity intermittent exercise (vHIE), during which the avatar switched between fast and slow walking every 2 min. This was repeated twice a week for 6 weeks. During the vHIE, we measured heart rate and administered questionnaires to evaluate illusory body ownership and agency. Before the intervention, immediately after the first vHIE session, and at the end of the entire intervention, we evaluated cognitive performance on the Stroop task with online recording of hemodynamic activity over the left dorsolateral prefrontal cortex (lDLPFC). While we confirm previous results regarding the virtual illusion and its physiological effects, we did not find significant cognitive or neural improvement immediately after the first vHIE session.
As a novelty, in the 1PP group only, we detected a significant decrease in Stroop response time at the post-intervention assessment compared with baseline; coherently, we found increased lDLPFC activation after the entire intervention. While the current results confirm that the virtual full-body illusion and its physiological consequences extend to older adults, this population may have stronger and more established body representations, so a longer or more intense exposure to such illusions may be necessary to initiate the cascade of events that culminates in improved cognitive performance.
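The vHIE protocol described above (20 min of virtual walking, alternating fast and slow pace every 2 min) can be sketched as a simple schedule generator. This is a minimal illustration; the function name and the choice of a fast-first ordering are assumptions, as the abstract does not state which pace the session starts with:

```python
def vhie_schedule(total_min=20, block_min=2):
    """Alternating fast/slow walking blocks for one vHIE session.

    Returns a list of (pace, duration_min) tuples covering total_min,
    switching pace every block_min. Starting with "fast" is an
    assumption for illustration only.
    """
    paces = ["fast", "slow"]
    n_blocks = total_min // block_min
    return [(paces[i % 2], block_min) for i in range(n_blocks)]
```

With the defaults this yields ten 2-min blocks per session; the 6-week intervention then comprises twelve such sessions (twice a week).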


Perception, 2020, Vol 49 (6), pp. 693-696
Author(s): Marte Roel Lesur, Helena Aicher, Sylvain Delplanque, Bigna Lenggenhager

Bodily self-identification has been shown to be easily altered by spatiotemporally congruent multimodal signals. While such manipulations are mostly studied through visuo-tactile or visuo-motor stimulation, here we investigated whether congruent visuo-olfactory cues might enhance illusory self-identification with an arbitrary object. Using virtual reality, healthy individuals saw a grapefruit from its supposed first-person perspective while it was touched in synchrony with their own body; the touch attempted to replicate what was seen as a soft squeezing of the grapefruit. Crucially, when we additionally presented the smell of grapefruit in synchrony with the squeezing, participants self-identified more strongly with the fruit than when they smelled strawberry.

