Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality

2014 ◽  
Vol 18 (1) ◽  
pp. 121-128 ◽  
Author(s):  
Zahra M Aghajan ◽  
Lavanya Acharya ◽  
Jason J Moore ◽  
Jesse D Cushman ◽  
Cliff Vuong ◽  
...  

Dorsal hippocampal neurons provide an allocentric map of space, characterized by three key properties. First, their firing is spatially selective, termed a rate code. Second, as animals traverse place fields, neurons sustain elevated firing rates for long periods; however, this property has received little attention. Third, the theta phase of spikes within this sustained activity varies with the animal's location, termed phase precession or a temporal code. The precise relationship between these properties and the mechanisms governing them are not understood, although distal visual cues (DVC) are thought to be sufficient to reliably elicit them. Hence, we measured rat CA1 neurons' activity during random foraging in two-dimensional virtual reality (VR), where only DVC provide consistent allocentric location information, and compared it with their activity in the real world (RW). Surprisingly, we found little spatial selectivity in VR. This is in sharp contrast to the robust spatial selectivity commonly seen in one-dimensional RW and VR, or in two-dimensional RW. Despite this, neurons in VR generated approximately two-second-long phase-precessing spike sequences, termed "hippocampal motifs". Motifs, and "motif fields" (the aggregation of all motifs of a neuron), had qualitatively similar properties in RW and VR, including theta-scale temporal coding, but motifs were far less spatially localized in VR. These results suggest that intrinsic network mechanisms generate temporally coded hippocampal motifs, which can be dissociated from their spatial selectivity. Further, DVC alone are insufficient to localize motifs spatially and generate a robust rate code.
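The phase precession described above is, at its simplest, a mapping from an animal's position within a place field to the theta phase of spiking. A minimal sketch of the standard linear position-to-phase model follows; this is an illustration of the concept, not the authors' analysis, and the function name, parameters, and the linear form are assumptions:

```python
import numpy as np

def phase_precession(positions, field_center, field_width,
                     entry_phase=360.0, exit_phase=0.0):
    """Map positions within a place field to theta spike phases.

    Classic linear phase-precession model: a spike fired at field
    entry occurs late in the theta cycle, and its phase advances
    steadily as the animal traverses the field.
    """
    start = field_center - field_width / 2.0
    # normalized position within the field, clipped to [0, 1]
    frac = np.clip((np.asarray(positions, dtype=float) - start) / field_width,
                   0.0, 1.0)
    return entry_phase + frac * (exit_phase - entry_phase)

# Example: a 20 cm field centered at 50 cm; the phase advances from
# 360 deg at field entry, through 180 deg at the center, to 0 deg at exit.
phases = phase_precession([40.0, 50.0, 60.0],
                          field_center=50.0, field_width=20.0)
```

Under this model, the rate code (where the field is) and the temporal code (the position-to-phase slope) are separate parameters, which is one way to picture the dissociation the abstract reports.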


2020 ◽  
Author(s):  
Mohammad H Babini ◽  
Vladimir V Kulish ◽  
Hamidreza Namazi

BACKGROUND: Education and learning are among the most important goals of all universities. To this end, lecturers use various tools to capture students' attention and improve their learning ability. Virtual reality refers to the subjective sensory experience of being immersed in a computer-mediated world, and has recently been implemented in learning environments.

OBJECTIVE: The aim of this study was to analyze the effect of a virtual reality condition on students' learning ability and physiological state.

METHODS: Students were shown 6 sets of videos (3 videos in a two-dimensional condition and 3 videos in a three-dimensional condition), and their learning ability was assessed with a subsequent questionnaire. In addition, we analyzed the reaction of the students' brains and facial muscles during both viewing conditions and used fractal theory to investigate their attention to the videos.

RESULTS: The learning ability of students was greater in the three-dimensional condition than in the two-dimensional condition. In addition, analysis of the physiological signals showed that students paid more attention to the three-dimensional videos.

CONCLUSIONS: A virtual reality condition enhances students' learning ability more than a conventional two-dimensional condition. The analytical approach of this study can be extended to evaluate other physiological signals of subjects in a virtual reality condition.
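The abstract says attention was quantified with "fractal theory" but does not name the measure. A common choice for EEG-style signals is the Higuchi fractal dimension; the sketch below implements that standard measure as an illustration under this assumption, and is not the study's actual pipeline:

```python
import numpy as np

def higuchi_fd(signal, kmax=8):
    """Higuchi fractal dimension of a 1-D time series.

    Values near 1 indicate a smooth, regular signal; values near 2
    indicate a highly irregular one. EEG studies often use this as
    a proxy for attention or engagement.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        curve_lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x_m^k
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            # normalization factor from Higuchi's original method
            length *= (n - 1) / ((len(idx) - 1) * k)
            curve_lengths.append(length / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(curve_lengths)))
    # the fractal dimension is the slope of log L(k) vs. log(1/k)
    slope, _ = np.polyfit(log_k, log_l, 1)
    return slope
```

A straight line yields a dimension of 1, and Gaussian white noise yields a value close to 2; real physiological signals fall in between.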


Author(s):  
Yuwei Li ◽  
David Donghyun Kim ◽  
Brian Anthony

Abstract
We present HapticWall, an encountered-type, motor-actuated, vertical two-dimensional system that enables both small- and large-scale physical interactions in virtual reality. HapticWall consists of a motor-actuated vertical two-dimensional gantry system that powers the physical proxy for its virtual counterpart. The physical proxy, combined with the HapticWall system, can provide both small- and large-scale haptic feedback for virtual reality in the vertical space, including wall-like haptic feedback and interactions. We created two virtual reality applications to demonstrate the HapticWall system and collected preliminary user feedback to evaluate its performance and limitations. The results of our study are presented in this paper. The outcome of this research will provide a better understanding of multi-scale haptic interfaces in the vertical space for virtual reality and guide the future development of the HapticWall system.
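An encountered-type system like the one described must continuously move its physical proxy to wherever the user is about to touch the virtual wall. A minimal sketch of that mapping, assuming a simple linear calibration between the VR wall extent and the gantry's travel range (the function, parameter names, and the linear mapping are illustrative; the paper does not describe HapticWall's control code):

```python
def vr_to_gantry(contact_x, contact_y, workspace, gantry_limits):
    """Map a virtual-wall contact point to gantry target coordinates.

    workspace:     ((vx0, vx1), (vy0, vy1)) extent of the virtual wall
    gantry_limits: ((gx0, gx1), (gy0, gy1)) travel range of the actuators
    Targets are clamped so the proxy never runs past the rails.
    """
    (vx0, vx1), (vy0, vy1) = workspace
    (gx0, gx1), (gy0, gy1) = gantry_limits
    # normalize the contact point into [0, 1] within the virtual wall
    u = (contact_x - vx0) / (vx1 - vx0)
    v = (contact_y - vy0) / (vy1 - vy0)
    # clamp, then rescale into gantry units
    gx = gx0 + min(max(u, 0.0), 1.0) * (gx1 - gx0)
    gy = gy0 + min(max(v, 0.0), 1.0) * (gy1 - gy0)
    return gx, gy
```

In a real controller this target would feed a motion-planning loop that positions the proxy before the user's hand arrives; the clamping step is what keeps an out-of-range virtual contact from commanding the motors beyond their limits.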


Author(s):  
I Made Ardwi Pradnyana ◽  
I Ketut Resika Arthana ◽  
I Gusti Bagus Hari Sastrawan

Delivering animal-themed learning materials, especially about wild animals, to early-childhood students is a challenge for teachers. Two-dimensional media in the form of monotonous images can reduce children's interest in learning, while bringing in wild animals directly or taking the children to the zoo requires considerable cost and time and can pose safety risks. Based on these problems, the authors developed an Android-based application that presents fourteen species of wild animals in 3D format using Virtual Reality (VR) technology. The authors developed the application using development research methods with the ADDIE model. The developed VR application can display wild animal animations complete with the sounds and environment of each habitat, as well as narrated descriptions and information about the animals' food, all viewable in 3D and VR modes. The test results showed that the application received a positive response from users, especially children at TK Negeri Pembina Singaraja. The average score on the user response test was 88.50%, which falls in the "very good" category: children can identify the types of wild animals, their movements, their sounds, and their habitats, and can use the application easily.


2009 ◽  
Vol 364 (1521) ◽  
pp. 1193-1201 ◽  
Author(s):  
John Lisman ◽  
A.D. Redish

Recordings of rat hippocampal place cells have provided information about how the hippocampus retrieves memory sequences. One line of evidence has to do with phase precession, a process organized by theta and gamma oscillations. This precession can be interpreted as the cued prediction of the sequence of upcoming positions. In support of this interpretation, experiments in two-dimensional environments and on a cue-rich linear track demonstrate that many cells represent a position ahead of the animal and that this position is the same irrespective of which direction the rat is coming from. Other lines of investigation have demonstrated that such predictive processes also occur in the non-spatial domain and that retrieval can be internally or externally cued. The mechanism of sequence retrieval and the usefulness of this retrieval to guide behaviour are discussed.


2019 ◽  
Author(s):  
Samuel T. Westreich ◽  
Maria Nattestad ◽  
Christopher Meyer

Abstract
Background: Genome-wide association studies (GWAS) are typically visualized using a two-dimensional Manhattan plot, displaying the chromosomal location of SNPs along the x-axis and the negative log10 of their p-values on the y-axis. This traditional plot provides a broad overview of the results, but offers little opportunity for interaction or expansion of specific regions, and is unable to show additional dimensions of the dataset.
Results: We created BigTop, a visualization framework in virtual reality (VR), designed to render a Manhattan plot in three dimensions, wrapping the graph around the user in a simulated cylindrical room. BigTop uses the z-axis to display the minor allele frequency of each SNP, allowing for the identification of allelic variants of genes. BigTop also offers additional interactivity, allowing users to select any individual SNP and receive expanded information, including SNP name, exact values, and gene location, if applicable. BigTop is built in JavaScript using the React and A-Frame frameworks, and can be rendered using commercially available VR headsets or in a two-dimensional web browser such as Google Chrome. Data is read into BigTop in JSON format, and can be provided as either JSON or a tab-separated text file.
Conclusions: Using the additional dimensions and interactivity options offered by VR, we provide a new, interactive, three-dimensional representation of the traditional Manhattan plot for displaying and exploring GWAS data.
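The axis mapping described for BigTop (chromosomal position on x, negative log10 p-value on y, minor allele frequency on z) is a simple per-SNP transform. The sketch below illustrates it; the input field names and output JSON layout are assumptions for illustration, not BigTop's actual schema:

```python
import json
import math

def gwas_to_3d(rows):
    """Map GWAS result rows to three plotting axes.

    Each row is a dict with 'snp', 'pos', 'pval', and 'maf' keys
    (illustrative names). Returns a JSON string of 3-D points:
    x = chromosomal position, y = -log10(p), z = minor allele frequency.
    """
    points = []
    for r in rows:
        points.append({
            "snp": r["snp"],
            "x": r["pos"],
            "y": -math.log10(r["pval"]),   # significance axis
            "z": r["maf"],                 # allele-frequency axis
        })
    return json.dumps(points)

# Example: a genome-wide significant SNP (p = 1e-8 gives y = 8.0)
payload = gwas_to_3d([{"snp": "rs123", "pos": 1_000_000,
                       "pval": 1e-8, "maf": 0.12}])
```

Emitting JSON matches the abstract's note that BigTop ingests JSON; a tab-separated input file would just need a parsing step in front of this transform.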


Author(s):  
Shujie Deng ◽  
Gavin Wheeler ◽  
Nicolas Toussaint ◽  
Lindsay Munroe ◽  
Suryava Bhattacharya ◽  
...  

The intricate nature of congenital heart disease means that successful surgical and interventional procedures require an understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart, derived from imaging data such as three-dimensional echocardiography. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines a full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so it is typically left to imaging specialists. In this paper we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping of visualisation features such as volume rendering, multi-planar reformatting, and flow visualisation, and of advanced interaction features such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation, and multi-user interactions. The available features were evaluated by imaging and non-imaging clinicians, showing that the virtual reality system can help improve understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.


2018 ◽  
Author(s):  
Hannah Haberkern ◽  
Melanie A. Basnak ◽  
Biafra Ahanonu ◽  
David Schauder ◽  
Jeremy D. Cohen ◽  
...  

Abstract
A navigating animal's sensory experience is shaped not just by its surroundings, but by its movements within them, which in turn are influenced by its past experiences. Studying the intertwined roles of sensation, experience and directed action in navigation has been made easier by the development of virtual reality (VR) environments for head-fixed animals, which allow for quantitative measurements of behavior in well-controlled sensory conditions. VR has long featured in studies of Drosophila melanogaster, but these experiments have typically relied on one-dimensional (1D) VR, effectively allowing the fly to change only its heading in a visual scene, and not its position. Here we explore how flies navigate in a two-dimensional (2D) visual VR environment that more closely resembles their experience during free behavior. We show that flies' interaction with landmarks in 2D environments cannot be automatically derived from their behavior in simpler 1D environments. Using a novel paradigm, we then demonstrate that flies in 2D VR adapt their behavior in a visual environment in response to optogenetically delivered appetitive and aversive stimuli. Much like free-walking flies after encounters with food, head-fixed flies respond to optogenetic activation of sugar-sensing neurons by initiating a local search behavior. Finally, by pairing optogenetic activation of heat-sensing cells to the flies' presence near visual landmarks of specific shapes, we elicit selective learned avoidance of landmarks associated with aversive "virtual heat". These head-fixed paradigms set the stage for an interrogation of fly brain circuitry underlying flexible navigation in complex visual environments.


Author(s):  
Anang Pramono ◽  
Martin Dwiky Setiawan

The concept of education for children is important, and the methods and learning media used must be carefully considered. In this research, an innovative alternative learning medium was created to help children learn about fruits using Augmented Reality (AR). Augmented Reality is, in principle, a technology that combines two-dimensional or three-dimensional virtual objects with a real environment and then projects them together. This learning medium combines picture cards with virtual reality: markers printed on the picture cards are captured by the mobile device's camera and processed, and 3D animated fruits appear on the mobile screen in real time. By combining the real world (real images on cards) with virtual objects, the application can stimulate children's imagination, curiosity, and motivation to keep learning. The 3D fruit models were created using Blender, and the Augmented Reality application was built using Unity and the Vuforia SDK library. The fruit-recognition application was tried out with several child respondents and tested on several types and brands of Android-based mobile phones. Based on the research trials, 86% of 30 respondents stated that the developed application is very effective as a medium for introducing fruits.

