Visual Perception of Real World Depth Map Resolution for Mixed Reality Rendering

Author(s):  
Lohit Petikam ◽  
Andrew Chalmers ◽  
Taehyun Rhee
2021 ◽  
Author(s):  
Vladimir Sergeevich Antonov ◽  
Maxim Igorevich Sorokin ◽  
Dmitry Dmitrievich Zhdanov

This work studies how virtual objects in virtual, augmented, and mixed reality systems affect the quality of human visual perception. It describes how virtual, augmented, and mixed reality systems operate, how images of the virtual and real worlds are formed, and the problems of combining them. For mixed reality systems, the basic principles of system construction, their specifics, and the formation of the real-world image are described. The specific causes of visual perception conflicts in mixed reality systems are considered. Three main causes of conflict can be overcome in modern mixed reality systems: the position and type of the light source; the correct formation of shadows cast by virtual sources on images of real-world objects; and the radiation intensity, including its angular distribution. A method is proposed for evaluating how correctly the illumination of virtual objects is formed in a mixed reality system and how this affects the quality of visual perception. To evaluate the proposed methodology, a set of test scenes with correct and incorrect lighting of virtual objects was developed. Visual perception quality was assessed by a test group of 24 people experienced with computer graphics systems, rated on a good/bad scale. Based on the results of this expert assessment, a requirement for building the lighting system for virtual objects in mixed reality systems was formulated.


2002 ◽  
Vol 11 (2) ◽  
pp. 176-188 ◽  
Author(s):  
Yuichi Ohta ◽  
Yasuyuki Sugaya ◽  
Hiroki Igarashi ◽  
Toshikazu Ohtsuki ◽  
Kaito Taguchi

In mixed reality, occlusions and shadows are important to realize a natural fusion between the real and virtual worlds. In order to achieve this, it is necessary to acquire dense depth information of the real world from the observer's viewing position. The depth sensor must be attached to the see-through HMD of the observer because he/she moves around. The sensor should be small and light enough to be attached to the HMD and should be able to produce a reliable dense depth map at video rate. Unfortunately, however, no such depth sensors are available. We propose a client/server depth-sensing scheme to solve this problem. A server sensor located at a fixed position in the real world acquires the 3-D information of the world, and a client sensor attached to each observer produces the depth map from his/her viewing position using the 3-D information supplied from the server. Multiple clients can share the 3-D information of the server; we call it Share-Z. In this paper, the concept and merits of Share-Z are discussed. An experimental system developed to demonstrate the feasibility of Share-Z is also described.
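The core of the Share-Z idea, reprojecting the server's shared 3-D data into each client's viewpoint, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes the server supplies a world-space point cloud and that each client knows its camera rotation R, translation t, and intrinsics K, then z-buffers the projected points into a per-view depth map.

```python
import numpy as np

def render_client_depth(points_world, R, t, K, width, height):
    """Project the server's world-space point cloud into a client view
    and z-buffer it into a depth map (illustrative sketch)."""
    # Transform points into the client camera frame.
    pts_cam = (R @ points_world.T).T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of camera
    # Perspective projection with intrinsics K.
    uvw = (K @ pts_cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = pts_cam[:, 2]
    depth = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if zi < depth[vi, ui]:                    # nearest surface wins
            depth[vi, ui] = zi
    return depth
```

Because only the fixed server sensor does the heavy 3-D acquisition, each HMD-mounted client reduces to this lightweight reprojection, which is what lets multiple observers share one set of 3-D data.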


2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitted radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found for viewing the virtual stimulus that was overlaid on the real world.
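For context on the CCT levels varied in this study: CCT is conventionally estimated from CIE 1931 (x, y) chromaticity coordinates. The snippet below uses McCamy's cubic approximation, a standard colorimetric formula, not a method from this paper:

```python
def mccamy_cct(x, y):
    """Estimate correlated color temperature (kelvin) from CIE 1931
    (x, y) chromaticity using McCamy's cubic approximation."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 daylight white (x ~ 0.3127, y ~ 0.3290) should come out near 6500 K.
print(round(mccamy_cct(0.3127, 0.3290)))
```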


Nanophotonics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 41-74
Author(s):  
Bernard C. Kress ◽  
Ishan Chatterjee

Abstract
This paper is a review and analysis of the various implementation architectures of diffractive waveguide combiners for augmented reality (AR) and mixed reality (MR) headsets and smart glasses. Extended reality (XR) is another acronym frequently used to refer to all variants across the MR spectrum. Such devices have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained. Already, market analysts show very optimistic expectations on return on investment in MR, for both enterprise and consumer applications. Hardware architectures and technologies for AR and MR have made tremendous progress over the past five years, fueled by recent investment hype in start-ups and accelerated mergers and acquisitions by larger corporations. In order to meet such high market expectations, several challenges must be addressed: first, cementing primary use cases for each specific market segment and, second, achieving greater MR performance out of increasingly size-, weight-, cost- and power-constrained hardware. One such crucial component is the optical combiner. Combiners are often considered critical optical elements in MR headsets, as they are the direct window to both the digital content and the real world for the user's eyes.
Two main pillars defining the MR experience are comfort and immersion. Comfort comes in various forms:
– wearable comfort: reducing weight and size, pushing back the center of gravity, addressing thermal issues, and so on
– visual comfort: providing accurate and natural 3-dimensional cues over a large field of view and at high angular resolution
– vestibular comfort: providing stable and realistic virtual overlays that spatially agree with the user's motion
– social comfort: allowing for true eye contact, in a socially acceptable form factor
Immersion can be defined as the multisensory perceptual experience (including audio, display, gestures, and haptics) that conveys to the user a sense of realism and envelopment.
In order to effectively address both comfort and immersion challenges through improved hardware architectures and software developments, a deep understanding of the specific features and limitations of the human visual perception system is required. We emphasize the need for a human-centric optical design process, which would allow for the most comfortable headset design (wearable, visual, vestibular, and social comfort) without compromising the user's sense of immersion (display, sensing, and interaction). Matching the specifics of the display architecture to the human visual perception system is key to bounding the constraints of the hardware, allowing for headset development and mass production at reasonable cost while providing a delightful experience to the end user.


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world: at the professional level, working executives are expected to spend ever more hours in the office or on business trips, while at the social level the population (especially the younger generation) is glued to the computer, playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system that allows pets to play new mixed reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait within a predefined area in the real world; an infra-red camera tracks the pet's movements and translates them into the virtual world of the system, mapping them onto a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world, which in turn drives the physical movable bait in the real world to move as the human avatar does. This unique way of playing computer games gives rise to a whole new form of mixed reality interaction between pet owners and their pets, bringing technology and its influence on leisure and social activities to the next level.
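The bidirectional mapping the abstract describes, tracked pet position into the virtual world and avatar position back out to the bait actuator, amounts to a coordinate transform between the camera image and the play area. A minimal sketch, with hypothetical camera resolution and virtual-arena dimensions (the paper does not specify these):

```python
CAM_RES = (640, 480)       # infra-red camera resolution (assumed)
WORLD_SIZE = (10.0, 10.0)  # virtual arena size in world units (assumed)

def real_to_virtual(px, py):
    """Map a tracked pet position in camera pixels to virtual-world
    coordinates by normalising into the predefined play area."""
    return px / CAM_RES[0] * WORLD_SIZE[0], py / CAM_RES[1] * WORLD_SIZE[1]

def virtual_to_real(wx, wy):
    """Inverse map: the human avatar's virtual position becomes the
    target position for the physical movable bait."""
    return wx / WORLD_SIZE[0] * CAM_RES[0], wy / WORLD_SIZE[1] * CAM_RES[1]
```

In a real system this linear map would be replaced by a calibrated homography to correct for camera placement and lens distortion; the sketch only illustrates the round-trip structure of the game loop.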


2021 ◽  
Author(s):  
Lohit Petikam

Art direction is crucial for films and games to maintain a cohesive visual style. This involves carefully controlling visual elements like lighting and colour to unify the director's vision of a story. With today's computer graphics (CG) technology, 3D animated films and games have become increasingly photorealistic. Unfortunately, art direction using CG tools remains laborious. Since realistic lighting can go against artistic intentions, art direction is almost impossible to preserve in real-time and interactive applications. New live applications like augmented and mixed reality (AR and MR) now demand automatically art-directed compositing in unpredictably changing real-world lighting.

This thesis addresses the problem of dynamically art-directed 3D composition into real scenes. Realism is a basic component of art direction, so we begin by optimising scene geometry capture in realistic composites. We find low perceptual thresholds to retain perceived seamlessness with respect to optimised real-scene fidelity. We then propose new techniques for automatically preserving art-directed appearance and shading for virtual 3D characters. Our methods allow artists to specify their intended appearance for different lighting conditions. Unlike with previous work, artists can direct and animate stylistic edits to automatically adapt to changing real-world environments. We achieve this with a new framework for look development and art direction using a novel latent space of varied lighting conditions. For more dynamic stylised lighting, we also propose a new framework for art-directing stylised shadows using novel parametric shadow editing primitives. This is a first approach that preserves art direction and stylisation under varied lighting in AR/MR.


2010 ◽  
Vol 19 (2) ◽  
pp. 151-171 ◽  
Author(s):  
Emily Troscianko

We read in a linear fashion, page by page, and we seem also to experience the world around us thus, moment by moment. But research on visual perception shows that perceptual experience is not pictorially representational: it does not consist in a linear, cumulative, totalizing process of building up a stream of internal picture-like representations. Current enactive, or sensorimotor, theories describe vision and imagination as operating through interactive potentiality. Kafka’s texts, which evoke perception as non-pictorial, provide scope for investigating the close links between vision and imagination in the context of the reading of fiction. Kafka taps into the fundamental perceptual processes by which we experience external and imagined worlds, by evoking fictional worlds through the characters’ perceptual enaction of them. The temporality of Kafka’s narratives draws us in by making concessions to how we habitually create ‘proper’, linear narratives out of experience, as reflected in traditional Realist narratives. However, Kafka also unsettles these processes of narrativization, showing their inadequacies and superfluities. Kafka’s works engage the reader’s imagination so powerfully because they correspond to the truth of perceptual experience, rather than merely to the fictions we conventionally make of it. Yet these texts also unsettle because we are unused to thinking of the real world as being just how these truly realistic, Kafkaesque worlds are: inadmissible of a complete, linear narrative, because always emerging when looked for, just in time.

