Analysis of the visual perception conflicts in the mixed reality systems with the real-world illumination parameters restoration

Author(s):  
Andrey Zhdanov ◽  
Dmitry Zhdanov ◽  
Igor S. Potemin ◽  
Nikolay Bogdanov
2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous stimuli emitting radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
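As an illustrative aside (not part of the study's method), the correlated color temperature that the abstract refers to can be estimated from CIE 1931 chromaticity coordinates (x, y) with McCamy's well-known closed-form approximation; the function name below is our own:

```python
def mccamy_cct(x, y):
    """Approximate CCT in kelvin from CIE 1931 (x, y) via McCamy's 1992 formula."""
    # n is the inverse slope of the line from (x, y) to the epicenter (0.3320, 0.1858)
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Sanity checks: the D65 white point should land near 6500 K,
# and CIE illuminant A near 2856 K.
print(mccamy_cct(0.3127, 0.3290))
print(mccamy_cct(0.4476, 0.4074))
```

The approximation is accurate to within a few kelvin for typical interior CCT ranges; outside those ranges, dedicated colorimetric methods are preferred.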


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world: at a professional level, demands on the working executive require more hours either in the office or on business trips; at a social level, the population (especially the younger generation) is glued to the computer, either playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system which allows pets to play new mixed reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait in the real world within a predefined area; an infrared camera tracks the pet's movements and translates them into the virtual world of the system, mapping them to the movement of a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world; this in turn drives the movements of the physical bait in the real world, which moves as the human avatar does. This unique way of playing computer games gives rise to a whole new form of mixed reality interaction between the pet owner and her pet, thereby bringing technology and its influence on leisure and social activities to the next level.


2010 ◽  
Vol 19 (2) ◽  
pp. 151-171 ◽  
Author(s):  
Emily Troscianko

We read in a linear fashion, page by page, and we seem also to experience the world around us thus, moment by moment. But research on visual perception shows that perceptual experience is not pictorially representational: it does not consist in a linear, cumulative, totalizing process of building up a stream of internal picture-like representations. Current enactive, or sensorimotor, theories describe vision and imagination as operating through interactive potentiality. Kafka’s texts, which evoke perception as non-pictorial, provide scope for investigating the close links between vision and imagination in the context of the reading of fiction. Kafka taps into the fundamental perceptual processes by which we experience external and imagined worlds, by evoking fictional worlds through the characters’ perceptual enaction of them. The temporality of Kafka’s narratives draws us in by making concessions to how we habitually create ‘proper’, linear narratives out of experience, as reflected in traditional Realist narratives. However, Kafka also unsettles these processes of narrativization, showing their inadequacies and superfluities. Kafka’s works engage the reader’s imagination so powerfully because they correspond to the truth of perceptual experience, rather than merely to the fictions we conventionally make of it. Yet these texts also unsettle because we are unused to thinking of the real world as being just how these truly realistic, Kafkaesque worlds are: inadmissible of a complete, linear narrative, because always emerging when looked for, just in time.


2020 ◽  
Vol 3 (1) ◽  
pp. 9-10
Author(s):  
Rehan Ahmed Khan

In the field of surgery, major changes that have occurred include the advent of minimally invasive surgery and the realization of the importance of 'systems' in the surgical care of the patient (Pierorazio & Allaf, 2009). Challenges in surgical training are two-fold: (i) to train surgical residents to manage a patient clinically, and (ii) to train them in operative skills (Singh & Darzi, 2013). In Pakistan, another issue with surgical training is that we have the shortest duration of training in general surgery, of four years only, compared to six to eight years in Europe and America (Zafar & Rana, 2013). In addition, the low ratio of patients to surgical residents is also an issue in surgical training. This warrants formal training outside the operating room. It has been reported by many authors that changes are required in the current surgical training system due to significant deficiencies in graduating surgeons (Carlsen et al., 2014; Jarman et al., 2009; Parsons, Blencowe, Hollowood, & Grant, 2011). It is imperative that a surgeon is competent in clinical management and operative skills at the end of surgical training. To achieve this outcome in such a challenging scenario, a resident surgeon should be provided with opportunities for training outside the operating theatre before s/he performs procedures on a real patient. The need for this training was felt even more when the Institute of Medicine in the USA published the report 'To Err is Human' (Stelfox, Palmisani, Scurlock, Orav, & Bates, 2006), with the aim of reducing medical errors. Such training also allows better instruction and objective assessment of surgical residents. The options for this training include, but are not limited to, the use of mannequins, virtual patients, virtual simulators, virtual reality, augmented reality, and mixed reality.
Simulation is a technique to substitute or augment real experiences with guided ones, often immersive in nature, that reproduce substantial aspects of the real world in a fully interactive way. Mannequins and virtual simulators have been in use for a long time. They are available in low- to high-fidelity versions and help residents understand the surgical anatomy and operative site and practice their skills. Virtual patients can be discussed with students in a simple format of text, pictures, and videos as case files available online, or in the form of customized software applications based on algorithms. In a study by Courteille et al., knowledge retention was found to increase when content was delivered to residents through virtual patients as compared to lecturing (Courteille et al., 2018). But learning the skills component requires hands-on practice. This gap can be bridged with virtual, augmented, or mixed reality. There are three types of virtual reality (VR) technologies: (i) non-immersive, (ii) semi-immersive, and (iii) fully immersive. Non-immersive VR involves the use of software and computers. In semi-immersive and fully immersive VR, the virtual image is presented through a head-mounted display (HMD), the difference being that in the fully immersive type, the actual world is completely obscured by the virtual image. Using handheld devices with haptic feedback, the trainee can perform a procedure in the virtual environment (Douglas, Wilke, Gibson, Petricoin, & Liotta, 2017). Augmented reality (AR) can be divided into complete AR and mixed reality (MR). Through AR and MR, a trainee can see a virtual and a real-world image at the same time, making it easy for the supervisor to explain the steps of the surgery. Similar to VR, in AR and MR the user wears an HMD that shows both images. In AR, the virtual image is transparent whereas, in MR, it appears solid (Douglas et al., 2017).
Virtual, augmented, and mixed reality have greater potential to train surgeons, as they provide fidelity very close to the real situation and require fewer physical resources and less space compared to conventional simulators. But they are costlier, and affordability is an issue. To overcome this, low-cost virtual reality solutions have been developed. It is high time that we start thinking along the same lines and develop these means of training our surgeons at an affordable cost.


2021 ◽  
Author(s):  
Regan Petrie

Early, intense practice of functional, repetitive rehabilitation interventions has shown positive results towards lower-limb recovery for stroke patients. However, long-term engagement in daily physical activity is necessary to maximise the physical and cognitive benefits of rehabilitation. The mundane, repetitive nature of traditional physiotherapy interventions and other personal, environmental and physical elements create barriers to participation. It is well documented that stroke patients engage in as little as 30% of their rehabilitation therapies. Digital gamified systems have shown positive results towards addressing these barriers to engagement in rehabilitation, but there is a lack of low-cost, commercially available systems that are designed and personalised for home use. At the same time, emerging mixed reality technologies offer the ability to seamlessly integrate digital objects into the real world, generating an immersive, unique virtual world that leverages the physicality of the real world for a personalised, engaging experience. This thesis explored how the design of an augmented reality exergame can facilitate engagement in independent lower-limb stroke rehabilitation. Our system converted prescribed exercises into active gameplay using commercially available augmented reality mobile technology. Such a system introduces an engaging, interactive alternative to existing mundane physiotherapy exercises. The development of the system was based on a user-centered iterative design process. The involvement of health care professionals and stroke patients throughout each stage of the design and development process helped us understand users' needs, requirements and environment, refine the system, and ensure its validity as a substitute for traditional rehabilitation interventions. The final output was an augmented reality exergame that progressively facilitates sit-to-stand exercises by offering immersive interactions with digital exotic wildlife.
We hypothesize that the immersive, active nature of a mobile, mixed reality exergame will increase engagement in independent task training for lower-limb rehabilitation.


2021 ◽  
Vol 2 ◽  
Author(s):  
Holly C. Gagnon ◽  
Yu Zhao ◽  
Matthew Richardson ◽  
Grant D. Pointon ◽  
Jeanine K. Stefanucci ◽  
...  

Measures of perceived affordances—judgments of action capabilities—are an objective way to assess whether users perceive mediated environments similarly to the real world. Previous studies suggest that judgments of stepping over a virtual gap using augmented reality (AR) are underestimated relative to judgments of real-world gaps, which are generally overestimated. Across three experiments, we investigated whether two factors associated with AR devices contributed to the observed underestimation: weight and field of view (FOV). In the first experiment, observers judged whether they could step over virtual gaps while wearing the HoloLens (virtual gaps) or not (real-world gaps). The second experiment tested whether weight contributes to underestimation of perceived affordances by having participants wear the HoloLens during judgments of both virtual and real gaps. We replicated the effect of underestimation of step capabilities in AR as compared to the real world in both Experiments 1 and 2. The third experiment tested whether FOV influenced judgments by simulating a narrow (similar to the HoloLens) FOV in virtual reality (VR). Judgments made with a reduced FOV were compared to judgments made with the wider FOV of the HTC Vive Pro. The results showed relative underestimation of judgments of stepping over gaps in narrow vs. wide FOV VR. Taken together, the results suggest that there is little influence of weight of the HoloLens on perceived affordances for stepping, but that the reduced FOV of the HoloLens may contribute to the underestimation of stepping affordances observed in AR.


2005 ◽  
Vol 25 (6) ◽  
pp. 22-23 ◽  
Author(s):  
B. MacIntyre ◽  
M.A. Livingston

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1123
Author(s):  
David Jurado ◽  
Juan M. Jurado ◽  
Lidia Ortega ◽  
Francisco R. Feito

Mixed reality (MR) enables a novel way to visualize virtual objects in real scenarios while respecting physical constraints. This technology arises alongside other significant advances in sensor fusion for human-centric 3D capturing. Recent advances in scanning the user environment, real-time visualization, and 3D vision using ubiquitous systems like smartphones allow us to capture 3D data from the real world. In this paper, a disruptive application for assessing the status of indoor infrastructure is proposed. The installation and maintenance of hidden facilities such as water pipes, electrical lines and air-conditioning ducts, which are usually occluded behind walls, involve tedious and inefficient tasks. Most of these infrastructures are digitized, but they cannot be visualized on-site. In this research, we focused on the development of a new application (GEUINF), to be launched on smartphones capable of capturing 3D data of the real world by depth sensing. This information is used to determine the user's position and orientation. Although previous approaches used fixed markers for this purpose, our application estimates both parameters with centimeter accuracy without them. This is possible because our method is based on a matching process between reconstructed walls of the real world and 3D planes of the replicated world in a virtual environment. Our markerless approach scans planar surfaces of the user environment, which are then geometrically aligned with their corresponding virtual 3D entities. In a preprocessing phase, the 2D CAD geometry available from an architectural project is used to generate 3D models of the indoor building structure. In real time, these virtual elements are tracked against the real ones modeled using the ARCore library.
Once the alignment between the virtual and real worlds is done, the application enables visualization, navigation and interaction with the virtual facility networks in real time. Thus, our method may be used by private companies and public institutions responsible for indoor facilities management, and it may also be integrated with other applications focused on indoor navigation.
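The geometric alignment step described in this abstract (matching reconstructed real-world walls to their virtual CAD counterparts) reduces, once point correspondences between the surfaces are assumed, to estimating a rigid transform. A minimal sketch using the standard Kabsch algorithm; the function name is hypothetical and not part of GEUINF:

```python
import numpy as np

def align_planes(P, Q):
    """Estimate rotation R and translation t mapping point set P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice the correspondences would come from the detected planar surfaces (e.g. ARCore plane anchors) rather than raw points, but the least-squares rigid alignment is the same.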


Author(s):  
Mark Pegrum

What is it? Augmented Reality (AR) bridges the real and the digital. It is part of the Extended Reality (XR) spectrum of immersive technological interfaces. At one end of the continuum, Virtual Reality (VR) immerses users in fully digital simulations which effectively substitute for the real world. At the other end of the continuum, AR allows users to remain immersed in the real world while superimposing digital overlays on the world. The term mixed reality, meanwhile, is sometimes used as an alternative to AR and sometimes as an alternative to XR.

