Guest Editors' Introduction: Moving Mixed Reality into the Real World

2005 ◽  
Vol 25 (6) ◽  
pp. 22-23 ◽  
Author(s):  
B. MacIntyre ◽  
M.A. Livingston
2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous stimuli emitting radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
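A reduced degree of chromatic adaptation, as reported above, is commonly modeled with a degree-of-adaptation parameter D in a von Kries-style transform. The sketch below is illustrative only, assuming CAT02-like cone responses are already available; the function name and default D value are our own assumptions, not taken from the study.

```python
def von_kries_adapt(lms, lms_white, lms_ref_white, D=0.7):
    """Scale cone responses toward a reference white.

    D=1 models complete adaptation; D<1 models the reduced degree of
    chromatic adaptation reported for virtual stimuli overlaid on the
    real world. All inputs are [L, M, S] cone-response triples.
    """
    return [
        c * (D * (wr / w) + (1.0 - D))
        for c, w, wr in zip(lms, lms_white, lms_ref_white)
    ]
```

With D=1 the stimulus is fully renormalized to the reference white; with D=0 it is left unchanged, so intermediate D values interpolate between the two states.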


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world: at a professional level, demands on the working executive require more hours either in the office or on business trips; at a social level, the population (especially the younger generation) is glued to the computer, either playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system which allows pets to play new mixed reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait in the real world within a predefined area; an infrared camera tracks the pet's movements and translates them into the virtual world of the system, mapping them to the movement of a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world; this in turn drives the physical movable bait in the real world, which moves as the human avatar does. This unique way of playing computer games gives rise to a whole new mode of mixed reality interaction between the pet owner and her pet, thereby bringing technology and its influence on leisure and social activities to the next level.
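The translation of the pet's tracked position into the virtual world amounts to a coordinate mapping between the two spaces. A minimal sketch, assuming an axis-aligned rectangular play area and ignoring camera calibration (the function and parameter names are hypothetical, not from the paper):

```python
def real_to_virtual(pos, real_bounds, virtual_bounds):
    """Linearly map a tracked (x, y) position from the real play area
    into virtual-world coordinates.

    Each bounds argument is ((min_x, min_y), (max_x, max_y)).
    """
    (rx0, ry0), (rx1, ry1) = real_bounds
    (vx0, vy0), (vx1, vy1) = virtual_bounds
    x, y = pos
    return (
        vx0 + (x - rx0) / (rx1 - rx0) * (vx1 - vx0),
        vy0 + (y - ry0) / (ry1 - ry0) * (vy1 - vy0),
    )
```

The inverse of the same mapping would drive the physical bait from the human avatar's virtual position.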


2020 ◽  
Vol 3 (1) ◽  
pp. 9-10
Author(s):  
Rehan Ahmed Khan

In the field of surgery, major changes that have occurred include the advent of minimally invasive surgery and the realization of the importance of ‘systems’ in the surgical care of the patient (Pierorazio & Allaf, 2009). Challenges in surgical training are two-fold: (i) to train surgical residents to manage a patient clinically, and (ii) to train them in operative skills (Singh & Darzi, 2013). In Pakistan, another issue with surgical training is that we have the shortest duration of training in general surgery, of only four years, compared to six to eight years in Europe and America (Zafar & Rana, 2013). Along with this, the low ratio of patients to surgical residents is also an issue in surgical training. This warrants formal training outside the operating room. It has been reported by many authors that changes are required in the current surgical training system due to significant deficiencies in graduating surgeons (Carlsen et al., 2014; Jarman et al., 2009; Parsons, Blencowe, Hollowood, & Grant, 2011). Considering surgical training, it is imperative that a surgeon is competent in clinical management and operative skills at the end of training. To achieve this outcome in this challenging scenario, a resident surgeon should be provided with opportunities for training outside the operating theatre before s/he can perform procedures on a real patient. The need for this training was felt more when the Institute of Medicine in the USA published the report ‘To Err is Human’ (Stelfox, Palmisani, Scurlock, Orav, & Bates, 2006), with the aim of reducing medical errors. This is required for better training and objective assessment of surgical residents. The options for this training include, but are not limited to, the use of mannequins, virtual patients, virtual simulators, virtual reality, augmented reality, and mixed reality.
Simulation is a technique to substitute or augment real experiences with guided ones, often immersive in nature, that reproduce substantial aspects of the real world in a fully interactive way. Mannequins and virtual simulators have been in use for a long time. They are available in low to high fidelity and help residents understand the surgical anatomy and operative site and practice their skills. Virtual patients can be discussed with students in a simple format of text, pictures, and videos as case files available online, or in the form of customized software applications based on algorithms. Courteille et al. reported that knowledge retention in residents increases when teaching is delivered through virtual patients as compared to lecturing (Courteille et al., 2018). But learning the skills component requires hands-on practice. This gap can be bridged with virtual, augmented, or mixed reality. There are three types of virtual reality (VR) technologies: (i) non-immersive, (ii) semi-immersive, and (iii) fully immersive. Non-immersive VR involves the use of software and computers. In semi-immersive and fully immersive VR, the virtual image is presented through a head-mounted display (HMD), the difference being that in the fully immersive type, the actual world is completely obscured from view. Using handheld devices with haptic feedback, the trainee can perform a procedure in the virtual environment (Douglas, Wilke, Gibson, Petricoin, & Liotta, 2017). Augmented reality (AR) can be divided into complete AR and mixed reality (MR). Through AR and MR, a trainee can see a virtual and a real-world image at the same time, making it easy for the supervisor to explain the steps of the surgery. Similar to VR, in AR and MR the user wears an HMD that shows both images. In AR, the virtual image is transparent whereas, in MR, it appears solid (Douglas et al., 2017).
Virtual, augmented, and mixed reality have greater potential to train surgeons, as they provide fidelity very close to the real situation and require fewer physical resources and less space than physical simulators. But they are costlier, and affordability is an issue. To overcome this, low-cost virtual reality solutions have been developed. It is high time we started thinking along the same lines and developed such means of training our surgeons at an affordable cost.


2021 ◽  
Author(s):  
Regan Petrie

<p>Early, intense practice of functional, repetitive rehabilitation interventions has shown positive results for lower-limb recovery in stroke patients. However, long-term engagement in daily physical activity is necessary to maximise the physical and cognitive benefits of rehabilitation. The mundane, repetitive nature of traditional physiotherapy interventions, along with other personal, environmental and physical factors, creates barriers to participation. It is well documented that stroke patients engage in as little as 30% of their rehabilitation therapies. Digital gamified systems have shown positive results in addressing these barriers to engagement in rehabilitation, but there is a lack of low-cost, commercially available systems that are designed and personalised for home use. At the same time, emerging mixed reality technologies offer the ability to seamlessly integrate digital objects into the real world, generating an immersive, unique virtual world that leverages the physicality of the real world for a personalised, engaging experience. This thesis explored how the design of an augmented reality exergame can facilitate engagement in independent lower-limb stroke rehabilitation. Our system converted prescribed exercises into active gameplay using commercially available augmented reality mobile technology. Such a system introduces an engaging, interactive alternative to existing mundane physiotherapy exercises. The development of the system was based on a user-centered iterative design process. The involvement of health care professionals and stroke patients throughout each stage of the design and development process helped us understand users’ needs, requirements and environment, refine the system, and ensure its validity as a substitute for traditional rehabilitation interventions. The final output was an augmented reality exergame that progressively facilitates sit-to-stand exercises by offering immersive interactions with digital exotic wildlife.
We hypothesize that the immersive, active nature of a mobile, mixed reality exergame will increase engagement in independent task training for lower-limb rehabilitation.</p>
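One way such an exergame might count sit-to-stand repetitions is by thresholding a height signal (e.g. the height of the handheld device or headset) with hysteresis. This is a sketch under our own assumptions, not the thesis's implementation; the function name and thresholds (in metres) are hypothetical:

```python
def count_sit_to_stands(heights, sit_h, stand_h):
    """Count completed sit-to-stand repetitions in a stream of height
    samples.

    Uses two thresholds with hysteresis: a repetition is counted each
    time the user rises from below sit_h to above stand_h, which avoids
    double-counting jitter around a single threshold.
    """
    reps, seated = 0, False
    for h in heights:
        if h < sit_h:
            seated = True          # user has sat down
        elif h > stand_h and seated:
            reps += 1              # completed a full sit-to-stand
            seated = False
    return reps
```

A game loop would feed live tracking samples into this counter and award progress per completed repetition.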


2021 ◽  
Vol 2 ◽  
Author(s):  
Holly C. Gagnon ◽  
Yu Zhao ◽  
Matthew Richardson ◽  
Grant D. Pointon ◽  
Jeanine K. Stefanucci ◽  
...  

Measures of perceived affordances—judgments of action capabilities—are an objective way to assess whether users perceive mediated environments similarly to the real world. Previous studies suggest that judgments of stepping over a virtual gap using augmented reality (AR) are underestimated relative to judgments of real-world gaps, which are generally overestimated. Across three experiments, we investigated whether two factors associated with AR devices contributed to the observed underestimation: weight and field of view (FOV). In the first experiment, observers judged whether they could step over virtual gaps while wearing the HoloLens (virtual gaps) or not (real-world gaps). The second experiment tested whether weight contributes to underestimation of perceived affordances by having participants wear the HoloLens during judgments of both virtual and real gaps. We replicated the effect of underestimation of step capabilities in AR as compared to the real world in both Experiments 1 and 2. The third experiment tested whether FOV influenced judgments by simulating a narrow (similar to the HoloLens) FOV in virtual reality (VR). Judgments made with a reduced FOV were compared to judgments made with the wider FOV of the HTC Vive Pro. The results showed relative underestimation of judgments of stepping over gaps in narrow vs. wide FOV VR. Taken together, the results suggest that there is little influence of weight of the HoloLens on perceived affordances for stepping, but that the reduced FOV of the HoloLens may contribute to the underestimation of stepping affordances observed in AR.
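Affordance judgments of this kind are often summarized by the gap width at which "yes" responses fall to 50%. A minimal sketch of that estimate by linear interpolation, assuming binary judgments grouped by tested gap width (this function is illustrative, not the authors' analysis code):

```python
def affordance_threshold(judgments):
    """Estimate the 50% crossover width of stepping judgments.

    judgments maps gap width -> list of bools (True = "I can step over
    it"). Returns the width where the proportion of "yes" responses
    crosses 50%, linearly interpolated between adjacent tested widths,
    or None if no crossing is found.
    """
    widths = sorted(judgments)
    props = [sum(judgments[w]) / len(judgments[w]) for w in widths]
    for (w0, p0), (w1, p1) in zip(zip(widths, props),
                                  zip(widths[1:], props[1:])):
        if p0 >= 0.5 > p1:  # proportion of "yes" falls below 50% here
            return w0 + (p0 - 0.5) / (p0 - p1) * (w1 - w0)
    return None
```

Comparing this threshold across AR, VR, and real-world conditions gives the under- or overestimation effects described above.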


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1123
Author(s):  
David Jurado ◽  
Juan M. Jurado ◽  
Lidia Ortega ◽  
Francisco R. Feito

Mixed reality (MR) enables a novel way to visualize virtual objects in real scenes while respecting physical constraints. This technology arises alongside other significant advances in the field of sensor fusion for human-centric 3D capturing. Recent advances in scanning the user environment, real-time visualization and 3D vision using ubiquitous systems like smartphones allow us to capture 3D data from the real world. In this paper, a disruptive application for assessing the status of indoor infrastructure is proposed. The installation and maintenance of hidden facilities such as water pipes, electrical lines and air conditioning ducts, which are usually occluded behind walls, entails tedious and inefficient tasks. Most of these infrastructures are digitized, but they cannot be visualized onsite. In this research, we focused on the development of a new application (GEUINF), to be launched on smartphones, that is capable of capturing 3D data of the real world by depth sensing. This information is relevant for determining the user's position and orientation. Although previous approaches used fixed markers for this purpose, our application enables the estimation of both parameters with centimeter accuracy without them. This novelty is possible because our method is based on a matching process between reconstructed walls of the real world and 3D planes of the replicated world in a virtual environment. Our markerless approach is based on scanning planar surfaces of the user environment, which are then geometrically aligned with their corresponding virtual 3D entities. In a preprocessing phase, the 2D CAD geometry available from an architectural project is used to generate 3D models of the indoor building structure. In real time, these virtual elements are tracked against the real ones, modeled using the ARCore library.
Once the alignment between the virtual and real worlds is done, the application enables visualization, navigation and interaction with the virtual facility networks in real time. Thus, our method may be used by private companies and public institutions responsible for indoor facilities management, and may also be integrated with other applications focused on indoor navigation.
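The matching step between a scanned wall and its CAD counterpart can be sketched as estimating a rigid transform from one wall-segment correspondence. The paper's method works on 3D planes; this simplified 2D floor-plan version and its names are our own assumptions:

```python
import math

def align_wall(scan_p, scan_dir, cad_p, cad_dir):
    """Estimate the 2D rigid transform mapping a scanned wall segment
    (anchor point + unit direction) onto its CAD counterpart.

    Returns (theta, (tx, ty)): first rotate by theta about the origin,
    then translate, so the scanned anchor lands on the CAD anchor and
    the directions coincide.
    """
    theta = (math.atan2(cad_dir[1], cad_dir[0])
             - math.atan2(scan_dir[1], scan_dir[0]))
    c, s = math.cos(theta), math.sin(theta)
    # rotate the scanned anchor point, then solve for the translation
    rx = c * scan_p[0] - s * scan_p[1]
    ry = s * scan_p[0] + c * scan_p[1]
    return theta, (cad_p[0] - rx, cad_p[1] - ry)
```

In practice one correspondence fixes the transform only up to noise, so a real system would aggregate several wall matches (e.g. by least squares) before anchoring the virtual model.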


Author(s):  
Mark Pegrum

What is it? Augmented Reality (AR) bridges the real and the digital. It is part of the Extended Reality (XR) spectrum of immersive technological interfaces. At one end of the continuum, Virtual Reality (VR) immerses users in fully digital simulations which effectively substitute for the real world. At the other end of the continuum, AR allows users to remain immersed in the real world while superimposing digital overlays on the world. The term mixed reality, meanwhile, is sometimes used as an alternative to AR and sometimes as an alternative to XR.


Lex Russica ◽  
2020 ◽  
pp. 86-96
Author(s):  
E. E. Bogdanova

In the paper, the author notes that the development of modern technologies, including artificial intelligence, unmanned transport, robotics, and portable and embedded digital devices, already has a great impact on the daily life of a person and can fundamentally change the existing social order in the near future. Virtual reality as a technology was born at the intersection of research in three-dimensional computer graphics and human-machine interaction. The spectrum of mixed reality includes the real world itself, the one that is before our eyes; the world of augmented reality, an improved reality that results from introducing sensory data into the field of perception in order to supplement information about the surrounding world and improve the perception of information; and the world of virtual reality, which is created using technologies that provide full immersion in the environment. In some studies, augmented virtuality is also included in the spectrum, which implies supplementing virtual reality with elements of the real world (combining the virtual and real worlds). The paper substantiates the conclusion that in the near future both the legislator and judicial practice will have to find a balance between the interests of the creators of virtual worlds and virtual artists in exclusive control over their virtual works, on the one hand, and the interest of society in using these virtual works and building on them, on the other. It is necessary to allow users to participate, interact and create new forms of creative expression in the virtual environment. The author concludes that a broader interpretation of the fair use doctrine should be applied in this area, especially for those virtual worlds and virtual objects that imitate the real world and reality.
However, it is necessary to distinguish between cases where the protection of such objects justifies licensing and those where it is advisable to encourage unrestricted use of the results for the further development of new technologies. 


10.5334/bck.i ◽  
2021 ◽  
pp. 93-103
Author(s):  
Mafkereseb Kassahun Bekele

Virtual heritage (VH) is one of the few domains to have adopted immersive reality technologies at an early stage, with a significant number of studies employing the technologies for various application themes. More specifically, virtual reality has persisted as the de facto immersive reality technology for virtual reconstruction and virtual museums. In recent years, however, mixed reality (MxR) has attracted attention from the VH community following the introduction of new devices, such as the Microsoft HoloLens, to the technological landscape of immersive reality. Two variant perceptions of MxR have been observed in the literature over the past two decades. First, MxR is perceived as an umbrella/collective term for virtual reality (VR) and augmented reality (AR) environments. Second, it is also presented as a distinctive form of immersive reality that enables merging virtual elements with their real-world counterparts. These perceptions influence the choice of immersive reality technology, interaction design and implementation, and the overall objective of VH applications. To address these concerns, this chapter attempts to answer two critical questions: (1) what MxR is from a VH perspective, and (2) whether MxR is just a form of immersive reality that serves as a bridge connecting the real world with a virtual one, or a fusion of both in which neither the real nor the virtual world has meaning without a contextual relationship and interaction with the other. To this end, this chapter reviews VH applications and literature from the past few years and identifies how MxR is presented. It also suggests how the VH community can benefit from MxR, discusses limitations in existing technology, and identifies areas and directions for future research in the domain.

