Automatic calibration of an optical see-through head-mounted display for augmented reality applications in computer-assisted interventions

Author(s):  
Michael Figl ◽  
Christopher Ede ◽  
Wolfgang Birkfellner ◽  
Johann Hummel ◽  
Rudolf Seemann ◽  
...  
2021 ◽  
Author(s):  
Aria M Jamshidi ◽  
Vyacheslav Makler ◽  
Michael Y Wang

Abstract Augmented reality (AR) is a novel technology for spine navigation. The tracking-camera-integrated head-mounted display (HMD) represents a novel stereotactic computer navigation modality that has demonstrated excellent precision and accuracy in spinal instrumentation.1 Standard computer-assisted spine navigation systems have two major shortcomings: attention shift and line-of-sight limitations. The HMD allows the surgical field and navigation data to be visualized concurrently in the same field of view.2,3 To date, however, the use of AR in spine surgery has been limited to instrumentation, not endoscopy. Fully endoscopic transforaminal interbody fusion under conscious sedation is an effective treatment option for degenerative spondylolisthesis and spinal stenosis. Its advantages are substantial, including preservation of normal tissue, a smaller incision, and reduced postoperative pain, all of which enable rapid recovery after surgery. As with other endoscopic spine surgeries, the procedure has a steep learning curve and requires a robust understanding of foraminal anatomy in order to safely access the disc space.4,5 The introduction of AR could greatly improve the safety and precision of this procedure. In this video, we present the case of a 60-yr-old female who presented with grade 1 spondylolisthesis and severe spinal stenosis and was treated with an L4-L5 interbody fusion. All instrumentation steps and localization for the endoscopic portion of the case were performed with assistance from the AR-HMD system. Informed written consent was obtained from the patient. The participant and any identifiable individuals consented to the publication of his/her image.


Author(s):  
Eugene Hayden ◽  
Kang Wang ◽  
Chengjie Wu ◽  
Shi Cao

This study explores the design, implementation, and evaluation of an Augmented Reality (AR) prototype that assists novice operators in performing procedural tasks in simulator environments. The prototype uses an optical see-through head-mounted display (OST HMD) in conjunction with a simulator display to overlay sequences of interactive visual and attention-guiding cues onto the operator's field of view. We used a 2x2 within-subject design crossing two conditions (with/without AR cues) with two procedural tasks (preflight and landing); a voice assistant was available in both conditions. Twenty-six novice operators took part in the experiment. The results demonstrated that augmented reality improved situation awareness and accuracy; however, it yielded longer task completion times, creating a speed-accuracy trade-off in favour of accuracy. No significant effect on mental workload was found. The results suggest that augmented reality systems have the potential to be used by a wider audience of operators.


2021 ◽  
Author(s):  
Nina Rohrbach ◽  
Joachim Hermsdörfer ◽  
Lisa-Marie Huber ◽  
Annika Thierfelder ◽  
Gavin Buckingham

Abstract Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the sensorimotor system of the user: are they fully integrated with external environmental cues, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size–weight illusion (SWI). Thirty-two participants repeatedly lifted and reported the heaviness of two cubes of unequal volume but equal mass in alternation. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues lifted the large object at a higher rate of force than the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during the first lifts of the objects, with an SWI developing over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are dismissed when conflicting with cues from other senses.
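The "fingertip force rates" measured in the abstract above are typically obtained by differentiating a sampled force signal and taking its peak before lift-off. A minimal sketch of that computation, assuming NumPy and a hypothetical `peak_force_rate` helper (the paper's actual analysis pipeline is not specified here):

```python
import numpy as np

def peak_force_rate(force, dt):
    """Estimate the peak rate of force change (N/s) from a sampled
    force trace, using central finite differences via np.gradient."""
    rate = np.gradient(np.asarray(force, dtype=float), dt)
    return float(np.max(rate))

# Example: a grip force ramping linearly at 10 N/s, sampled at 100 Hz,
# should yield a peak force rate of about 10 N/s.
t = np.arange(0.0, 1.0, 0.01)
ramp = 10.0 * t
print(peak_force_rate(ramp, 0.01))
```

In practice the force trace would be low-pass filtered before differentiation; the sketch omits that step for brevity.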


2017 ◽  
Vol 26 (1) ◽  
pp. 16-41 ◽  
Author(s):  
Jonny Collins ◽  
Holger Regenbrecht ◽  
Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can become successful in the market and in society, those MR systems must be able to deliver a convincing, novel experience for the users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system has to provide a sensory, in particular visually, coherent set of stimuli. Therefore, issues with visual coherence, that is, a discontinuous experience of an MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system for coherence. This article presents a framework and exemplary implementation of a systematic enquiry into issues with visual coherence and possible solutions to address those issues. The focus is set on head-mounted display-based systems, notwithstanding the framework's applicability to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims at guiding developers of mixed reality systems toward better and more effective user experiences.


2006 ◽  
Vol 5 (3) ◽  
pp. 33-39 ◽  
Author(s):  
Seokhee Jeon ◽  
Hyeongseop Shim ◽  
Gerard J. Kim

In this paper, we have investigated the comparative usability of three different viewing configurations of an augmented reality (AR) system that uses a desktop monitor instead of a head-mounted display. In many cases, due to operational or cost reasons, the use of head-mounted displays may not be viable. Such a configuration is bound to cause usability problems because of the mismatch in the user's proprioception, scale, and hand-eye coordination, and the reduced 3D depth perception. We asked a pool of subjects to carry out an object manipulation task in three different desktop AR setups. We measured the subjects' task performance and surveyed the perceived usability and preference. Our results indicated that placing a fixed camera behind the user was the best option for convenience, while attaching a camera to the user's head was best for task performance. The results should provide a valuable guide for designing desktop augmented reality systems without head-mounted displays.


10.29007/72d4 ◽  
2018 ◽  
Author(s):  
He Liu ◽  
Edouard Auvinet ◽  
Joshua Giles ◽  
Ferdinando Rodriguez Y Baena

Computer Aided Surgery (CAS) is helpful, but it clutters an already overcrowded operating theatre and tends to disrupt the workflow of conventional surgery. In order to provide seamless computer assistance with improved immersion and a more natural surgical workflow, we propose an augmented reality-based navigation system for CAS. Here, we choose to focus on the proximal femoral anatomy, which we register to a plan by processing depth information of the surgical site captured by a commercial depth camera. Intra-operative three-dimensional surgical guidance is then provided to the surgeon through a commercial augmented reality headset, to drill a pilot hole in the femoral head, so that the user can perform the operation without additional physical guides. The user can interact intuitively with the system by simple gestures and voice commands, resulting in a more natural workflow. To assess the surgical accuracy of the proposed setup, 30 experiments of pilot hole drilling were performed on femur phantoms. The position and orientation of the drilled guide holes were measured and compared with the preoperative plan, and the mean errors were within 2 mm and 2°, results which are in line with today's commercial computer-assisted orthopaedic systems.
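The accuracy figures above (within 2 mm and 2°) correspond to two standard error measures: the Euclidean distance between planned and drilled entry points, and the angle between planned and drilled hole axes. A minimal sketch of that comparison, assuming NumPy; the function name and data layout are illustrative, not taken from the paper:

```python
import numpy as np

def hole_errors(planned_entry, planned_axis, drilled_entry, drilled_axis):
    """Return (position error in mm, orientation error in degrees)
    between a planned and a measured pilot hole."""
    # Position error: Euclidean distance between entry points.
    pos_err = float(np.linalg.norm(np.asarray(drilled_entry, dtype=float)
                                   - np.asarray(planned_entry, dtype=float)))
    # Orientation error: angle between the (normalized) hole axes.
    a = np.asarray(planned_axis, dtype=float)
    b = np.asarray(drilled_axis, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    ang_err = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return pos_err, ang_err

# A drilled hole offset by 1 mm in x and y, with a slightly tilted axis:
pos, ang = hole_errors([0, 0, 0], [0, 0, 1], [1.0, 1.0, 0.0], [0.0, 0.0349, 0.9994])
print(round(pos, 3), round(ang, 1))  # ≈ 1.414 mm, ≈ 2.0°
```

The `np.clip` guards against floating-point values marginally outside [-1, 1] before `arccos`, a common pitfall when the two axes are nearly parallel.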

