Smart Glasses for Neurosurgical Navigation by Augmented Reality

2018 ◽  
Vol 15 (5) ◽  
pp. 551-556 ◽  
Author(s):  
Keisuke Maruyama ◽  
Eiju Watanabe ◽  
Taichi Kin ◽  
Kuniaki Saito ◽  
Atsushi Kumakiri ◽  
...  

Abstract BACKGROUND Wearable devices with heads-up displays, or smart glasses, can overlay images onto the sight of the wearer. This technology had not previously been applied to surgical navigation. OBJECTIVE To assess the applicability and accuracy of smart glasses for augmented reality (AR)-based neurosurgical navigation. METHODS Smart glasses were applied to AR-based neurosurgical navigation. Three-dimensional computer graphics were created from preoperative magnetic resonance images and visualized in see-through smart glasses. Optical markers were attached to the smart glasses and to the patient's head for accurate navigation. Two motion-capture cameras were used for registration and for continuous monitoring of the location of the smart glasses relative to the patient's head. After accuracy was assessed with a phantom, the technique was applied in 2 patients with brain tumors located at the brain surface. RESULTS A stereoscopic view by image overlay through the smart glasses was achieved in the phantom and in both patients. Hands-free neuronavigation inside the operative field was available from any angle and distance. The targeting error in the phantom, measured at 75 points, ranged from 0.2 to 8.1 mm (mean 3.1 ± 1.9 mm, median 2.7 mm). The intraoperative targeting error between the visualized and real locations in the 2 patients (measured at 40 points) ranged from 0.6 to 4.9 mm (mean 2.1 ± 1.1 mm, median 1.8 mm). CONCLUSION Smart glasses enabled AR-based neurosurgical navigation in a hands-free fashion. Stereoscopic computer graphics of the targeted brain tumors, corresponding to the surgical field, were clearly visualized during surgery.
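Targeting error of the kind reported above is the per-point Euclidean distance between the visualized and real locations, summarized as mean ± SD and median. A minimal sketch in Python; the point coordinates below are invented for illustration (the study measured 75 phantom and 40 patient points):

```python
import math

def targeting_errors(visualized, real):
    """Per-point Euclidean distance (mm) between visualized and real locations."""
    return [math.dist(v, r) for v, r in zip(visualized, real)]

def summarize(errors):
    """Mean, population SD, and median of the per-point errors."""
    n = len(errors)
    mean = sum(errors) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    s = sorted(errors)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return mean, sd, median

# Hypothetical measurements (mm), three point pairs only:
vis = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
act = [(0.5, 0.0, 0.0), (10.0, 1.0, 0.0), (0.0, 5.0, 2.0)]
mean, sd, median = summarize(targeting_errors(vis, act))
```

The same summary applies unchanged whether the point pairs come from a phantom or from intraoperative measurements.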

1994 ◽  
Vol 14 (5) ◽  
pp. 749-762 ◽  
Author(s):  
Jean-François Mangin ◽  
Vincent Frouin ◽  
Isabelle Bloch ◽  
Bernard Bendriem ◽  
Jaime Lopez-Krahe

We propose a fully unsupervised methodology dedicated to the fast registration of positron emission tomography (PET) and magnetic resonance images of the brain. First, discrete representations of the surfaces of interest (head or brain surface) are automatically extracted from both images. Then, a shape-independent surface-matching algorithm gives a rigid-body transformation, which allows the transfer of information between both modalities. A three-dimensional (3D) extension of the chamfer-matching principle makes up the core of this surface-matching algorithm. The optimal transformation is inferred from the minimization of a quadratic generalized distance between discrete surfaces, taking into account between-modality differences in the localization of the segmented surfaces. The minimization process is performed efficiently via the precomputation of a 3D distance map. Validation studies using a dedicated brain-shaped phantom have shown that the maximum registration error was of the order of the PET pixel size (2 mm) for the wide variety of tested configurations. The software is routinely used today in a clinical context by the physicians of the Service Hospitalier Frédéric Joliot (>150 registrations performed). The entire registration process requires ∼5 min on a conventional workstation.
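The chamfer-matching idea described above can be illustrated in miniature: precompute a distance map over a voxel grid (here by brute force rather than a chamfer pass), then score candidate rigid transforms by summing squared map values at the transformed surface points. The grid size, surface points, and translation-only search below are all invented simplifications of the paper's method:

```python
import itertools
import math

def distance_map(shape, surface):
    """For every voxel, the distance (voxel units) to the nearest surface point.
    A real implementation would use a chamfer or Euclidean distance transform."""
    return {vox: min(math.dist(vox, p) for p in surface)
            for vox in itertools.product(*(range(s) for s in shape))}

def registration_cost(dmap, points, translation):
    """Quadratic generalized distance of the translated points, read from the map."""
    total = 0.0
    for p in points:
        q = tuple(round(c + t) for c, t in zip(p, translation))
        total += dmap.get(q, 10.0) ** 2   # fixed penalty for leaving the grid
    return total

def best_translation(dmap, points, search=range(-3, 4)):
    """Coarse exhaustive search over integer translations (a stand-in for the
    iterative minimization of the full rigid-body transform)."""
    return min(itertools.product(search, repeat=3),
               key=lambda t: registration_cost(dmap, points, t))

# Toy example: the "PET" surface is the "MR" surface shifted by (2, 1, 0).
mr_surface = [(4, 4, 4), (5, 4, 4), (4, 5, 4), (4, 4, 5)]
pet_surface = [(x - 2, y - 1, z) for x, y, z in mr_surface]
dmap = distance_map((10, 10, 10), mr_surface)
t = best_translation(dmap, pet_surface)
```

Precomputing the map is what makes each candidate transform cheap to evaluate: scoring reduces to one lookup per surface point.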


1990 ◽  
Vol 72 (3) ◽  
pp. 433-440 ◽  
Author(s):  
Xiaoping Hu ◽  
Kim K. Tan ◽  
David N. Levin ◽  
Simranjit Galhotra ◽  
John F. Mullan ◽  
...  

Data from single 10-minute magnetic resonance scans were used to create three-dimensional (3-D) views of the surfaces of the brain and skin of 12 patients. In each case, these views were used to make a preoperative assessment of the relationship of lesions to brain surface structures associated with movement, sensation, hearing, and speech. Interactive software was written so that the user could “slice” through the 3-D computer model and inspect cross-sectional images at any level. A surgery simulation program was written so that surgeons were able to “rehearse” craniotomies on 3-D computer models before performing the actual operations. In each case, the qualitative accuracy of the 3-D views was confirmed by intraoperative inspection of the brain surface and by intraoperative electrophysiological mapping, when available.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7824
Author(s):  
Mónica García-Sevilla ◽  
Rafael Moreta-Martinez ◽  
David García-Mato ◽  
Alicia Pose-Diez-de-la-Lastra ◽  
Rubén Pérez-Mañanes ◽  
...  

Patient-specific instruments (PSIs) have become a valuable tool for osteotomy guidance in complex surgical scenarios such as pelvic tumor resection. They provide accuracy similar to that of surgical navigation systems but are generally more convenient and faster. However, their correct placement can become challenging in some anatomical regions, and it cannot be verified objectively during the intervention. Incorrect installation can result in high deviations from the planned osteotomy, increasing the risk of positive resection margins. In this work, we propose using augmented reality (AR) to guide and verify PSI placement. We designed an experiment to assess the accuracy provided by the system using a smartphone and the HoloLens 2 and compared the results with the conventional freehand method. The results showed significant differences: AR guidance prevented large osteotomy deviations, reducing the maximal deviation from 54.03 mm with freehand placement to less than 5 mm with AR guidance. The experiment was performed on two versions of a plastic three-dimensional (3D) printed phantom, one including a silicone layer to simulate tissue and provide more realism. We also studied how differences in the shape and location of PSIs affect their accuracy, concluding that those with smaller sizes and a homogeneous target surface are more prone to errors. Our study presents promising results that demonstrate AR's potential to overcome the present limitations of PSIs conveniently and effectively.


2021 ◽  
Vol 21 (1) ◽  
pp. 15-29
Author(s):  
Lidiane Pereira ◽  
Wellingston C. Roberti Junior ◽  
Rodrigo L. S. Silva

In Augmented Reality systems, virtual objects are combined with real objects, both three-dimensional, interactively and at run time. In an ideal scenario, the user has the feeling that real and virtual objects coexist in the same space and is unable to tell the two types of objects apart. To achieve this goal, research on rendering techniques has been conducted in recent years. In this paper, we present a Systematic Literature Review aiming to identify the main characteristics of photorealism in Mixed and Augmented Reality systems and to find research opportunities that can be further exploited or optimized. The objective is to verify whether a definition of photorealism exists in Mixed and Augmented Reality. We present a theoretical foundation covering the most widely used methods concerning realism in Computer Graphics. We also aim to identify the most used methods and tools for enabling photorealism in Mixed and Augmented Reality systems.


2021 ◽  
Vol 51 (2) ◽  
pp. E7
Author(s):  
Simon Skyrman ◽  
Marco Lai ◽  
Erik Edström ◽  
Gustav Burström ◽  
Petter Förander ◽  
...  

OBJECTIVE The aim of this study was to evaluate the accuracy (deviation from the target or intended path) and efficacy (insertion time) of an augmented reality surgical navigation (ARSN) system for insertion of biopsy needles and external ventricular drains (EVDs), two common neurosurgical procedures that require high precision. METHODS The hybrid operating room-based ARSN system, comprising a robotic C-arm with intraoperative cone-beam CT (CBCT) and integrated video tracking of the patient and instruments using nonobtrusive adhesive optical markers, was used. A 3D-printed skull phantom with a realistic gelatinous brain model containing air-filled ventricles and 2-mm spherical biopsy targets was obtained. After initial CBCT acquisition for target registration and planning, ARSN was used for 30 cranial biopsies and 10 EVD insertions. Needle positions were verified by CBCT. RESULTS The mean accuracy of the biopsy needle insertions (n = 30) was 0.8 ± 0.43 mm. The median path length was 39 mm (range 16–104 mm) and did not correlate with accuracy (p = 0.15). The median device insertion time was 149 seconds (range 87–233 seconds). The mean accuracy for the EVD insertions (n = 10) was 2.9 ± 0.8 mm at the tip, with a 0.7° ± 0.5° angular deviation from the planned path, and the median insertion time was 188 seconds (range 135–400 seconds). CONCLUSIONS This study demonstrated that ARSN can be used for navigation of percutaneous cranial biopsies and EVDs with high accuracy and efficacy.
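The two error measures reported above, tip deviation and angular deviation from the planned path, are straightforward to compute from entry and tip coordinates. A minimal sketch in Python; the trajectory coordinates are hypothetical, not taken from the study:

```python
import math

def tip_deviation(planned_tip, actual_tip):
    """Euclidean distance (mm) between planned and actual device tip."""
    return math.dist(planned_tip, actual_tip)

def angular_deviation(planned_entry, planned_tip, actual_entry, actual_tip):
    """Angle (degrees) between the planned and the actual trajectory."""
    u = [b - a for a, b in zip(planned_entry, planned_tip)]
    v = [b - a for a, b in zip(actual_entry, actual_tip)]
    cos = (sum(x * y for x, y in zip(u, v))
           / (math.hypot(*u) * math.hypot(*v)))
    # Clamp to guard against rounding just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Hypothetical EVD path: same entry point, tip displaced 2 mm laterally
# at the end of a 100 mm trajectory.
planned = ((0.0, 0.0, 0.0), (0.0, 0.0, 100.0))
actual = ((0.0, 0.0, 0.0), (0.0, 2.0, 100.0))
tip_err = tip_deviation(planned[1], actual[1])
ang_err = angular_deviation(*planned, *actual)
```

Reporting both measures is useful because a small angular error at the entry point can still produce a clinically relevant tip deviation over a long path.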


Proceedings ◽  
2019 ◽  
Vol 31 (1) ◽  
pp. 76
Author(s):  
Marc Codina ◽  
David Castells-Rufas ◽  
Jordi Carrabina ◽  
Iker Salmon ◽  
Néstor Ayuso ◽  
...  

Augmented reality is evolving continuously due to the increasing number of smart glasses being used for different applications (e.g., training, marketing, industry, risk avoidance). In this paper, we present an implementation that uses augmented reality (AR) for emergency situations in smart buildings by means of indoor localization through sub-GHz beacons. It also includes the mapping of emergency elements within a three-dimensional model of the building, together with some example cases.


2019 ◽  
Author(s):  
Taoran Jiang ◽  
Dewang Yu ◽  
Yuqi Wang ◽  
Tao Zan ◽  
Shuyi Wang ◽  
...  

BACKGROUND Vascular localization is crucial for perforator flap transfer. Augmented reality offers a novel method to seamlessly combine real information with virtual objects created by computed tomographic angiography to help the surgeon “see through” the skin and precisely localize the perforator. The head-mounted display augmented reality system HoloLens (Microsoft) could facilitate augmented reality–based perforator localization for a more convenient and safer procedure. OBJECTIVE The aim of this study was to evaluate the precision of the HoloLens-based vascular localization system, as the most important performance indicator of a new localization system. METHODS The precision of the HoloLens-based vascular localization system was tested in a simulated operating room under different conditions with a three-dimensional (3D) printed model. The coordinates of five pairs of points on the vascular map that could be easily identified on the 3D printed model and the virtual model were detected by a probe, and the distance between the corresponding points was calculated as the navigation error. RESULTS The mean errors were determined under different conditions, with a minimum error of 1.35 mm (SD 0.43) and a maximum error of 3.18 mm (SD 1.32), which were within the clinically acceptable range. There were no significant differences in the errors obtained under different visual angles, different light intensities, or different states (static or motion). However, the error was larger when tested with light than when tested without light. CONCLUSIONS This precision evaluation demonstrated that the HoloLens system can precisely localize the perforator and potentially help the surgeon accomplish the operation. The authors recommend using HoloLens-based surgical navigation without light.


10.2196/16852 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e16852
Author(s):  
Taoran Jiang ◽  
Dewang Yu ◽  
Yuqi Wang ◽  
Tao Zan ◽  
Shuyi Wang ◽  
...  

Background Vascular localization is crucial for perforator flap transfer. Augmented reality offers a novel method to seamlessly combine real information with virtual objects created by computed tomographic angiography to help the surgeon “see through” the skin and precisely localize the perforator. The head-mounted display augmented reality system HoloLens (Microsoft) could facilitate augmented reality–based perforator localization for a more convenient and safer procedure. Objective The aim of this study was to evaluate the precision of the HoloLens-based vascular localization system, as the most important performance indicator of a new localization system. Methods The precision of the HoloLens-based vascular localization system was tested in a simulated operating room under different conditions with a three-dimensional (3D) printed model. The coordinates of five pairs of points on the vascular map that could be easily identified on the 3D printed model and the virtual model were detected by a probe, and the distance between the corresponding points was calculated as the navigation error. Results The mean errors were determined under different conditions, with a minimum error of 1.35 mm (SD 0.43) and a maximum error of 3.18 mm (SD 1.32), which were within the clinically acceptable range. There were no significant differences in the errors obtained under different visual angles, different light intensities, or different states (static or motion). However, the error was larger when tested with light than when tested without light. Conclusions This precision evaluation demonstrated that the HoloLens system can precisely localize the perforator and potentially help the surgeon accomplish the operation. The authors recommend using HoloLens-based surgical navigation without light.

