Navigation and visualisation with HoloLens in endovascular aortic repair

2018 ◽  
Vol 3 (3) ◽  
pp. 167-177 ◽  
Author(s):  
Verónica García-Vázquez ◽  
Felix von Haxthausen ◽  
Sonja Jäckle ◽  
Christian Schumann ◽  
Ivo Kuhlemann ◽  
...  

Abstract

Introduction: Endovascular aortic repair (EVAR) is a minimally invasive technique that prevents life-threatening rupture in patients with aortic pathologies by implantation of an endoluminal stent graft. During the endovascular procedure, device navigation is currently performed by fluoroscopy in combination with digital subtraction angiography. This study presents the current iterative process of biomedical engineering within the disruptive interdisciplinary project Nav EVAR, which combines advanced navigation, imaging techniques and augmented reality with the aim of reducing side effects (namely radiation exposure and contrast agent administration) and optimising visualisation during EVAR procedures. This article describes the current prototype developed in this project and the experiments conducted to evaluate it.

Methods: The current approach of the Nav EVAR project is to guide EVAR interventions in real time with an electromagnetic tracking system, after attaching a sensor to the catheter tip, and to display this information on Microsoft HoloLens glasses. This augmented reality technology enables the visualisation of virtual objects superimposed on the real environment. These virtual objects include three-dimensional (3D) objects (namely 3D models of the skin and vascular structures) and two-dimensional (2D) objects [namely orthogonal views of computed tomography (CT) angiograms, 2D images of 3D vascular models, and 2D images of a new virtual angioscopy whose appearance of the vessel wall follows that shown in ex vivo and in vivo angioscopies]. Specific external markers were designed to be used as landmarks in the registration process that maps the tracking data and radiological data into a common space. In addition, the use of real-time 3D ultrasound (US) is also under evaluation in the Nav EVAR project for guiding endovascular tools and updating navigation with intraoperative imaging. US volumes are streamed from the US system to HoloLens and visualised at a certain distance from the probe by tracking augmented reality markers. A human torso model that includes a 3D-printed patient-specific aortic model was built to provide a realistic test environment for evaluating the technical components of the Nav EVAR project. The solutions presented in this study were tested using a US training model and the aortic-aneurysm phantom.

Results: During navigation of the catheter tip in the US training model, the 3D models of the phantom surface and vessels were visualised on HoloLens. In addition, a virtual angioscopy was built from a CT scan of the aortic-aneurysm phantom. The external markers designed for this study were visible in the CT scan, and the electromagnetically tracked pointer fitted into each marker hole. US volumes of the US training model were sent from the US system to HoloLens for display, with a latency of 259±86 ms (mean±standard deviation).

Conclusion: The Nav EVAR project tackles the problem of radiation exposure and contrast agent administration during EVAR interventions by using a multidisciplinary approach to guide the endovascular tools. Its current state has several limitations, such as the rigid alignment between preoperative data and the simulated patient. Nevertheless, the techniques shown in this study, in combination with fibre Bragg gratings and optical coherence tomography, are a promising approach to overcoming the problems of EVAR interventions.
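The landmark-based registration mentioned above maps electromagnetic tracking coordinates into the CT coordinate system. The abstract does not name the algorithm; a common choice for matched landmark pairs is a rigid point-based (Kabsch/SVD) fit, sketched here as an assumption rather than the project's actual implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Rigid (rotation + translation) fit mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark positions,
    e.g. EM-tracked pointer tips in the marker holes (src) and the
    same markers located in the CT scan (dst).
    Returns R (3x3) and t (3,) such that dst ≈ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With R and t estimated once from the external markers, every subsequent EM sensor reading p can be mapped into CT space as R @ p + t.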

Author(s):  
Vivek Parashar

Augmented Reality is a technology that integrates 3D virtual objects into our physical environment in real time. It brings the virtual world closer to the physical world and gives us the ability to interact with our surroundings. This paper shows how Augmented Reality can transform the education industry. We have used Augmented Reality to simplify the learning process and allow people to interact with 3D models by means of gestures. This advance in technology is changing the way we interact with our surroundings: rather than watching videos or looking at a static diagram in a textbook, Augmented Reality enables you to do more. Rather than placing someone in an animated world, the goal of augmented reality is to blend virtual objects into the real world.


2019 ◽  
Vol 5 (1) ◽  
pp. 289-291 ◽  
Author(s):  
Felix von Haxthausen ◽  
Sonja Jäckle ◽  
Jan Strehlow ◽  
Floris Ernst ◽  
Verónica García-Vázquez

Abstract

Fluoroscopy and digital subtraction angiography provide guidance in endovascular aortic repair (EVAR) but introduce radiation exposure and require the administration of contrast agent. To overcome these disadvantages, previous studies proposed to display the pose of an electromagnetically (EM) tracked catheter tip within a three-dimensional virtual aorta on augmented reality (AR) glasses. For further guidance, we propose to create virtual angioscopy images based on the catheter tip pose within the aorta and to display them on HoloLens. The aorta was segmented from the computed tomography (CT) data using MeVisLab software. A landmark-based registration allowed the calculation of the pose of the EM sensor in the CT coordinate system. The sensor pose was sent to MeVisLab running on a computer, and a virtual angioscopy image was created at runtime based on the segmented aorta. When requested by HoloLens, the last encoded image was sent from MeVisLab to the AR glasses via Wi-Fi using a remote procedure call (gRPC), and then decoded and displayed on HoloLens. For evaluation purposes, the latency of transmitting and displaying the images was measured using two different lossy compression formats (namely JPEG and DXT1). A mean latency of 82 ms was measured for the JPEG format, while using the DXT1 format reduced the mean latency by 87%. This study proved the feasibility of creating pose-dependent virtual angioscopy images and displaying them on HoloLens. Additionally, the results showed that the DXT1 format outperformed the JPEG format regarding latency. The virtual angioscopy may add valuable additional information for guidance in radiation-sparing EVAR procedure approaches.
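A detail worth noting in the transport described above is that HoloLens pulls the *last encoded* image rather than queueing every frame, so a slow network drops stale frames instead of accumulating latency. The paper's actual gRPC/JPEG/DXT1 pipeline is not reproduced here; this minimal sketch, with hypothetical names, only illustrates the latest-frame-wins buffer semantics:

```python
import threading

class LatestFrameBuffer:
    """Keeps only the most recently published frame; older frames are dropped.

    A renderer thread publishes each newly encoded image; the request
    handler serving the AR glasses always pulls the newest one, so no
    backlog builds up when the consumer is slower than the producer.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def publish(self, frame):
        with self._lock:
            self._frame = frame      # overwrite: stale frames are discarded

    def latest(self):
        with self._lock:
            return self._frame

buf = LatestFrameBuffer()
for i in range(4):
    buf.publish(f"encoded-frame-{i}")  # frames 0..2 are overwritten before any pull
```

A queue-per-frame design would instead deliver every image in order, at the cost of latency growing with the backlog.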


Author(s):  
S.A.D.Nimesha Nishadi Ashinshanie ◽  
Adhil Hazari ◽  
H. N. Rupasinghe ◽  
Dulmini P. Hettiarchchi ◽  
D. I. De Silva

2018 ◽  
Vol 23 (6) ◽  
pp. 99-113
Author(s):  
Sha LIU ◽  
Feng YANG ◽  
Shunxi WANG ◽  
Yu CHEN

Author(s):  
Yulia Fatma ◽  
Armen Salim ◽  
Regiolina Hayami

With ongoing development, applications can be used as a medium for learning. Augmented Reality is a technology that combines two-dimensional and three-dimensional virtual objects with a real three-dimensional environment, projecting the virtual objects in real time. In introducing the Solar System, students are invited to get to know the planets and are directly encouraged to imagine conditions in the Solar System. Textbook explanations of the planets' forms and of how the planets revolve and rotate are considered insufficient because they only display objects in 2D. In addition, students cannot practise directly in arranging the layout of the planets in the Solar System. By applying Augmented Reality technology, the delivery of learning information can be clarified, because these applications combine the real world and the virtual world. The application not only displays the material but also shows images of the planets as animated 3D objects with audio.


2021 ◽  
Author(s):  
Madalyn Massey

Structure-from-Motion (SfM) is a photogrammetry process that creates 3D models from overlapping 2D images. This protocol focuses on its application to geological and geophysical samples, including fossils, hand samples and rocks. It is a recommended practice intended for later publication on the United States Geological Survey website.
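The core geometric step behind SfM is triangulation: once camera poses are estimated, a 3D point is recovered from its 2D projections in two or more overlapping images. As an illustration only (not the protocol's actual software, which would typically be a dedicated photogrammetry package), here is the standard linear (DLT) two-view triangulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    coordinates of the same feature in each image.
    Each view contributes two rows of the homogeneous system A X = 0,
    derived from u*(P[2]@X) - P[0]@X = 0 and v*(P[2]@X) - P[1]@X = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]            # dehomogenise
```

Repeating this over every matched feature across the image set yields the dense cloud of surface points from which the 3D model of the sample is meshed.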


1999 ◽  
Author(s):  
Dan Zetu ◽  
Pat Banerjee ◽  
Ali Akgunduz

Abstract The fast construction of a Virtual Factory model without using a CAD package can be made possible by using computer vision techniques. In order to create a realistic Virtual Manufacturing environment, especially when such a model has to be created in correlation to an existing facility, a reliable algorithm that extracts 3D models from camera images is needed, and this requires exact knowledge of the camera location when capturing images. In this paper, we describe an approach for depth recovery from 2D images based on tracking a camera within the environment. We also explore the extension of our telemetry-based algorithm to remote facility management, by tracking and synchronizing human motion on the shop floor with motion of an avatar in a Virtual Environment representing the same shop floor.
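When the camera's location is known exactly from tracking, two images taken from positions a known distance apart act like a calibrated stereo pair, and depth follows from the familiar disparity relation. This is a simplified illustrative assumption (a sideways camera translation), not the authors' telemetry-based algorithm:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen from two tracked camera positions.

    focal_px: focal length in pixels; baseline_m: distance between the
    two camera positions (known here from camera tracking);
    disparity_px: horizontal shift of the point between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("point must shift between views to recover depth")
    return focal_px * baseline_m / disparity_px

# A feature shifting 16 px between images taken 0.1 m apart,
# with an 800 px focal length, lies 5 m from the camera.
depth = depth_from_disparity(800.0, 0.1, 16.0)
```

The relation also shows why tracking accuracy matters: an error in the baseline scales the recovered depth proportionally.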


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency of virtual objects incorrectly occluding real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of the environment around them: a user may not realise a real-world object is present because a virtual object incorrectly occludes it, and incorrect occlusions distort the perceived depth or distance of objects. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. This method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects, providing a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects with potential to be incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
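With real and virtual geometry expressed in one coordinate system, occlusion reduces to a per-pixel depth test of the virtual render against the captured RGB-D depth map. The following numpy sketch illustrates that test with hypothetical buffers; it is a minimal stand-in, not the authors' implementation:

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Overlay virtual pixels only where they are nearer than the real scene.

    real_rgb/virt_rgb: (H, W, 3) colour images; real_depth/virt_depth:
    (H, W) per-pixel distances in the shared coordinate frame, e.g. from
    an RGB-D sensor and from the virtual renderer's depth buffer.
    """
    visible = virt_depth < real_depth        # virtual surface is in front
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]         # occluded virtual pixels are dropped
    return out

# A virtual object 2 m away is hidden behind a real object at 1 m,
# but drawn in front of real surfaces at 3 m.
real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
real_depth = np.array([[1.0, 3.0], [3.0, 3.0]])
virt_depth = np.full((2, 2), 2.0)
frame = composite(real_rgb, real_depth, virt_rgb, virt_depth)
```

Without the depth test, the virtual pixel would always win, which is exactly the "distant virtual object obscures nearby real object" error the paper addresses.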

