Handling occlusions in video-based augmented reality using depth information

2009 ◽  
Vol 21 (5) ◽  
pp. 509-521 ◽  
Author(s):  
Jiejie Zhu ◽  
Zhigeng Pan ◽  
Chao Sun ◽  
Wenzhi Chen


10.29007/72d4 ◽ 
2018 ◽  
Author(s):  
He Liu ◽  
Edouard Auvinet ◽  
Joshua Giles ◽  
Ferdinando Rodriguez Y Baena

Computer Aided Surgery (CAS) is helpful, but it clutters an already overcrowded operating theatre and tends to disrupt the workflow of conventional surgery. To provide seamless computer assistance with improved immersion and a more natural surgical workflow, we propose an augmented reality-based navigation system for CAS. Here, we focus on the proximal femoral anatomy, which we register to a plan by processing depth information of the surgical site captured by a commercial depth camera. Intra-operative three-dimensional surgical guidance is then provided to the surgeon through a commercial augmented reality headset to drill a pilot hole in the femoral head, so that the user can perform the operation without additional physical guides. The user can interact intuitively with the system through simple gestures and voice commands, resulting in a more natural workflow. To assess the surgical accuracy of the proposed setup, 30 pilot-hole drilling experiments were performed on femur phantoms. The position and orientation of the drilled guide holes were measured and compared with the preoperative plan; the mean errors were within 2 mm and 2°, results in line with today's commercial computer-assisted orthopaedic systems.
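
The reported accuracy metrics can be reproduced with a straightforward comparison between the planned and measured drill axes. The sketch below is our own illustration (function name and the entry-point/axis representation are assumptions, not the authors' evaluation code):

```python
import numpy as np

def hole_errors(planned_entry, planned_dir, measured_entry, measured_dir):
    """Position error (mm) between hole entry points and angular error
    (degrees) between the planned and drilled hole axes."""
    planned_dir = np.asarray(planned_dir, float)
    measured_dir = np.asarray(measured_dir, float)
    planned_dir /= np.linalg.norm(planned_dir)
    measured_dir /= np.linalg.norm(measured_dir)
    pos_err = np.linalg.norm(np.asarray(measured_entry, float)
                             - np.asarray(planned_entry, float))
    # clip guards against arccos domain errors from rounding
    cos_a = np.clip(np.dot(planned_dir, measured_dir), -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cos_a))
    return pos_err, ang_err
```

A planned hole at the origin along z, drilled 1 mm away and tilted 2°, would thus report exactly the paper's error bounds.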


Author(s):  
Yue Wang ◽  
Shusheng Zhang ◽  
Xiaoliang Bai

To improve the robustness and applicability of 3D tracking and registration for an augmented reality (AR)-aided mechanical assembly system, a 3D registration and tracking method based on point clouds and visual features is proposed. First, the reference model point cloud is used to define the absolute tracking coordinate system, which determines the locating datum of the virtual assembly guidance information. Then, by adding visual feature matching to the iterative closest point (ICP) registration process, the robustness of tracking and registration is improved. To obtain a sufficient number of visual feature matching points in this process, a visual feature matching strategy based on orientation vector consistency is proposed. Finally, loop closure detection and global pose optimization from key frames are added to the tracking registration process. The experimental results show that the proposed method has good real-time performance and accuracy, with a running speed of 30 frames per second. Moreover, it remains robust when the camera moves fast and the depth information is inaccurate, and its overall performance is better than that of the KinectFusion method.
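
The core idea, closest-point correspondences augmented with known visual feature matches before a rigid least-squares fit, can be sketched as a single ICP iteration. This is a simplified illustration with hypothetical names; the paper's orientation-vector consistency strategy, loop closure detection, and pose optimization are omitted:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # repair an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_step(src, dst, feat_src=None, feat_dst=None):
    """One ICP iteration: nearest-neighbour pairs from the point clouds,
    optionally augmented with visual feature matches feat_src[i] <-> feat_dst[i]."""
    d = np.linalg.norm(src[:, None] - dst[None], axis=2)
    a, b = src, dst[d.argmin(axis=1)]
    if feat_src is not None:               # stack feature correspondences in
        a = np.vstack([a, feat_src])
        b = np.vstack([b, feat_dst])
    return rigid_fit(a, b)
```

In the full method the feature correspondences keep the fit anchored when the depth data are noisy, which is what the abstract credits for the robustness over KinectFusion.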


2021 ◽  
Vol 9 ◽  
Author(s):  
Yunpeng Liu ◽  
Xingpeng Yan ◽  
Xinlei Liu ◽  
Xi Wang ◽  
Tao Jing ◽  
...  

In this paper, an optical field coding method for the fusion of real and virtual scenes is proposed to implement an augmented reality (AR)-based holographic stereogram. The occlusion relationship between the real and virtual scenes is analyzed, and a fusion strategy based on instance segmentation and depth determination is proposed. A real three-dimensional (3D) scene sampling system is built, and the foreground contour of the sampled perspective image is extracted by the Mask R-CNN instance segmentation algorithm. The virtual 3D scene is rendered by a computer to obtain the virtual sampled images as well as their depth maps. According to the occlusion relation of the fused scenes, the pseudo-depth map of the real scene is derived, and the fusion coding of 3D real and virtual scene information is implemented by depth comparison. The optical experiment indicates that the AR-based holographic stereogram fabricated by our coding method can reconstruct real and virtual fused 3D scenes with correct occlusion and depth cues under full parallax.
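
The depth-comparison step at the heart of the fusion coding, keeping whichever scene is nearer at each pixel once both depth maps exist, can be illustrated minimally (names are hypothetical; the actual method operates on sampled perspective images and a derived pseudo-depth map):

```python
import numpy as np

def fuse_by_depth(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel fusion of a real and a virtual view: the scene closer to
    the camera wins, so mutual occlusion is resolved correctly."""
    front = real_depth <= virt_depth          # True where real occludes virtual
    return np.where(front[..., None], real_rgb, virt_rgb)
```

Deriving a plausible pseudo-depth for the segmented real foreground is what makes this simple comparison sufficient for correct occlusion cues.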


Author(s):  
Anna L. Roethe ◽  
Judith Rösler ◽  
Martin Misch ◽  
Peter Vajkoczy ◽  
Thomas Picht

Background: Augmented reality (AR) has the potential to support complex neurosurgical interventions by seamlessly including visual information. This study examines intraoperative visualization parameters and the clinical impact of AR in brain tumor surgery.

Methods: Fifty-five intracranial lesions, operated on either with an AR-navigated microscope (n = 39) or conventional neuronavigation (n = 16) after randomization, were included prospectively. Surgical resection time, duration/type/mode of AR, displayed objects (n, type), pointer-based navigation checks (n), usability of control, quality indicators, and overall surgical usefulness of AR were assessed.

Results: The AR display was used during 44.4% of resection time. The predominant AR type was navigation view (75.7%), followed by target volumes (20.1%). The predominant AR mode was picture-in-picture (PiP) (72.5%), followed by overlay display (23.3%). In 43.6% of cases, the view of important anatomical structures was partially or entirely blocked by AR information. A total of 7.7% of cases used MRI navigation only, 30.8% used one, 23.1% used two, and 38.5% used three or more object segmentations in AR navigation. 66.7% of surgeons found AR visualization helpful in the individual surgical case. AR depth information and accuracy were rated acceptable (median 3.0 vs. median 5.0 in conventional neuronavigation). The mean utilization of the navigation pointer was 2.6×/resection hour (AR) vs. 9.7×/resection hour (neuronavigation); navigation effort was significantly reduced with AR (P < 0.001).

Conclusions: The main benefit of HUD-based AR visualization in brain tumor surgery is the integrated continuous display allowing for pointer-less navigation. Navigation view (PiP) provides the highest usability while blocking the operative field less frequently. Visualization quality will benefit from improvements in registration accuracy and depth impression. German clinical trials registration number: DRKS00016955.


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Hung-Chun Lin ◽  
Yung-Hsun Wu

Augmented reality (AR) provides extra information to the user by overlaying virtual images onto the real environment. There are many ways to achieve AR; the holographic display is one promising approach thanks to its faithful 3D reproduction. A holographic display can present virtual 3D objects with depth information, so an AR device showing a real 3D scene can be realized by combining it with a holographic display. However, it is difficult to build a compact holographic display with a wide viewing angle and sufficient resolution, which limits the application of holographic displays to AR. In this paper, we discuss the requirements for holographic displays based on the development of LCDs, including resolution (ppi), viewing angle, image quality, and backlight. We hope this article provides a preliminary direction for the LCD industry in developing AR technology using holographic displays.
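
The tension between viewing angle and resolution follows from the standard grating equation for pixelated displays, sin θ = λ/(2p), where p is the pixel pitch. The sketch below uses that textbook relation, not figures from the article:

```python
import math

def required_pitch_um(wavelength_nm, half_angle_deg):
    """Pixel pitch (micrometres) needed so a pixelated holographic display
    diffracts light of the given wavelength over the given half viewing
    angle: p = lambda / (2 * sin(theta))."""
    return wavelength_nm * 1e-3 / (2 * math.sin(math.radians(half_angle_deg)))

def pitch_to_ppi(pitch_um):
    """Convert pixel pitch in micrometres to pixels per inch."""
    return 25400.0 / pitch_um
```

For green light (532 nm) and a ±30° viewing zone, the required pitch is 0.532 µm, i.e. tens of thousands of ppi, far beyond current LCD panels, which is exactly the difficulty the abstract highlights.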


2020 ◽  
Vol 8 (5) ◽  
pp. 4149-4155

Recently, augmented reality (AR) has been growing rapidly, and much attention has focused on interaction techniques between users and virtual objects, such as the user directly manipulating virtual objects with his/her bare hands. The authors therefore believe that more accurate overlay techniques will be required for more seamless interaction. On the other hand, because AR superimposes the 3-dimensional (3D) model onto the image of the real space after the fact, the model is always displayed in front of the hand, which produces an unnatural scene in some cases (the occlusion problem). In this study, the system considers the object-context relations between the user's hand and the virtual object by acquiring depth information of the user's finger using a depth sensor. In addition, the system defines the color range of the user's hand by performing principal component analysis (PCA) on the color information near the finger position obtained from the depth sensor and setting a threshold. The system then extracts the hand area using this color-range definition, and fingers are distinguished using the Canny method. In this way, the system realizes hidden surface removal along the area of the user's hand. In the evaluation experiment, it is confirmed that the hidden surface removal in this study makes it possible to distinguish finger boundaries and to clarify and process finger contours.
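
The color-range definition, PCA over RGB samples near the detected finger plus a threshold, can be sketched as follows. This is a simplified illustration; the threshold k and function names are our own assumptions, not the paper's parameters:

```python
import numpy as np

def hand_color_range(samples, k=2.5):
    """Fit a PCA-aligned box to RGB samples taken near the finger position.
    Returns a predicate classifying a pixel as hand-coloured when its score
    along each principal axis lies within k standard deviations of the mean."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # principal axes of the colors
    limits = k * np.sqrt(np.maximum(eigvals, 0.0))  # per-axis thresholds

    def is_hand(pixel):
        score = eigvecs.T @ (np.asarray(pixel, float) - mean)
        return bool(np.all(np.abs(score) <= limits + 1e-12))
    return is_hand
```

Pixels passing the predicate form the hand mask used for hidden surface removal; the Canny edge step then separates adjacent fingers inside that mask.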


2021 ◽  
Author(s):  
Markus Miller ◽  
Alfred Nischwitz ◽  
Rüdiger Westermann

In augmented reality applications, consistent illumination between virtual and real objects is important for creating an immersive user experience. Consistent illumination can be achieved by parameterising the virtual illumination model so that it is consistent with real-world lighting conditions. In this study, we developed a method to reconstruct the general light direction from red-green-blue (RGB) images of real-world scenes using a modified VGG-16 neural network. We reconstructed the general light direction as azimuth and elevation angles. To avoid inaccurate results caused by coordinate uncertainty at steep elevation angles, we further introduced stereographically projected coordinates. Unlike recent deep-learning-based approaches for reconstructing the light source direction, our approach does not require depth information and thus does not rely on special red-green-blue-depth (RGB-D) images as input.
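
The motivation for stereographic coordinates is that azimuth becomes ill-defined as elevation approaches 90°: very different azimuths describe nearly the same direction. Projecting the direction vector stereographically removes that singularity. A minimal sketch of one such projection follows (the paper's exact formulation may differ):

```python
import math

def stereographic(azimuth_deg, elevation_deg):
    """Map a light direction given as (azimuth, elevation) to planar
    coordinates via stereographic projection from the lower pole, so
    directions near the zenith stay well-behaved."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    # unit direction vector on the sphere
    x = math.cos(el) * math.cos(az)
    y = math.cos(el) * math.sin(az)
    z = math.sin(el)
    # project from the pole z = -1 onto the plane z = 0
    return x / (1 + z), y / (1 + z)
```

At the zenith every azimuth collapses to the same planar point, so a regression target in these coordinates no longer jumps when the light is nearly overhead.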


ASHA Leader ◽  
2013 ◽  
Vol 18 (9) ◽  
pp. 14-14 ◽  
Keyword(s):  

Amp Up Your Treatment With Augmented Reality

