Structure-Preserving Stereoscopic View Synthesis With Multi-Scale Adversarial Correlation Matching

Author(s):  
Yu Zhang ◽  
Dongqing Zou ◽  
Jimmy S. Ren ◽  
Zhe Jiang ◽  
Xiaohao Chen
Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality. The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource constrained hardware.
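To make the densification step concrete, the following is a minimal NumPy/SciPy sketch of the general idea: sparse disparity samples (e.g., one per encoder macroblock) are interpolated to a dense map, smoothed, and converted to metric depth via the pinhole stereo relation. The function name densify_depth, the interpolation scheme, the median filter, and the camera parameters are illustrative assumptions, not the shipped Passthrough+ implementation.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import median_filter

def densify_depth(points, disparities, shape, focal_px, baseline_m):
    """points: (N, 2) array of (row, col) sample locations,
    disparities: (N,) matched horizontal offsets in pixels,
    shape: (H, W) of the output depth map."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    # Interpolate the sparse samples to every pixel; fall back to
    # nearest-neighbor values outside the convex hull of the samples.
    dense = griddata(points, disparities, (rows, cols), method='linear')
    nearest = griddata(points, disparities, (rows, cols), method='nearest')
    dense = np.where(np.isnan(dense), nearest, dense)
    # Simple cleanup stand-in; the paper describes a more elaborate filter.
    dense = median_filter(dense, size=5)
    # Pinhole stereo: depth = focal_length * baseline / disparity.
    return focal_px * baseline_m / np.clip(dense, 1e-3, None)
```

A real pipeline would run this per frame and feed the resulting depth proxy into the stereoscopic texturing and spatio-temporal up-sampling stages described above.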


2017 ◽  
Vol 46 (5) ◽  
pp. 510003
Author(s):  
侯榜焕 HOU Bang-huan ◽  
张耿 ZHANG Geng ◽  
王飞 WANG Fei ◽  
于为中 YU Wei-zhong ◽  
姚敏立 YAO Min-li ◽  
...  

Sensors ◽  
2015 ◽  
Vol 15 (3) ◽  
pp. 5747-5762 ◽  
Author(s):  
Jinbeum Jang ◽  
Yoonjong Yoo ◽  
Jongheon Kim ◽  
Joonki Paik

2020 ◽  
Author(s):  
Jiajia Ni ◽  
Jianhuang Wu ◽  
Jing Tong ◽  
Mingqiang Wei ◽  
Zhengming Chen

Abstract Background: Vessel segmentation is a fundamental yet unsolved problem in medical image analysis, owing to the complicated geometrical and topological structures of human vessels. Existing rule-based and conventional learning-based techniques struggle to locate tiny vessel structures and to perceive their global spatial structure. Methods: We propose a Simultaneous Self- and Channel-attention Neural Network (SSCA-Net) to solve the multi-scale structure-preserving vessel segmentation (MSVS) problem. SSCA-Net differs from conventional neural networks in how it models global image context: a combined self- and channel-attention (SCA) mechanism gives it greater power to capture global semantic information, yielding high performance when segmenting vessels with multi-scale structures. Specifically, the SCA module is designed and embedded in the feature-decoding stage to learn SCA features at different layers; self-attention captures the positional information of the features themselves, while channel attention guides the shallow features toward global feature information. Results: Three blood-vessel datasets were used to train and validate the models. SSCA-Net achieves a Dice score of 96.21% and a Mean IoU of 92.70% on the intracranial vessel dataset, and 98.20% AUC, 83.52% sensitivity, and 96.14% accuracy on the retinal vessel dataset. The trained model also segments leg arteries, with a Dice score of 97.21% and a Mean IoU of 94.42%. Conclusions: The results demonstrate clear improvements of SSCA-Net over the state of the art in preserving vessel details and global spatial structures.
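As an illustration of how a decoder-side self- and channel-attention block can be combined, here is a minimal PyTorch sketch assuming a squeeze-and-excitation-style channel branch followed by a non-local-style spatial self-attention branch with residual fusion. The layer sizes, reduction factor, and fusion scheme are assumptions for illustration, not the exact SSCA-Net design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCABlock(nn.Module):
    """Hypothetical combined self- and channel-attention block."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze (global pooling) + excitation (bottleneck MLP).
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)
        # Spatial self-attention: 1x1 convs produce query/key/value maps.
        self.q = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        # Channel attention: reweight channels by a global descriptor.
        s = x.mean(dim=(2, 3))                              # (b, c)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        x_ca = x * s.view(b, c, 1, 1)
        # Spatial self-attention over all positions.
        q = self.q(x_ca).flatten(2).transpose(1, 2)         # (b, hw, c')
        k = self.k(x_ca).flatten(2)                         # (b, c', hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw) affinities
        v = self.v(x_ca).flatten(2).transpose(1, 2)         # (b, hw, c)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                         # residual fusion

# Usage: feats = torch.randn(2, 64, 32, 32); y = SCABlock(64)(feats)
```

In a decoder, such a block would typically be applied after each up-sampling stage so that shallow features are modulated by global context before fusion with skip connections.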

