2013 ◽  
Vol 284-287 ◽  
pp. 3230-3234
Author(s):  
Thomas Schumann ◽  
Herbert Krauß ◽  
Yeong Kang Lai ◽  
Yu Fan Lai

With advances in technology, 3D video has become possible and attractive. However, many pre-recorded 2D videos/images still need to be converted to 3D. This paper therefore presents a high-quality view synthesis algorithm and architecture for 2D-to-3D video conversion. During view synthesis, the monocular depth information is combined with the intermediate view to synthesize the left-eye and right-eye views. The proposed view synthesis algorithm consists of two parts: 3D image warping and inpainting (hole filling). 3D image warping transforms the 2D camera image plane to a 3D coordinate plane. However, the integer grid points of the reference view are warped to irregularly spaced points in the virtual view, resulting in occlusion problems. Inpainting is therefore needed to repair the virtual images. The proposed algorithm shows an improved PSNR gain of 0.2 to 1.5 dB. We adopt hardware/software co-design to realize the proposed view synthesis algorithm: image inpainting is implemented on an FPGA device and the remaining algorithm in software.
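As a rough illustration of the warping-plus-inpainting pipeline this abstract describes (a minimal sketch, not the paper's actual algorithm; the disparity model, z-buffer occlusion handling, and left-propagation hole filling are simplifying assumptions):

```python
import numpy as np

def warp_view(image, depth, shift_scale=0.05):
    """Forward-warp a 2D image into a virtual view using per-pixel depth.

    Each pixel shifts horizontally by a disparity proportional to its depth;
    target positions may collide or stay empty (holes). shift_scale is a
    hypothetical baseline/focal-length factor.
    """
    h, w = depth.shape
    virtual = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    zbuf = np.full((h, w), np.inf)  # nearer pixels win on collisions
    for y in range(h):
        for x in range(w):
            xv = x + int(round(shift_scale * depth[y, x]))
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                virtual[y, xv] = image[y, x]
                zbuf[y, xv] = depth[y, x]
                filled[y, xv] = True
    return virtual, filled

def inpaint_holes(virtual, filled):
    """Fill holes by propagating the nearest valid pixel from the left,
    a crude stand-in for the paper's inpainting stage."""
    out = virtual.copy()
    h, w = filled.shape
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last
    return out
```

The inner per-pixel loops are the part the paper offloads to the FPGA; the software side would prepare the depth map and consume the repaired view.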


2014 ◽  
Vol 33 (8) ◽  
pp. 145-156 ◽  
Author(s):  
José M. Noguera ◽  
Antonio J. Rueda ◽  
Miguel A. Espada ◽  
Máximo Martín

2020 ◽  
Author(s):  
Stefan Zellmann

We propose an image warping-based remote rendering technique for volumes that decouples the rendering and display phases. Our work builds on prior work that samples the volume on the server using ray casting and reconstructs a z-value based on a heuristic. The color and depth buffers are then sent to the client, which reuses this depth image as a stand-in for subsequent frames by warping it according to the current camera position until new data is received from the server. We augment that method by implementing the client renderer using ray tracing. Representing the pixel contributions as spheres allows us to vary their footprint with the distance to the viewer, which we find gives better results than point-based rasterization when applied to volumetric data sets.
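The two steps behind the sphere-based warping can be sketched as follows (a hedged illustration under assumed conventions, not the paper's renderer: depth is taken in [0, 1], `inv_proj` is the inverse view-projection matrix of the frame that produced the depth image, and `pixel_angle` is a hypothetical per-pixel angular-size parameter):

```python
import numpy as np

def unproject(depth_img, inv_proj):
    """Reproject a depth image to world-space points, one per pixel."""
    h, w = depth_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # pixel centers mapped to normalized device coordinates in [-1, 1]
    ndc = np.stack([
        2.0 * (xs + 0.5) / w - 1.0,
        1.0 - 2.0 * (ys + 0.5) / h,
        2.0 * depth_img - 1.0,
        np.ones_like(depth_img),
    ], axis=-1)
    pts = ndc @ inv_proj.T
    return pts[..., :3] / pts[..., 3:4]  # perspective divide

def sphere_radii(points, eye, pixel_angle=1e-3):
    """Sphere radius per warped pixel, growing linearly with distance to
    the viewer so each splat keeps a roughly one-pixel screen footprint."""
    dist = np.linalg.norm(points - eye, axis=-1)
    return dist * pixel_angle
```

The client would then ray-trace these spheres from the current camera; the distance-scaled radii are what distinguishes this from fixed-size point rasterization.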


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Moses Q. Wilks ◽  
Hillary Protas ◽  
Mirwais Wardak ◽  
Vladimir Kepe ◽  
Gary W. Small ◽  
...  

We evaluate an automated approach to the cortical surface mapping (CSM) method of VOI analysis in PET. Although CSM has previously been shown to be successful, the process can be long and tedious. Here, we present an approach that removes these difficulties through the use of 3D image warping to a common space. We test this automated method using studies of FDDNP PET in Alzheimer's disease (AD) and mild cognitive impairment (MCI). For each subject, VOIs were created through CSM to extract regional PET data. After warping to the common space, a single set of CSM-generated VOIs was used to extract PET data from all subjects. The data extracted using a single set of VOIs outperformed the manual approach in classifying AD patients against MCI patients and controls. This suggests that the automated method can remove variance in measurements of PET data and can facilitate accurate, high-throughput image analysis.
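The extraction step after warping reduces to applying one fixed set of VOI masks to every subject's volume. A minimal sketch under assumed data structures (the function and argument names here are illustrative, not the study's software):

```python
import numpy as np

def extract_regional_means(pet_volumes, voi_masks):
    """Apply one set of CSM-generated VOI masks (defined in the common
    space) to each subject's warped PET volume and return per-region
    mean uptake values.

    pet_volumes: dict mapping subject id -> 3D ndarray (already warped)
    voi_masks:   dict mapping region name -> boolean 3D ndarray
    """
    table = {}
    for subject, vol in pet_volumes.items():
        table[subject] = {
            region: float(vol[mask].mean())
            for region, mask in voi_masks.items()
        }
    return table
```

Because the masks are defined once in the common space, no per-subject VOI delineation is needed, which is where the manual variance is removed.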

