Image Warping
Recently Published Documents

TOTAL DOCUMENTS: 249 (FIVE YEARS: 6)
H-INDEX: 22 (FIVE YEARS: 0)

Author(s): Juan Zheng Wu, Rigoberto Juarez-Salazar, Victor Hugo Diaz-Ramirez

Author(s): Wenqing Chu, Wei-Chih Hung, Yi-Hsuan Tsai, Yu-Ting Chang, Yijun Li, ...

Abstract: A caricature is an artistic drawing created to abstract or exaggerate the facial features of a person. Rendering visually pleasing caricatures is a difficult task that requires professional skills, so a method that automatically generates such drawings is of great interest. To handle large shape changes, we propose an algorithm based on a semantic shape transform that produces diverse and plausible shape exaggerations. Specifically, we predict pixel-wise semantic correspondences and perform image warping on the input photo to achieve dense shape transformation. We show that the proposed framework renders visually pleasing shape exaggerations while maintaining facial structure. In addition, our model allows users to manipulate the shape via the semantic map. We demonstrate the effectiveness of our approach on a large photograph-caricature benchmark dataset with comparisons to state-of-the-art methods.
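The dense warping step described above can be illustrated with a minimal backward-warping sketch in NumPy. The function name and the layout of the correspondence map are illustrative assumptions, not the authors' code: every output pixel stores the (row, col) source location in the input photo, and bilinear interpolation produces the warped result.

```python
import numpy as np

def warp_image(photo, corr):
    """Backward-warp `photo` with a dense per-pixel correspondence map.

    photo: float array of shape (H, W, C).
    corr:  float array of shape (H, W, 2); corr[y, x] = (sy, sx) gives,
           for each output pixel, the (row, col) source location to
           sample from, bilinearly interpolated.
    """
    H, W = photo.shape[:2]
    # Clamp sample coordinates to the image bounds.
    sy = np.clip(corr[..., 0], 0, H - 1)
    sx = np.clip(corr[..., 1], 0, W - 1)
    # Integer corners surrounding each sample point.
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    # Fractional weights, broadcast over the channel axis.
    wy = (sy - y0)[..., None]
    wx = (sx - x0)[..., None]
    # Bilinear blend of the four neighboring pixels.
    top = photo[y0, x0] * (1 - wx) + photo[y0, x1] * wx
    bot = photo[y1, x0] * (1 - wx) + photo[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With an identity correspondence map the output reproduces the input; a learned semantic correspondence would instead move pixels to exaggerate facial features.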


2020
Author(s): Hong Liu, Haichao Cao, Enmin Song, Guangzhi Ma, Xiangyang Xu, ...

2020
Author(s): Stefan Zellmann

We propose an image-warping-based remote rendering technique for volumes that decouples the rendering and display phases. Our work builds on prior work that samples the volume on the server using ray casting and reconstructs a z-value based on a heuristic. The color and depth buffers are then sent to the client, which reuses the depth image as a stand-in for subsequent frames by warping it according to the current camera position until new data is received from the server. We augment that method by implementing the client renderer using ray tracing. Representing the pixel contributions as spheres lets us vary their footprint based on the distance to the viewer, which we find gives better results than point-based rasterization when applied to volumetric data sets.
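The client-side reuse of the depth image can be sketched as a 3D reprojection. This is a simplified NumPy illustration under assumed conventions (pinhole intrinsics `K`, 4x4 pose matrices, and the function name are all hypothetical; the sphere splatting the paper describes is omitted): each pixel of the received color/depth buffer is unprojected to a world-space point using its depth, then projected into the current camera to find where it should be drawn.

```python
import numpy as np

def reproject_depth_image(depth, color, K, cam_to_world, world_to_new_cam):
    """Forward-warp a color+depth image toward a new camera pose.

    depth: (H, W) positive z-depths from the received depth buffer.
    color: (H, W, C) color buffer.
    K:     (3, 3) pinhole intrinsics shared by both views.
    cam_to_world, world_to_new_cam: (4, 4) homogeneous transforms.
    Returns the projected 2D pixel positions (N, 2) and colors (N, C);
    a real renderer would splat these (e.g. as distance-scaled spheres).
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates (u, v, 1) for every pixel.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    # Unproject: camera-space rays scaled by per-pixel depth.
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth.reshape(-1, 1)
    # Lift to world space, then into the new camera's space.
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_new = (pts_h @ cam_to_world.T) @ world_to_new_cam.T
    # Perspective projection into the new view.
    proj = pts_new[:, :3] @ K.T
    uv = proj[:, :2] / proj[:, 2:3]
    return uv, color.reshape(-1, color.shape[-1])
```

When both poses are identical, every pixel projects back to its original position; as the client camera moves, the warped positions drift until a fresh server frame replaces the stand-in.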

