normal map
Recently Published Documents

TOTAL DOCUMENTS: 63 (five years: 5)
H-INDEX: 9 (five years: 0)

2021 ◽  
Vol 11 (19) ◽  
pp. 9065
Author(s):  
Myungjin Choi ◽  
Jee-Hyeok Park ◽  
Qimeng Zhang ◽  
Byeung-Sun Hong ◽  
Chang-Hun Kim

We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is critical to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal-map representation, thereby refining the low-quality normal map. In particular, we designed a novel loss function dedicated to refining the normal-map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six typical scenes to demonstrate multiple types of fluid simulation. Experiments showed that our generator infers clearer and more detailed features on this dataset than a basic screen-space fluid rendering method; in some cases, the results were even smoother than those of the conventional surface reconstruction method. Our method improves fluid rendering via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction: the computation time is independent of the number of simulation particles, and the spatial resolution depends only on the image resolution.
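The abstract does not spell out the dedicated normal-map loss, so as a hedged illustration only, the sketch below shows one common way to penalize per-pixel disagreement between two normal maps: a cosine (angular) loss in NumPy. The function names, shapes, and the loss itself are assumptions for illustration, not the authors' code.

```python
import numpy as np

def normalize(n, eps=1e-8):
    # Rescale each per-pixel 3-vector to unit length (eps guards
    # against division by zero for degenerate normals).
    norm = np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), eps)
    return n / norm

def cosine_normal_loss(pred, target):
    # Mean of (1 - cos(theta)) over all pixels: 0 when the normal
    # maps agree everywhere, 2 when every normal is flipped.
    dot = np.sum(normalize(pred) * normalize(target), axis=-1)
    return float(np.mean(1.0 - dot))

# A flat map pointing along +z agrees with itself and opposes its negation.
flat = np.zeros((4, 4, 3))
flat[..., 2] = 1.0
print(cosine_normal_loss(flat, flat))   # → 0.0
print(cosine_normal_loss(flat, -flat))  # → 2.0
```

Unlike a plain L2 loss on raw components, an angular loss is insensitive to the (irrelevant) magnitude of the stored vectors, which is one reason it is a common choice for normal-map supervision.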


Author(s):  
Xavier Chermain ◽  
Simon Lucas ◽  
Basile Sauvage ◽  
Jean-Michel Dischler ◽  
Carsten Dachsbacher

Real-time geometric specular anti-aliasing is required when using a low number of pixel samples and high-frequency specular lobes. Several methods have been proposed for mono-lobe bidirectional reflectance distribution functions (BRDFs), but none for multi-lobe BRDFs, e.g., a glinty BRDF. We present the first method for real-time geometric glint anti-aliasing (GGAA). It eliminates most of the inconsistent appearance and disappearance of glints on surfaces with significant curvature during animation. The technique uses the glinty BRDF of Chermain et al. [2020] and leverages hardware GPU texture filtering to filter slope distributions on the fly. We also improve this glinty BRDF by adding a slope-correlation factor. This BRDF parameter allows convergence to normal distribution functions that are not aligned with the surface's axes; above all, it makes glint rendering compatible with normal-map filtering using LEAN mapping. GGAA increases the rendering time by between 0.6% and 4.2% and requires 1/3 more memory due to MIP mapping of the tabulated slope distributions. The results are compared with references rendered using a thousand samples per pixel.
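The tabulated glint distributions themselves are not reproducible from the abstract, but the LEAN-mapping idea it builds on, storing slope moments that filter linearly so that ordinary hardware MIP averaging yields a valid slope covariance (including the x/y correlation term), can be sketched in NumPy. Names and data layout below are illustrative assumptions:

```python
import numpy as np

def lean_moments(slopes):
    # Per-texel LEAN data: first moments (sx, sy) and second moments
    # (sx^2, sy^2, sx*sy). All five quantities filter linearly, so
    # standard texture/MIP filtering averages them correctly.
    sx, sy = slopes[..., 0], slopes[..., 1]
    return np.stack([sx, sy, sx * sx, sy * sy, sx * sy], axis=-1)

def filtered_slope_covariance(moments):
    # Average the moments over a filter footprint (here: all texels),
    # then recover the covariance as E[s s^T] - E[s] E[s]^T. The
    # off-diagonal entry is the slope-correlation term that lets the
    # filtered distribution stay unaligned with the surface axes.
    m = moments.reshape(-1, 5).mean(axis=0)
    mean = m[:2]
    cxy = m[4] - m[0] * m[1]
    cov = np.array([[m[2] - m[0] ** 2, cxy],
                    [cxy, m[3] - m[1] ** 2]])
    return mean, cov

rng = np.random.default_rng(0)
slopes = rng.normal(size=(8, 8, 2))
mean, cov = filtered_slope_covariance(lean_moments(slopes))
```

The recovered covariance equals the direct (biased) covariance of the raw slopes over the same footprint, which is what makes pre-filtered MIP levels consistent with filtering the slopes themselves.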


Author(s):  
Yi HE ◽  
Haoran Xie ◽  
Chao Zhang ◽  
Xi Yang ◽  
Kazunori Miyata
Keyword(s):  

2021 ◽  
Author(s):  
Boyao Li ◽  
Weiran Li ◽  
Qing Zhu

2020 ◽  
Author(s):  
Charles Preppernau
Keyword(s):  

Author(s):  
Yakun Ju ◽  
Kin-Man Lam ◽  
Yang Chen ◽  
Lin Qi ◽  
Junyu Dong

We present an attention-weighted loss in a photometric stereo neural network to improve 3D surface recovery accuracy in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of applying a uniform penalty to all pixels, our method employs a per-pixel attention-weighted loss learned in a self-supervised manner, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to compute a pixel-wise attention-weighted loss that focuses on complex regions, applying a higher weight to the detail-preserving gradient loss to produce clear surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods.
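The loss is described only at a high level, so the NumPy sketch below is an assumed, minimal illustration of the stated idea: a per-pixel attention map in [0, 1] blends a plain reconstruction term with a detail-preserving gradient term. All names and the exact blending are hypothetical, not the authors' formulation.

```python
import numpy as np

def gradients(img):
    # Forward differences along x and y, edge-padded so the output
    # keeps the input shape (H, W, C).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:])
    return gx, gy

def attention_weighted_loss(pred, target, attention):
    # attention is a per-pixel map in [0, 1]: flat regions (low
    # attention) fall back to a plain L1 normal loss, while edges and
    # crinkles (high attention) weight the gradient loss more.
    l1 = np.abs(pred - target).mean(axis=-1)
    pgx, pgy = gradients(pred)
    tgx, tgy = gradients(target)
    grad = (np.abs(pgx - tgx) + np.abs(pgy - tgy)).mean(axis=-1)
    return float(np.mean((1.0 - attention) * l1 + attention * grad))
```

Because the weights vary per pixel, the gradient term only dominates where the attention map predicts fine structure, which matches the abstract's claim of sharper reconstructions in complex regions without penalizing smooth ones.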


Author(s):  
Wanchao Su ◽  
Dong Du ◽  
Xin Yang ◽  
Shizhe Zhou ◽  
Hongbo Fu
