Appearance Editing
Recently Published Documents

Total documents: 13 (five years: 2)
H-index: 5 (five years: 0)

2021 · Author(s): Pulkit Gera, Aakash K T, Dhawal Sirikonda, P. J. Narayanan

Author(s): Yusuke Manabe, Midori Tanaka, Takahiko Horiuchi

With the proliferation of smartphones and social networking services, opportunities for individuals to take photographs have increased dramatically. A previous study found that the perceived gloss of an object is reduced when the object is represented as a digital image rather than viewed directly. It is also known that image editing, such as lossy image compression, can reduce the glossiness of an image. The glossiness of real objects may therefore be easily altered in digital images, so a method for appropriately editing gloss in digital images is required for post-processing. In this study, we propose a gloss appearance editing method for objects of various materials in a single digital image. The proposed method consists of three steps: color space conversion, gloss detection, and gloss editing. We analyzed the relationship between the proposed method and the respective reflection models of inhomogeneous, metallic, and translucent objects. Consequently, we determined that the gloss editing performed by the proposed method is equivalent to editing the specular reflection component of an inhomogeneous object, the grazing reflection component of a metallic object, and the specular reflection component of a translucent object. We applied the proposed method to test images containing objects of various materials and confirmed its effectiveness through a subjective evaluation by visual inspection and an objective evaluation using image statistics.
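The three-step pipeline described above (color space conversion, gloss detection, gloss editing) can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes gloss detection by simple luminance thresholding of the brightest pixels, and gloss editing by scaling the highlight excess. The function names and parameters (`edit_gloss`, `gain`, `percentile`) are hypothetical.

```python
import numpy as np

def rgb_to_luminance(img):
    """Rec. 709 luma as a stand-in for the paper's color space conversion."""
    return img @ np.array([0.2126, 0.7152, 0.0722])

def edit_gloss(img, gain=1.5, percentile=90):
    """Toy three-step gloss edit on a float RGB image in [0, 1].

    gain > 1 boosts the detected gloss; gain < 1 suppresses it.
    """
    # Step 1: color space conversion (here: just extract luminance).
    luma = rgb_to_luminance(img)

    # Step 2: gloss detection -- flag the brightest pixels as a crude
    # proxy for specular highlights (hypothetical; not the paper's detector).
    thresh = np.percentile(luma, percentile)
    mask = luma > thresh

    # Step 3: gloss editing -- scale the highlight excess above the threshold.
    out = img.copy()
    excess = (luma - thresh)[..., None]
    out[mask] = np.clip(img[mask] + (gain - 1.0) * excess[mask], 0.0, 1.0)
    return out
```

With `gain` below 1 the same routine attenuates highlights, which matches the paper's motivation of restoring gloss lost in digital reproduction as well as reducing it.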


2018 · Vol 1 (1) · pp. 10502-1-10502-15 · Author(s): Shida Beigpour, Sumit Shekhar, Mohsen Mansouryar, Karol Myszkowski, Hans-Peter Seidel

Abstract The authors present a framework for image-based surface appearance editing of light-field data. Their framework improves over the state of the art without the need for a full "inverse rendering," so that full geometric data is not required and highly specular or reflective surfaces can be handled. It is robust to noisy or missing data and supports many types of camera-array setups, ranging from a dense light field to a wide-baseline stereo-image pair. They start by extracting intrinsic layers from the light-field image set while maintaining consistency between views. Each layer is then decomposed separately into frequency bands, to which a wide range of "band-sifting" operations is applied. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as translucency. Their GPU-based implementation allows interactive editing of an arbitrary light-field view, which can then be consistently propagated to the rest of the views. The authors provide an extensive evaluation of their framework on various datasets and against state-of-the-art solutions.
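The core "band-sifting" idea (decompose into frequency bands, then selectively amplify sub-band components, e.g. by sign) can be illustrated on a 1D luminance profile. This is a simplified sketch, not the authors' implementation: it uses moving-average smoothing in place of proper Gaussian filtering, works on a single signal rather than intrinsic layers of light-field views, and the names `band_sift` and `scales` are hypothetical.

```python
import numpy as np

def smooth(x, k):
    """Moving-average smoothing with edge padding (stand-in for a Gaussian)."""
    pad = np.pad(x, k, mode="edge")
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(pad, kernel, mode="valid")

def band_sift(signal, scales=(1, 4, 16), sign="positive", gain=2.0):
    """Toy band-sifting: split into detail bands, amplify one sign, recombine.

    Positive detail roughly corresponds to highlights, negative detail to
    dark texture; scaling one class changes the apparent surface finish.
    """
    # Decompose: each pass peels off the detail at one scale.
    bands, prev = [], signal
    for k in scales:
        low = smooth(prev, k)
        bands.append(prev - low)
        prev = low
    residual = prev

    # Sift and recombine: scale only the components matching `sign`.
    out = residual.copy()
    for b in bands:
        sel = b > 0 if sign == "positive" else b < 0
        out = out + np.where(sel, gain * b, b)
    return out
```

With `gain = 1.0` the bands sum back to the original signal exactly, which is a useful sanity check that the decomposition is lossless before any sifting is applied.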


Author(s): Apostolia Tsirikoglou, Joel Kronander, Per Larsson, Tanaboon Tongbuasirilai, Andrew Gardner, ...
Keyword(s):

2012 · Vol 31 (2) · pp. 1-13 · Author(s): Daniel G. Aliaga, Yu Hong Yeung, Alvin Law, Behzad Sajadi, Aditi Majumder

2011 · Vol 30 (6) · pp. 1-10 · Author(s): Kun Xu, Li-Qian Ma, Bo Ren, Rui Wang, Shi-Min Hu
Keyword(s):

2011 · Vol 26 (6) · pp. 1011-1016 · Author(s): Xiao-Hui Bie, Hao-Da Huang, Wen-Cheng Wang

2011 · Vol 30 (8) · pp. 2288-2300 · Author(s): Alvin J. Law, Daniel G. Aliaga, Behzad Sajadi, Aditi Majumder, Zygmunt Pizlo
Keyword(s):
