A comparison of computational color constancy algorithms. II. Experiments with image data

2002 ◽  
Vol 11 (9) ◽  
pp. 985-996 ◽  
Author(s):  
K. Barnard ◽  
L. Martin ◽  
A. Coath ◽  
B. Funt
2020 ◽  
Vol 10 (14) ◽  
pp. 4806 ◽  
Author(s):  
Ho-Hyoung Choi ◽  
Hyun-Soo Kang ◽  
Byoung-Ju Yun

For more than a decade, both academia and industry have focused attention on computer vision and, in particular, computational color constancy (CVCC). CVCC is a fundamental preprocessing task in a wide range of computer vision applications. While the human visual system (HVS) has the innate ability to perceive constant surface colors of objects under varying illumination spectra, color constancy remains an inherent challenge for computer vision. Accordingly, this article proposes a novel convolutional neural network (CNN) architecture based on the residual neural network, consisting of pre-activation, atrous (dilated) convolution, and batch normalization. The proposed network automatically decides what to learn from the input image data and how to pool, without supervision. On receiving input image data, the network crops each image into patches prior to training. Once the network begins learning, local semantic information is automatically extracted from the image patches and fed to its novel pooling layer. As a result of this semantic pooling, a weighted map, or mask, is generated. Simultaneously, the extracted information is estimated and combined into global information during training. The novel pooling layer enables the network to distinguish useful data from noisy data, and thus to efficiently discard noisy data during learning and evaluation. The main contribution of the proposed network is that it raises the accuracy and efficiency of CVCC by adopting the novel pooling method. The experimental results demonstrate that the proposed network outperforms its conventional counterparts in estimation accuracy.
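The weighted-map pooling described in the abstract amounts to combining per-patch illuminant estimates with learned confidence weights, so that noisy patches contribute little to the global estimate. The following is a minimal sketch of that pooling step only; the function name, the example estimates, and the weights are illustrative assumptions, not the paper's actual network or values.

```python
# Illustrative sketch of confidence-weighted pooling over image patches.
# A network of the kind described would produce, for each patch, an
# illuminant estimate (R, G, B) and a scalar weight from the learned mask;
# noisy patches would receive near-zero weights. All names and numbers
# here are hypothetical, not taken from the paper.

def pool_illuminant(patch_estimates, weights):
    """Combine per-patch illuminant estimates into one global estimate
    via a normalized weighted average (the 'semantic pooling' idea)."""
    total = sum(weights)
    if total == 0:
        raise ValueError("all patch weights are zero")
    pooled = [0.0, 0.0, 0.0]
    for est, w in zip(patch_estimates, weights):
        for c in range(3):
            pooled[c] += w * est[c]
    return [v / total for v in pooled]

# Example: three patches; the second is "noisy" and gets near-zero weight,
# so the pooled estimate stays close to the two agreeing patches.
estimates = [[0.8, 0.6, 0.4], [0.1, 0.9, 0.1], [0.8, 0.6, 0.4]]
weights = [1.0, 0.01, 1.0]
global_estimate = pool_illuminant(estimates, weights)
```

With the down-weighted outlier, `global_estimate` remains close to `[0.8, 0.6, 0.4]`, which is the intended effect of masking noisy patches.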


2020 ◽  
Vol 2020 (10) ◽  
pp. 135-1-135-6 ◽  
Author(s):  
Jaeduk Han ◽  
Soonyoung Hong ◽  
Moon Gi Kang

Without sunlight, imaging devices typically depend on artificial light sources. However, scenes captured under artificial light often violate the assumptions employed in color constancy algorithms; such violations, for example non-uniform illumination or multiple light sources, can disturb computer vision algorithms. In this paper, the complex illumination produced by multiple artificial light sources is decomposed into its individual illuminants by considering the sensor responses and the spectra of the artificial light sources, and fundamental color constancy algorithms (e.g., gray-world and gray-edge) are improved by employing the estimated illumination energy. The proposed method effectively improves on the conventional methods, as demonstrated on images captured under laboratory settings designed to measure the accuracy of the color representation.
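The gray-world baseline mentioned above assumes that the average scene reflectance is achromatic, so the per-channel mean of the image serves as the illuminant estimate; correction then rescales each channel by its estimate. The sketch below shows that textbook baseline only, not the multi-illuminant decomposition proposed in the paper; the pixel values and normalization convention are illustrative assumptions.

```python
# Minimal sketch of the gray-world color constancy algorithm: assume the
# average scene reflectance is gray, so the per-channel mean of the image
# estimates the illuminant color. Correction divides each channel by its
# estimate (a von Kries-style diagonal scaling). This is the classical
# single-illuminant baseline, not the paper's decomposition method.

def gray_world(image):
    """image: list of (R, G, B) pixel tuples with nonzero channel means.
    Returns the estimated illuminant (the channel means) and the
    corrected pixels, with gains normalized to the green channel."""
    n = len(image)
    means = [sum(p[c] for p in image) / n for c in range(3)]
    # Normalize so the green channel's gain is 1 (a common convention).
    gains = [means[1] / m for m in means]
    corrected = [tuple(p[c] * gains[c] for c in range(3)) for p in image]
    return means, corrected

# A scene under a reddish illuminant: the red channel is uniformly boosted,
# so the red mean exceeds the green and blue means.
pixels = [(0.9, 0.5, 0.5), (0.3, 0.2, 0.2), (0.6, 0.4, 0.4)]
illuminant, balanced = gray_world(pixels)
```

The gray-edge variant replaces the channel means with means of per-channel gradient magnitudes, but is otherwise structured the same way.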


2011 ◽  
Vol 20 (9) ◽  
pp. 2475-2489 ◽  
Author(s):  
A. Gijsenij ◽  
T. Gevers ◽  
J. van de Weijer
