A Proposed Grayscale Face Image Colorization System using Particle Swarm Optimization

2017, Vol 1 (1), pp. 72-89
Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

The proposed work is a novel grayscale face image colorization approach using a reference color face image. It takes a reference color image that presumably contains semantically similar color information for the query grayscale image and colorizes the grayscale face image with its help. In this novel patch-based colorization, the system searches the reference color image for a suitable patch for each patch of the grayscale image to be colorized. An exhaustive patch search in the reference color image is time-consuming, making the colorization too slow for real-time applications, so Particle Swarm Optimization (PSO) is used to reduce the patch-search time and make the process fast enough for real-time use. The proposed method was successfully applied to 150 male and female face images from the FRAV2D database. A "Colorization Turing test" was conducted in which human subjects were asked to choose the image closer to the original color image between the one colorized by the proposed algorithm and those produced by recent methods; in most cases, the images colorized by the proposed method were selected.
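The PSO-driven patch search described above can be sketched as follows. This is an illustrative reading, not the paper's exact algorithm: the fitness function (sum of squared luminance differences between patches) and all PSO hyperparameters are assumptions.

```python
import numpy as np

def pso_patch_search(gray_patch, ref_lum, n_particles=20, n_iters=30, seed=0):
    """Locate the (row, col) in ref_lum whose patch best matches gray_patch.

    Fitness is the sum of squared luminance differences, an illustrative
    assumption; the paper's exact fitness measure may differ.
    """
    rng = np.random.default_rng(seed)
    ph, pw = gray_patch.shape
    max_r, max_c = ref_lum.shape[0] - ph, ref_lum.shape[1] - pw

    def fitness(p):
        r, c = int(p[0]), int(p[1])
        return float(np.sum((ref_lum[r:r + ph, c:c + pw] - gray_patch) ** 2))

    # Particles are candidate top-left patch coordinates in the reference.
    pos = rng.uniform([0, 0], [max_r, max_c], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [0, 0], [max_r, max_c])
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return int(gbest[0]), int(gbest[1]), gbest_f
```

Only `n_particles * n_iters` patch comparisons are evaluated instead of one per reference pixel, which is the source of the claimed speed-up.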

2018, pp. 886-904
Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

The proposed work is a novel grayscale face image colorization approach using a reference color face image. It takes a reference color image that presumably contains semantically similar color information for the query grayscale image and colorizes the grayscale face image with its help. In this novel patch-based colorization, the system searches the reference color image for a suitable patch for each patch of the grayscale image to be colorized. An exhaustive patch search in the reference color image is time-consuming, making the colorization too slow for real-time applications, so Particle Swarm Optimization (PSO) is used to reduce the patch-search time and make the process fast enough for real-time use. The proposed method was successfully applied to 150 male and female face images from the FRAV2D database. A "Colorization Turing test" was conducted in which human subjects were asked to choose the image closer to the original color image between the one colorized by the proposed algorithm and those produced by recent methods; in most cases, the images colorized by the proposed method were selected.


Author(s):  
Abul Hasnat ◽  
Santanu Halder ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri

Colorization of a grayscale image is the process of converting it into a color image. Few research works on this have been reported in the literature, and there is hardly any generalized method that successfully colorizes all types of grayscale images. This study proposes a novel grayscale image colorization method using a reference color image. It takes the grayscale image and the type of the query image as input. First, it selects a reference image from a color image database using the histogram index of the query image and the histogram indices of the luminance channels of the color images of the respective type. Once the reference image is selected, four features are extracted for each pixel of its luminance channel. These extracted features as input and the chrominance-blue (Cb) value as target form the training dataset for the Cb channel; a training dataset for the chrominance-red (Cr) channel is formed similarly. These features and the associated chrominance values are used to train two artificial neural networks (ANNs): one for the Cb channel and one for the Cr channel. Then, for each pixel of the query image, the same four features are extracted and fed to the trained ANNs to predict the chrominance values of the query image. The predicted chrominance values, along with the original luminance values of the query image, are used to construct the colorized image. The experiment was conducted on images collected from standard image databases, i.e., FRAV2D and UCID.v2, as well as images captured with a standard digital camera. These images were first converted into grayscale and then colorized with the proposed method. For performance evaluation, the PSNR between the original color image and the newly colorized image was calculated; it shows that the proposed method colorizes better than recently reported methods in the literature.
Besides this, a "Colorization Turing test" was conducted in which human subjects were asked to choose the image closer to the original color image among those colorized by the proposed algorithm and by recently reported methods. In 80% of the cases, the images colorized by the proposed method were selected.
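The train-on-reference, predict-on-query pipeline can be sketched as below. The abstract does not name the four features, so the ones used here (intensity, 3x3 local mean, 3x3 local std, gradient magnitude) are illustrative stand-ins, and a tiny numpy network stands in for each per-channel ANN.

```python
import numpy as np

def pixel_features(lum):
    """Four per-pixel features: intensity, 3x3 local mean, 3x3 local std,
    and gradient magnitude. Illustrative stand-ins; the paper's actual
    feature set is not specified in the abstract."""
    h, w = lum.shape
    pad = np.pad(lum, 1, mode='edge')
    win = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                   axis=-1)
    gy, gx = np.gradient(lum)
    return np.stack([lum, win.mean(-1), win.std(-1), np.hypot(gx, gy)],
                    axis=-1).reshape(-1, 4)

class TinyMLP:
    """Single-hidden-layer regressor trained with full-batch gradient
    descent; a stand-in for each of the two per-channel ANNs."""
    def __init__(self, n_in=4, n_hidden=16, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def fit(self, X, y, epochs=1000):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            h = np.tanh(X @ self.W1 + self.b1)
            out = h @ self.W2 + self.b2
            d_out = 2 * (out - y) / len(X)             # dMSE/d(out)
            d_h = (d_out @ self.W2.T) * (1 - h ** 2)   # back through tanh
            self.W2 -= self.lr * (h.T @ d_out)
            self.b2 -= self.lr * d_out.sum(0)
            self.W1 -= self.lr * (X.T @ d_h)
            self.b1 -= self.lr * d_h.sum(0)

    def predict(self, X):
        return (np.tanh(X @ self.W1 + self.b1) @ self.W2 + self.b2).ravel()
```

One network would be fit on `(pixel_features(ref_lum), ref_cb)` and a second on the Cr targets; prediction on `pixel_features(query_lum)` then yields the query's chrominance planes.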


2018, Vol 8 (8), pp. 1269
Author(s):  
Dae Seo ◽  
Yong Kim ◽  
Yang Eo ◽  
Wan Park

Image colorization assigns colors to a grayscale image, an important yet difficult image-processing task encountered in various applications. In particular, grayscale aerial image colorization is an ill-posed problem affected by the sun elevation angle, season, sensor parameters, and so on. Furthermore, since different colors may have the same intensity, the problem is difficult to solve with traditional methods. This study proposes a novel method for colorizing grayscale aerial images using random forest (RF) regression. The algorithm takes one grayscale image as input and one color image as reference, both with similar seasonal features at the same location. The reference color image is converted from the Red-Green-Blue (RGB) color space to the CIE L*a*b* (Lab) color space, in which the luminance is used to extract training pixels (by performing change detection against the input grayscale image) and the color information is used to establish color relationships. The proposed method directly establishes color relationships between features of the input grayscale image and the color information of the reference color image based on the corresponding training pixels. The experimental results show that the proposed method outperforms several state-of-the-art algorithms in terms of both visual inspection and quantitative evaluation.
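The regression step might look like the sketch below: an RF maps grayscale-derived features to Lab chrominance (a*, b*). The two features and the synthetic "training pixels" are assumptions for illustration; the paper selects real training pixels via change detection.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for training pixels extracted from a reference image:
# luminance L plus a noisy neighborhood-mean feature (feature choice is an
# assumption), with chrominance generated by a hidden nonlinear rule.
L_ref = rng.uniform(0, 100, 2000)
mean_ref = L_ref + rng.normal(0, 2, 2000)
X_ref = np.column_stack([L_ref, mean_ref])
ab_ref = np.column_stack([10 * np.sin(L_ref / 15), 0.3 * L_ref - 20])

# Fit the color relationship: features -> (a*, b*).
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X_ref, ab_ref)

# Predict chrominance for "input" grayscale pixels; combined with the
# input luminance, this yields the colorized Lab image.
L_in = rng.uniform(0, 100, 200)
X_in = np.column_stack([L_in, L_in])
ab_pred = rf.predict(X_in)   # shape (200, 2): predicted (a*, b*) per pixel
```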


2011, Vol 11 (02), pp. 195-206
Author(s):  
YUQING WANG ◽  
MING ZHU ◽  
HAOCHEN PANG ◽  
YONG WANG

A quaternion model for describing a color image is proposed in order to evaluate its quality. The local variance distribution of the luminance layer is calculated, and color information is taken into account by using a quaternion matrix, so the description combines the luminance layer with color information. The angle between the singular-value feature vectors of the quaternion matrices corresponding to the reference image and the distorted image is used to measure the structural similarity of the two color images. The method can also assess quality when the reference and distorted images are of unequal size. Experimental results show that the proposed method is more consistent with human visual characteristics than MSE, PSNR, and MSSIM, and that resized distorted images can also be assessed rationally.
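The quaternion singular-value comparison can be sketched via the standard complex-adjoint representation of a quaternion matrix. This is an illustrative reading of the method (the local-variance component and the handling of unequal sizes are omitted).

```python
import numpy as np

def quaternion_singular_values(rgb):
    """Singular values of the pure-quaternion matrix Q = R*i + G*j + B*k,
    computed via its complex adjoint chi(Q) = [[A, B], [-conj(B), conj(A)]],
    where Q = A + B*j with A = i*R and B = G + i*B_ch. Each quaternion
    singular value appears twice in chi(Q), so we keep every other one."""
    R, G, Bc = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    A = 1j * R
    B = G + 1j * Bc
    chi = np.block([[A, B], [-np.conj(B), np.conj(A)]])
    s = np.linalg.svd(chi, compute_uv=False)
    return s[::2]   # deduplicate the doubled spectrum

def sv_angle(ref_rgb, dist_rgb):
    """Angle between the singular-value feature vectors of two images:
    0 for structurally identical images, larger for more distortion."""
    u = quaternion_singular_values(ref_rgb)
    v = quaternion_singular_values(dist_rgb)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```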


Author(s):  
Yong Du ◽  
Yangyang Xu ◽  
Taizhong Ye ◽  
Qiang Wen ◽  
Chufeng Xiao ◽  
...  

Color dimensionality reduction is generally believed to be a non-invertible process, as re-colorization results in perceptually noticeable and unrecoverable distortion. In this article, we propose to convert a color image into a grayscale image that can fully recover its original colors; more importantly, the encoded information is discriminative and sparse, which saves storage capacity. In particular, we design an invertible deep neural network for color encoding and decoding. This network learns to generate a residual image that encodes the color information, which is then combined with a base grayscale image to recover the colors. In this way, the non-differentiable compression process (e.g., JPEG) of the base grayscale image can be integrated into the network in an end-to-end manner. To further reduce the size of the residual image, we present a specific layer that enhances Sparsity Enforcing Priors (SEP), leading to negligible storage space. The proposed method allows color embedding in a sparse residual image while keeping a high PSNR of 35 dB on average. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods in terms of image quality and tolerance to compression.
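The base-plus-sparse-residual idea can be illustrated with a hand-crafted, non-learned split. The paper learns this decomposition with an invertible network; the thresholding below only mimics sparsity enforcement and is lossy up to the threshold.

```python
import numpy as np

def encode(rgb, thresh=0.02):
    """Split an RGB image (floats in [0, 1]) into a base grayscale image
    and a sparse color residual. Hand-crafted sketch of the idea; the
    paper's invertible network learns this decomposition instead."""
    gray = rgb.mean(-1)
    residual = rgb - gray[..., None]
    residual[np.abs(residual) < thresh] = 0.0   # sparsity-enforcing step
    return gray, residual

def decode(gray, residual):
    """Recover the color image from the base gray plus the residual."""
    return gray[..., None] + residual
```

In this sketch the reconstruction error is bounded by `thresh` per channel; the paper's learned residual achieves far better sparsity-fidelity trade-offs.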


Author(s):  
Zhikun Huang ◽  
Zhedong Zheng ◽  
Chenggang Yan ◽  
Hongtao Xie ◽  
Yaoqi Sun ◽  
...  

This paper focuses on the real-world automatic makeup problem: given one non-makeup target image and one reference image, automatic makeup generates a face image that maintains the original identity while adopting the makeup style of the reference image. In real-world scenarios, the face makeup task demands a system that is robust to environmental variations. The two main challenges can be summarized as follows: first, the background in real-world images is complicated, and previous methods are prone to changing the background style as well; second, the foreground faces are also easily affected; for instance, "heavy" makeup may lose the discriminative information of the original identity. To address these two challenges, we introduce a new makeup model, called Identity Preservation Makeup Net (IPM-Net), which preserves not only the background but also the critical patterns of the original identity. Specifically, we disentangle each face image into two information codes: an identity-content code and a makeup-style code. At inference time, we only need to change the makeup-style code to generate various makeup images of the target person. In the experiments, the proposed method not only achieves better realism (FID) and diversity (LPIPS) on the test set, but also works well on real-world images collected from the Internet.
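The disentangle-and-swap idea at inference time can be illustrated with linear stand-ins for the learned encoders; every dimension and map below is hypothetical, since the real model uses deep networks and a generator.

```python
import numpy as np

# Hypothetical dimensions: face vector of size 8, split into a
# 5-dim identity-content code and a 3-dim makeup-style code.
rng = np.random.default_rng(0)
D, D_ID, D_STYLE = 8, 5, 3
E_id = rng.normal(size=(D_ID, D))        # stand-in identity encoder
E_style = rng.normal(size=(D_STYLE, D))  # stand-in style encoder

def make_up(target_face, reference_face):
    """Keep the target's identity code, swap in the reference's style
    code; a real generator would decode this pair into a face image."""
    id_code = E_id @ target_face
    style_code = E_style @ reference_face
    return np.concatenate([id_code, style_code])

target = rng.normal(size=D)
ref = rng.normal(size=D)
out = make_up(target, ref)   # identity from target, style from ref
```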


2017, Vol 1 (1), pp. 16
Author(s):  
Jacky Efendi ◽  
Muhammad Ihsan Zul ◽  
Wawan Yunanto

Authentication is the process of verifying one's identity, and one of its applications is taking attendance in university lectures. Attendance taking is very important to every academic institution as a way to examine students' performance. Signature-based attendance taking can be manipulated and therefore has problems in verifying attendance validity. In this final project, real-time eigenface-based face recognition is implemented in an attendance-taking application. The input face image is captured using a webcam. The application is built in C# with the EmguCV library and developed in Visual Studio 2015. Face detection is performed with the Viola-Jones algorithm, and the eigenface method is used for facial recognition on the detected face image. A total of 8 tests were conducted under different conditions; they show that the application can recognize face images with accuracy as high as 90% and as low as 6.67%. This solution can serve as an alternative for real-time attendance taking in an environment with 170 lux light intensity, a webcam resolution of 320 × 240 pixels, and the subject standing 1 meter away without wearing spectacles. The average recognition time is 0.18125 ms.
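The eigenface pipeline (PCA on flattened face images, then nearest-neighbour matching in the projected space) can be sketched in numpy as below; the project uses C# with EmguCV, so this is a language-neutral sketch of the method itself.

```python
import numpy as np

def train_eigenfaces(faces, n_components=5):
    """faces: (n_samples, n_pixels) flattened grayscale face images.
    Returns the mean face and the top principal components (the
    'eigenfaces'), obtained from the SVD of the centered data."""
    mean = faces.mean(0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, eigenfaces):
    """Project a face into the low-dimensional eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery_codes):
    """Nearest-neighbour match in eigenface space; returns the index of
    the closest enrolled face in the gallery."""
    d = np.linalg.norm(gallery_codes - project(face, mean, eigenfaces), axis=1)
    return int(np.argmin(d))
```

Enrollment amounts to projecting each registered student's face into `gallery_codes`; at attendance time, the detected face is projected and matched the same way.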

