A Smart System for Low-Light Image Enhancement with Color Constancy and Detail Manipulation in Complex Light Environments

Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 718 ◽  
Author(s):  
Ziaur Rahman ◽  
Muhammad Aamir ◽  
Yi-Fei Pu ◽  
Farhan Ullah ◽  
Qiang Dai

Images are an important medium for representing meaningful information. It may be difficult for computer vision techniques and humans to extract valuable information from images with low illumination. Currently, the enhancement of low-quality images is a challenging task in the domains of image processing and computer graphics. Although there are many algorithms for image enhancement, the existing techniques often produce defective results in portions of the image with intense or normal illumination, and they inevitably introduce visual artifacts into the image. A model used for image enhancement must perform the following tasks: preserving details, improving contrast, correcting color, and suppressing noise. In this paper, we propose a framework based on camera response and weighted least squares strategies. First, the image exposure is adjusted using a brightness transformation to obtain the correct model for the camera response, and an illumination estimation approach is used to extract a ratio map. Then, the proposed model adjusts every pixel according to the calculated exposure map and Retinex theory. Additionally, a dehazing algorithm is used to remove haze and improve the contrast of the image. Color constancy parameters set the true color for images of low to average quality. Finally, a detail enhancement approach preserves the naturalness and extracts more details to enhance the visual quality of the image. The experimental evidence and a comparison with several recent state-of-the-art algorithms demonstrate that the designed framework is effective and can efficiently enhance low-light images.
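The exposure-adjustment step described above can be sketched with a beta-gamma brightness transform, a common parametric camera response model. This is a minimal illustration, not the paper's implementation: the fitted constants `a` and `b` below are assumed illustrative values, and real code operates on whole image arrays rather than pixel lists.

```python
import math

def btf(pixel, k, a=-0.3293, b=1.1258):
    """Beta-gamma brightness transform f(P, k) = beta * P ** gamma.
    Maps a normalized pixel value to its appearance under exposure ratio k.
    a and b are illustrative fitted constants for this camera response model."""
    gamma = k ** a
    beta = math.exp(b * (1.0 - gamma))
    return beta * pixel ** gamma

def adjust(pixels, ratio_map):
    """Adjust every normalized pixel by its estimated exposure ratio."""
    return [btf(p, k) for p, k in zip(pixels, ratio_map)]
```

With `k = 1` the transform is the identity, while `k > 1` brightens dark pixels, which is how an estimated exposure ratio map lifts under-exposed regions.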

Author(s):  
Guangtao Zhai ◽  
Wei Sun ◽  
Xiongkuo Min ◽  
Jiantao Zhou

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced in low-light enhancement are significantly different from the distortions considered in traditional, well-studied IQA databases, and the current state-of-the-art FR IQA models are not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first comprehensive low-light image enhancement quality assessment study of its kind.


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 574 ◽  
Author(s):  
Qiang Dai ◽  
Yi-Fei Pu ◽  
Ziaur Rahman ◽  
Muhammad Aamir

In this paper, a novel fractional-order fusion model (FFM) is presented for low-light image enhancement. Existing image enhancement methods do not adequately extract content from low-light areas, suppress noise, or preserve naturalness. To solve these problems, the main contributions of this paper are the use of a fractional-order mask and a fusion framework to enhance the low-light image. First, the fractional mask is utilized to extract the illumination from the input image. Second, the image exposure is adjusted to make the dark regions visible. Finally, a fusion approach is adopted to extract more hidden content from dim areas. According to the experimental results, the fractional-order differential preserves the visual appearance much better than traditional integer-order methods. The FFM works well for images with complex or normal low-light conditions. It also achieves a trade-off among contrast improvement, detail enhancement, and preservation of the natural feel of the image. Experimental results reveal that the proposed model achieves promising results and extracts more invisible content in dark areas. A qualitative and quantitative comparison with several recent and advanced state-of-the-art algorithms shows that the proposed model is robust and efficient.
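Fractional-order masks of this kind are typically built from Grünwald-Letnikov coefficients; the sketch below computes those coefficients for an arbitrary order v. The mask size, order, and normalization used in the paper are not specified in the abstract, so this only illustrates the general construction.

```python
def gl_coeffs(v, n):
    """First n Gruenwald-Letnikov coefficients for fractional order v.
    c_0 = 1 and c_k = c_{k-1} * (1 - (v + 1) / k), which equals
    (-1)^k * binomial(v, k); these weights form the 1-D fractional
    differential mask applied to neighboring pixels."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1.0 - (v + 1.0) / k))
    return c
```

For v = 1 the coefficients reduce to the familiar integer-order difference (1, -1, 0, ...), while a non-integer v such as 0.5 spreads non-zero weights over many neighbors, which is what lets a fractional mask preserve more texture than an integer-order derivative.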


2020 ◽  
Vol 37 (5) ◽  
pp. 733-743
Author(s):  
Mohammad Abid Al-Hashim ◽  
Zohair Al-Ameen

These days, digital images are one of the most profound methods used to represent information. Still, many images are captured with a low-light effect due to numerous unavoidable reasons. It may be problematic for humans and computer-related applications to properly perceive and extract valuable information from such images. Hence, the observed quality of low-light images should be ameliorated for improved analysis, understanding, and interpretation. Currently, the enhancement of low-light images is a challenging task, since various factors, including brightness, contrast, and colors, must be handled effectively to produce results with adequate quality. Therefore, a retinex-based multiphase algorithm is developed in this study. It computes the illumination image in a manner somewhat similar to the single-scale retinex algorithm, takes the logs of both the original and the illumination images, subtracts them using a modified approach, then processes the result with a gamma-corrected sigmoid function, and finally applies a normalization function to produce the final result. The proposed algorithm is tested on natural low-light images, evaluated using specialized metrics, and compared with eight different sophisticated methods. The attained experimental outcomes revealed that the proposed algorithm delivers the best performance in terms of processing speed, perceived quality, and evaluation metrics.
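The multiphase pipeline just described can be sketched per pixel. This is a toy illustration under stated assumptions: the sigmoid steepness `k` and the gamma value are hypothetical, the "modified" log subtraction is approximated by a plain subtraction with an epsilon guard, and a real implementation operates on full images.

```python
import math

def enhance_pixel(p, illum, gamma=0.75, k=6.0):
    """One-pixel sketch of the multiphase pipeline: log-domain subtraction
    of illumination from the input, a sigmoid remapping, then gamma
    correction. p and illum are normalized to [0, 1]."""
    eps = 1e-6
    r = math.log(p + eps) - math.log(illum + eps)  # log-subtraction phase
    s = 1.0 / (1.0 + math.exp(-k * r))             # sigmoid contrast stretch
    return s ** gamma                              # gamma correction phase

def normalize(vals):
    """Final phase: min-max normalization back to [0, 1]."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo + 1e-12) for v in vals]
```

The ordering matters: the sigmoid compresses the unbounded log-ratio into a displayable range, and the final normalization guarantees the output uses the full dynamic range regardless of the chosen gamma.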


Author(s):  
Dr. Anil Singh Parihar ◽  
Kavinder Singh

In this paper, we propose a new low-light image enhancement approach to overcome the above limitations. The proposed algorithm is named Nature Preserving Low-light Image Enhancement (NPLIE). NPLIE estimates an initial illumination and performs optimal refinement. The proposed algorithm computes the reflectance component through an element-wise division of the input image by the illumination. The enhanced image is obtained as the product of the adjusted illumination and the reflectance component. In this work, we estimate the initial illuminance from structure-aware smoothing of a low-light image using guided filters of variable box sizes. We compute the refined illumination by solving the proposed multi-objective
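The Retinex-style decomposition NPLIE describes can be sketched as follows. The gamma adjustment of the illumination is an assumption for illustration, since the abstract does not specify how the illumination is adjusted, and the guided-filter estimation step is omitted.

```python
def enhance(image, illumination, gamma=0.5):
    """Sketch of the NPLIE pipeline on flat pixel lists: reflectance is the
    element-wise division of the image by its illumination, and the output
    is the (hypothetically gamma-) adjusted illumination times reflectance."""
    eps = 1e-6  # avoid division by zero in dark regions
    out = []
    for i, l in zip(image, illumination):
        reflectance = i / (l + eps)
        adjusted = (l + eps) ** gamma  # assumed illumination adjustment
        out.append(min(1.0, adjusted * reflectance))
    return out
```

When the illumination estimate equals the image itself (reflectance of 1 everywhere), the sketch reduces to plain gamma correction of the illumination, which brightens dark pixels while leaving white pixels fixed.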


Author(s):  
Audrey G. Chung ◽  
Alexander Wong

Very low-light conditions are problematic for current robotic vision algorithms as captured images are subject to high levels of ISO noise. We propose a Bayesian Residual Transform (BRT) model for joint noise suppression and image enhancement for images captured under these low-light conditions via a Bayesian-based multiscale image decomposition. The BRT models a given image as the sum of residual images, and the denoised image is reconstructed using a weighted summation of these residual images. We evaluate the efficacy of the proposed BRT model using the VIP-LowLight dataset, and preliminary results show a notable visual improvement over state-of-the-art denoising methods.
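The residual decomposition and weighted reconstruction can be illustrated on a 1-D signal. The box-filter decomposition below is a toy stand-in for the Bayesian multiscale decomposition the BRT actually uses; only the sum-of-residuals structure and the weighted reconstruction are faithful to the description above.

```python
def decompose(signal, scales=(1, 2, 4)):
    """Toy multiscale decomposition: each residual is the difference between
    box-filtered versions of the signal at successive radii, plus one
    coarse layer; residuals + coarse sum back to the input exactly."""
    def box(sig, radius):
        return [sum(sig[max(0, i - radius):i + radius + 1]) /
                len(sig[max(0, i - radius):i + radius + 1])
                for i in range(len(sig))]
    smoothed = [list(signal)] + [box(signal, s) for s in scales]
    residuals = [[a - b for a, b in zip(smoothed[i], smoothed[i + 1])]
                 for i in range(len(scales))]
    return residuals, smoothed[-1]

def reconstruct(residuals, coarse, weights):
    """Weighted summation of residual images plus the coarse layer.
    Weights of 1.0 recover the input exactly; shrinking the weight of the
    finest residual suppresses the scale where ISO noise concentrates."""
    out = list(coarse)
    for r, w in zip(residuals, weights):
        out = [o + w * ri for o, ri in zip(out, r)]
    return out
```

Because the residuals telescope, unit weights give perfect reconstruction; denoising corresponds to down-weighting the fine-scale residuals rather than discarding them outright.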


Image noise refers to the specks of false colors or artifacts that diminish the visual quality of a captured image. It is now a daily experience that affordable smartphone cameras can capture high-clarity photos of a brightly illuminated scene, but using the same camera in a poorly lit environment with high ISO settings results in noisy images with irrelevant specks of color. Noise removal and contrast enhancement in images have been extensively studied by researchers over the past few decades, but most of these techniques fail to perform satisfactorily on images captured in an extremely dark environment. In recent years, computer vision researchers have started developing neural network-based algorithms to perform automated denoising of images captured in a low-light environment. Although these methods are reasonably successful in producing the desired denoised image, the transformation tends to distort the structure of the image content to a certain extent. We propose an improved algorithm for image enhancement and denoising from the camera's raw image data by employing a deep U-Net generator. The network is trained end-to-end on a large training set with suitable loss functions. To preserve image content structures at a higher resolution than existing approaches, we use an edge loss term in addition to PSNR loss and structural similarity loss during the training phase. Qualitative and quantitative results in terms of PSNR and SSIM values emphasize the effectiveness of our approach.
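The combined training objective can be sketched as a weighted sum of a pixel loss and an edge loss computed on gradient maps. This is a 1-D pure-Python illustration: the SSIM term is omitted for brevity, the weights are assumed, and a real implementation would use 2-D Sobel-style gradients in a deep learning framework.

```python
def grad(img):
    """Finite differences between neighboring values: a crude 1-D edge map."""
    return [b - a for a, b in zip(img, img[1:])]

def l1(xs, ys):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def total_loss(pred, target, w_pix=1.0, w_edge=0.1):
    """Combined objective: pixel-wise loss plus an edge loss that penalizes
    differences between the gradient maps of prediction and target, pushing
    the network to preserve content structure (SSIM term omitted)."""
    return w_pix * l1(pred, target) + w_edge * l1(grad(pred), grad(target))
```

The edge term is zero whenever prediction and target have identical local differences, so it penalizes smeared or displaced edges even when the average intensity error is small.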


Author(s):  
Xiaomei Feng ◽  
Jinjiang Li ◽  
Zhen Hua ◽  
Fan Zhang

2021 ◽  
Vol 12 ◽  
Author(s):  
Nandhini Abirami R. ◽  
Durai Raj Vincent P. M.

Image enhancement is considered one of the complex tasks in image processing. When images are captured under dim light, their quality degrades due to low visibility, degrading the performance of vision-based algorithms built for good-quality, high-visibility images. Since the emergence of deep neural networks, a number of methods have been put forward to improve images captured under low light. However, the results of existing low-light enhancement methods are not satisfactory because of the lack of effective network structures. A low-light image enhancement technique (LIMET) with a fine-tuned conditional generative adversarial network is presented in this paper. The proposed approach employs two discriminators to acquire a semantic meaning that constrains the obtained results to be realistic and natural. Finally, the proposed approach is evaluated on benchmark datasets. The experimental results highlight that the presented approach attains state-of-the-art performance when compared to existing methods. The models' performance is assessed using Visual Information Fidelity (VIF), which assesses the generated image's quality relative to the degraded input. The VIF values obtained with the proposed approach are 0.709123 for the LIME dataset, 0.849982 for the DICM dataset, and 0.619342 for the MEF dataset.

