Low-Light Image Enhancement Based on Deep Symmetric Encoder–Decoder Convolutional Networks

Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 446 ◽  
Author(s):  
Qiming Li ◽  
Haishen Wu ◽  
Lu Xu ◽  
Likai Wang ◽  
Yueqi Lv ◽  
...  

A low-light image enhancement method based on a deep symmetric encoder–decoder convolutional network (LLED-Net) is proposed in this paper. In surveillance and tactical reconnaissance, collecting visual information from a dynamic environment and processing it accurately are critical to making the right decisions and ensuring mission success. However, due to the cost and technical limitations of camera sensors, it is difficult to capture clear images or videos in low-light conditions. In this paper, a specialized encoder–decoder convolutional network is designed that exploits multi-scale feature maps and adds skip connections to avoid vanishing gradients. To preserve image texture as much as possible, the model is trained with a structural similarity (SSIM) loss on datasets covering different brightness levels, allowing it to adaptively enhance images captured in low-light environments. The results show that the proposed algorithm provides significant quantitative improvements over RED-Net and several other representative image enhancement algorithms.
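The abstract does not give implementation details, but the two ingredients it names, a symmetric encoder–decoder with skip connections and an SSIM training loss, can be sketched compactly. The PyTorch snippet below is a minimal illustration under assumed layer sizes and a simplified uniform-window SSIM; it is not the paper's actual LLED-Net.

```python
# Minimal sketch: symmetric encoder-decoder with a skip connection,
# trained with an SSIM-based loss. All sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2, win=7):
    # Simplified SSIM with a uniform window; inputs assumed in [0, 1].
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim.clamp(0, 1).mean()   # loss = 1 - mean SSIM

class EncoderDecoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, 1, 1), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, 2, 1), nn.ReLU(True))
        self.dec2 = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(True))
        self.dec1 = nn.Conv2d(ch, 3, 3, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2) + e1          # skip connection from the encoder
        return torch.sigmoid(self.dec1(d2))

# Toy training step on a random low-light / normal-light pair.
net = EncoderDecoder()
low = torch.rand(1, 3, 64, 64) * 0.2     # stand-in for a dark input
ref = torch.rand(1, 3, 64, 64)           # stand-in for the bright target
loss = ssim_loss(net(low), ref)
loss.backward()
```

The skip connection here is a simple element-wise addition of same-resolution encoder features into the decoder, which is what lets gradients reach the early layers directly.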

2021 ◽  
Vol 12 ◽  
Author(s):  
Nandhini Abirami R. ◽  
Durai Raj Vincent P. M.

Image enhancement is considered one of the more complex tasks in image processing. When images are captured under dim light, their quality degrades due to low visibility, which in turn degrades the performance of vision-based algorithms built for good-quality, high-visibility images. Since the emergence of deep neural networks, a number of methods have been put forward to improve images captured under low light, but the results of existing low-light enhancement methods remain unsatisfactory because of the lack of effective network structures. A low-light image enhancement technique (LIMET) with a fine-tuned conditional generative adversarial network is presented in this paper. The proposed approach employs two discriminators to acquire semantic meaning, forcing the generated results to be realistic and natural. Finally, the proposed approach is evaluated on benchmark datasets. The experimental results highlight that the presented approach attains state-of-the-art performance compared to existing methods. The models' performance is assessed using Visual Information Fidelity (VIF), which measures the quality of the generated image relative to the degraded input. The VIF values obtained with the proposed approach are 0.709123 for the LIME dataset, 0.849982 for the DICM dataset, and 0.619342 for the MEF dataset.
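As a rough illustration of the two-discriminator idea, the sketch below trains a generator against both a global discriminator (full image) and a local one (random patches), so the output must look natural at both scales. The global/local split, network shapes, and loss form are assumptions for illustration, not LIMET's published design.

```python
# Hedged sketch: a conditional GAN generator loss that combines a global
# and a local (patch) discriminator. Shapes and the split are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_discriminator():
    # Tiny PatchGAN-style conditional discriminator: the input is the
    # low-light image concatenated with an enhanced image (real or fake).
    return nn.Sequential(
        nn.Conv2d(6, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(64, 1, 4, 1, 1))

d_global, d_local = make_discriminator(), make_discriminator()

def generator_adversarial_loss(low, fake, patch=32):
    # Global term: judge the whole enhanced image, conditioned on the input.
    pair = torch.cat([low, fake], dim=1)
    logits_g = d_global(pair)
    loss_g = F.binary_cross_entropy_with_logits(
        logits_g, torch.ones_like(logits_g))
    # Local term: judge a random crop, encouraging realistic local texture.
    i = torch.randint(0, low.shape[2] - patch + 1, (1,)).item()
    j = torch.randint(0, low.shape[3] - patch + 1, (1,)).item()
    logits_l = d_local(pair[:, :, i:i + patch, j:j + patch])
    loss_l = F.binary_cross_entropy_with_logits(
        logits_l, torch.ones_like(logits_l))
    return loss_g + loss_l

# Toy usage with random stand-in tensors.
low = torch.rand(1, 3, 64, 64) * 0.2    # dark input
fake = torch.rand(1, 3, 64, 64)         # generator output stand-in
loss = generator_adversarial_loss(low, fake)
```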


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 495 ◽  
Author(s):  
Sophy Ai ◽  
Jangwoo Kwon

Low-light image enhancement is one of the most challenging tasks in computer vision, and it is actively researched and used to solve various problems. Image processing usually achieves significant performance under normal lighting conditions; under low-light conditions, however, an image turns out noisy and dark, which makes subsequent computer vision tasks difficult. To make buried details more visible, and to reduce blur and noise in a low-light capture, a low-light image enhancement step is necessary. Many different techniques have been investigated, but most of these approaches require considerable effort or expensive equipment: for example, the image must be captured as a raw camera file to be processed, and such methods do not perform well under extreme low-light conditions. In this paper, we propose a new convolutional network, Attention U-net (the integration of an attention gate and a U-net network), which works on common file types (.PNG, .JPEG, .JPG, etc.) with primary support from deep learning to address surveillance-camera security in smart-city environments without requiring the raw image file from the camera, and which performs under the most extreme low-light conditions.
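The attention-gate mechanism that distinguishes Attention U-net from a plain U-net can be shown in a few lines. The sketch below follows the additive attention gate of Oktay et al., where decoder features re-weight the encoder features flowing through each skip connection; channel sizes and the toy usage are illustrative assumptions, not this paper's exact configuration.

```python
# Minimal sketch of an additive attention gate on a U-net skip connection.
# Channel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, enc_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(enc_ch, inter_ch, 1)   # project skip features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)  # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, 1)        # attention coefficients

    def forward(self, x, g):
        # x: encoder (skip) features; g: decoder features at the same scale.
        a = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * a                                 # suppress irrelevant regions

# Example: gate 64-channel skip features with a 128-channel decoder signal
# that has already been upsampled to the same spatial size.
gate = AttentionGate(enc_ch=64, gate_ch=128, inter_ch=32)
skip = torch.rand(1, 64, 32, 32)
dec = torch.rand(1, 128, 32, 32)
out = gate(skip, dec)   # shape (1, 64, 32, 32)
```

The gated output replaces the raw skip tensor in the decoder's concatenation step, so poorly lit or uninformative regions contribute less to the reconstruction.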


2021 ◽  
Author(s):  
Zhuqing Jiang ◽  
Haotian Li ◽  
Liangjie Liu ◽  
Aidong Men ◽  
Haiying Wang
