Low-Light Image Enhancement Based on Multi-Path Interaction

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4986
Author(s):  
Bai Zhao ◽  
Xiaolin Gong ◽  
Jian Wang ◽  
Lingchao Zhao

Due to non-uniform illumination conditions, images captured by sensors often suffer from uneven brightness, low contrast and noise. In order to improve image quality, in this paper a multi-path interaction network is proposed to enhance the R, G, B channels separately; the three channels are then combined into a color image and further refined. In the multi-path interaction network, the feature maps in several encoding–decoding subnetworks exchange information across paths, while a high-resolution path is retained to enrich the feature representation. Meanwhile, in order to avoid the unnatural results that separating the R, G, B channels can cause, the output of the multi-path interaction network is corrected in detail to obtain the final enhancement results. Experimental results show that the proposed method can effectively improve the visual quality of low-light images, and that its performance is better than that of state-of-the-art methods.

2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in significant savings for the league. This paper proposes an original evolutionary algorithm for the TTP. We first propose a quick and effective constructive algorithm to build a Double Round Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed, which leads to a significant improvement in schedule quality. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for the TTP and the UTTP (Unconstrained Traveling Tournament Problem). Computational experiments show that the proposed approach builds very good solutions, comparable to other state-of-the-art approaches and better than the current best solutions on the UTTP. Furthermore, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.


Author(s):  
Guangtao Zhai ◽  
Wei Sun ◽  
Xiongkuo Min ◽  
Jiantao Zhou

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite the various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and the stack-based high dynamic range (HDR) image as references and to evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced by low-light enhancement differ significantly from those considered in well-studied traditional IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article presents the first comprehensive quality assessment study of low-light image enhancement.


2020 ◽  
Vol 34 (07) ◽  
pp. 11173-11180 ◽  
Author(s):  
Xin Jin ◽  
Cuiling Lan ◽  
Wenjun Zeng ◽  
Guoqiang Wei ◽  
Zhibo Chen

Person re-identification (reID) aims to match person images to retrieve the ones with the same identity. This is a challenging task, as the images to be matched are generally semantically misaligned due to the diversity of human poses and capture viewpoints, incompleteness of the visible bodies (due to occlusion), etc. In this paper, we propose a framework that drives the reID network to learn semantics-aligned feature representations through careful supervision designs. Specifically, we build a Semantics Aligning Network (SAN) which consists of a base network as encoder (SA-Enc) for reID and a decoder (SA-Dec) for reconstructing/regressing the densely semantics-aligned full texture image. We jointly train the SAN under the supervision of person re-identification and aligned texture generation. Moreover, at the decoder, besides the reconstruction loss, we add Triplet ReID constraints over the feature maps as perceptual losses. The decoder is discarded at inference, so our scheme is computationally efficient. Ablation studies demonstrate the effectiveness of our design. We achieve state-of-the-art performance on the benchmark datasets CUHK03, Market1501, MSMT17, and the partial person reID dataset Partial REID.


IJOSTHE ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 8
Author(s):  
Puspad Kumar Sharma ◽  
Nitesh Gupta ◽  
Anurag Shrivastava

Due to limited camera resolution or poor lighting conditions, captured images are often over-exposed or under-exposed, so enhancement techniques are needed to correct these artifacts in recorded pictures or images. The objective of image enhancement and adjustment techniques is therefore to improve the quality and characteristics of an image. In general, enhancement distorts the original numerical values of an image, so the technique must be designed such that it does not compromise image quality. Image enhancement extracts and emphasises the characteristics of the image rather than restoring a degraded one; it involves processing the degraded image to improve its visual aspect. A great deal of research has been done in this field, and deep learning is one prominent direction. Most existing contrast enhancement methods adjust a tone curve to correct the contrast of an input image, but they do not work efficiently because of the limited amount of information contained in a single image. In this research, a CNN with edge adjustment is proposed. By applying the CNN with edge adjustment, input low-contrast images can be adapted towards high-quality enhancement. The result analysis shows that the developed technique has significant advantages over existing methods.
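The tone-curve adjustment that most existing contrast-enhancement methods rely on amounts to passing every pixel through a lookup table. A minimal numpy sketch (a generic illustration, not the paper's CNN; the sigmoid shape and its slope of 10 are assumptions chosen for demonstration):

```python
import numpy as np

def apply_tone_curve(img, curve):
    """Apply a 256-entry tone curve (lookup table) to an 8-bit image."""
    lut = np.clip(np.rint(np.asarray(curve, dtype=np.float64)), 0, 255).astype(np.uint8)
    return lut[img]

# An S-shaped curve that stretches mid-tone contrast:
# shadows are pushed down, highlights are pushed up.
x = np.arange(256) / 255.0
s_curve = 255.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))
```

Because the same 256-entry table is applied everywhere, the curve cannot adapt to local content, which is exactly the single-image limitation the abstract points out.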


2021 ◽  
Vol 18 (4) ◽  
pp. 1221-1226
Author(s):  
Durai Pandurangan ◽  
R. Saravana Kumar ◽  
Lukas Gebremariam ◽  
L. Arulmurugan ◽  
S. Tamilselvan

Insufficient and poor lighting conditions affect the quality of videos and images captured by camcorders. Low-quality images degrade the performance of computer vision systems in smart traffic, video surveillance, and other imaging applications. In this paper, a combined gray-level transformation technique is proposed to enhance poorly illuminated images. The technique combines log transformation, power-law transformation and adaptive histogram equalization to improve the low-light illumination image estimated using the HSI color model. Finally, the enhanced illumination image is blended with the original reflectance image to obtain the enhanced color image. The paper shows that the proposed algorithm enhances various weakly illuminated images better, with reduced computation time, compared with previous image processing techniques.
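The first two stages of such a combined pipeline, log and power-law transformation, are standard point operations and can be sketched in numpy (an illustration of the textbook transforms, not the authors' exact pipeline; the scaling constant and the gamma value are assumptions, and the log transform assumes a non-black input):

```python
import numpy as np

def log_transform(img, c=None):
    """Log transformation s = c * log(1 + r): expands dark grey levels."""
    img = img.astype(np.float64)
    if c is None:
        c = 255.0 / np.log1p(img.max())  # assumes img.max() > 0
    return np.clip(np.rint(c * np.log1p(img)), 0, 255).astype(np.uint8)

def power_law(img, gamma, c=1.0):
    """Power-law (gamma) transformation s = c * r**gamma on r in [0, 1].
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    r = img.astype(np.float64) / 255.0
    return np.clip(np.rint(255.0 * c * r ** gamma), 0, 255).astype(np.uint8)
```

Both transforms brighten low grey levels disproportionately, which is why they are natural building blocks for low-light enhancement before a histogram-based contrast step.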


Author(s):  
Ojas A. Ramwala ◽  
Smeet A. Dhakecha ◽  
Chirag N. Paunwala ◽  
Mita C. Paunwala

Documents are an essential source of valuable information and knowledge, and photographs are a great way of reminiscing about old memories and past events. However, it is difficult to preserve the quality of such ancient documents and old photographs over a very long time, as these images usually get damaged or creased due to various extrinsic effects. Using image editing software such as Photoshop to manually reconstruct such old photographs and documents is a strenuous and time-consuming process. This paper leverages the generative modeling capabilities of Conditional Generative Adversarial Networks by utilizing specialized architectures for the Generator and the Discriminator. The proposed Reminiscent Net has a U-Net-based Generator with numerous feature maps for complete information transfer, incorporating location and contextual details, and the absence of dense layers allows images of diverse sizes to be used. A PatchGAN-based Discriminator that penalizes the image at the scale of patches has been implemented, and the NADAM optimizer is used to enable faster and better convergence of the loss function. The proposed method produces visually appealing de-creased images, and experiments indicate that the architecture performs better than various novel approaches, both qualitatively and quantitatively.


2020 ◽  
Vol 10 (7) ◽  
pp. 2601 ◽  
Author(s):  
Indriani P. Astono ◽  
James S. Welsh ◽  
Stephan Chalup ◽  
Peter Greer

In this paper, we develop an optimised state-of-the-art 2D U-Net model by studying the effects of the individual deep learning model components on prostate segmentation. We found that for upsampling, the combination of interpolation and convolution is better than the use of transposed convolution. Combining feature maps in each convolution block is only beneficial if a skip connection with concatenation is used. With respect to pooling, average pooling is better than strided convolution, max, RMS or L2 pooling. Introducing a batch normalisation layer before the activation layer gives a further performance improvement. The optimisation is based on a private dataset, as it has a fixed 2D resolution and voxel size for every image, which removes the need for a resizing operation during data preparation. Non-enhancing data preprocessing was applied, and five-fold cross-validation was used to evaluate the fully automatic segmentation approach. We show that it outperforms the traditional methods previously applied to the private dataset, as well as other comparable state-of-the-art 2D models on the public dataset PROMISE12.


2020 ◽  
Vol 12 (2) ◽  
pp. 80-88
Author(s):  
Claudia Kenyta ◽  
Daniel Martomanggolo Wonohadidjojo

When photos are taken in low-light conditions, the quality of the results often falls short of expectations. Image enhancement methods can be used to improve the quality of photos taken in low light. One widely used algorithm is Histogram Equalization (HE), which works on a histogram basis. The strengths of the HE algorithm for enhancing photos taken in low light are its simplicity and the fact that it does not require a high-specification device to run. One variant of the HE algorithm is Contrast Limited Adaptive Histogram Equalization (CLAHE). This paper presents an implementation of the HE algorithm in an Android-based application, evaluates its performance in enhancing photos taken in low-light conditions, and compares it with the CLAHE algorithm. The results show that the HE algorithm performs better than the CLAHE algorithm.
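Global HE, the simpler of the two algorithms compared, can be sketched in a few lines of numpy (an illustrative implementation assuming an 8-bit, non-constant grayscale input; CLAHE additionally tiles the image, clips each tile's histogram, and interpolates between tile mappings):

```python
import numpy as np

def histogram_equalization(img):
    """Global histogram equalization for an 8-bit grayscale image.
    Assumes the image contains at least two distinct grey levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]  # CDF value of the first occupied bin
    # Map each grey level through the normalised cumulative histogram.
    lut = np.rint((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

The whole operation is a single 256-entry lookup table built from one histogram pass, which is why HE is cheap enough to run on low-specification devices.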


2019 ◽  
Vol 224 ◽  
pp. 04010
Author(s):  
Viacheslav Voronin

The quality of remotely sensed satellite images depends on the electromagnetic radiation reflected from features of the Earth's surface. When different surface features reflect inconsistent and dissimilar amounts of energy, the result is a poor-contrast satellite image. Image enhancement is the image-processing task of improving quality so that the results are more suitable for display or further image analysis. In this paper, we present a detailed model for color image enhancement using the quaternion framework. We introduce a novel quaternionic frequency enhancement algorithm that combines the color channels with local and global image processing. The basic idea is to apply the α-rooting image enhancement approach to different image blocks. For this purpose, we split the image into disjoint blocks with a moving window. The parameter α for every block, and the weights for every locally and globally enhanced image, are driven by optimization of a measure of enhancement (EMEC). Experimental results illustrate the performance of the proposed approach on color satellite images in comparison with state-of-the-art methods.
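The α-rooting operation at the heart of the algorithm can be sketched for a single real-valued block (a numpy illustration of classical α-rooting; the quaternion framework, block weighting and EMEC-driven choice of α described in the abstract are omitted, and the default α is an assumption):

```python
import numpy as np

def alpha_rooting(block, alpha=0.9):
    """Alpha-rooting: scale each 2-D Fourier coefficient by |F|**(alpha - 1).
    With alpha < 1, large (typically low-frequency) coefficients are
    attenuated relative to small ones, sharpening edges and fine detail."""
    F = np.fft.fft2(block.astype(np.float64))
    mag = np.abs(F)
    safe = np.where(mag > 0, mag, 1.0)   # leave zero coefficients at zero
    out = np.fft.ifft2(F * safe ** (alpha - 1.0))
    return np.real(out)                  # imaginary part is numerical noise
```

Setting α = 1 leaves the spectrum, and hence the block, unchanged, which is a convenient sanity check when tuning α per block.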


Author(s):  
Yiheng Liu ◽  
Zhenxun Yuan ◽  
Wengang Zhou ◽  
Houqiang Li

Video-based person re-identification is the crucial task of matching video sequences of a person across multiple camera views. Generally, features extracted directly from a single frame suffer from occlusion, blur, illumination and posture changes. This leads to false activations or missing activations in some regions, which corrupt the appearance and motion representation. How to exploit the abundant spatial-temporal information in video sequences is the key to solving this problem. To this end, we propose a Refining Recurrent Unit (RRU) that recovers the missing parts and suppresses the noisy parts of the current frame's features by referring to historical frames. With the RRU, the quality of each frame's appearance representation is improved. We then use a Spatial-Temporal clues Integration Module (STIM) to mine the spatial-temporal information from those upgraded features. Meanwhile, a multilevel training objective is used to enhance the capability of the RRU and STIM. Through the cooperation of these modules, the spatial and temporal features mutually promote each other, and the final spatial-temporal feature representation is more discriminative and robust. Extensive experiments are conducted on three challenging datasets, i.e., iLIDS-VID, PRID-2011 and MARS. The experimental results demonstrate that our approach outperforms existing state-of-the-art methods for video-based person re-identification on iLIDS-VID and MARS and achieves favorable results on PRID-2011.

