Real Time Visibility Enhancement for Single Image Haze Removal

2015 ◽  
Vol 54 ◽  
pp. 501-507 ◽  
Author(s):  
Apurva Kumari ◽  
S.K. Sahoo

2014 ◽  
Vol 11 (24) ◽  
pp. 20141002-20141002 ◽  
Author(s):  
Zhengfa Liang ◽  
Hengzhu Liu ◽  
Botao Zhang ◽  
Benzhang Wang

2019 ◽  
Vol 9 (17) ◽  
pp. 3443 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper presents a fast and compact hardware implementation of an efficient haze removal algorithm. The algorithm employs a modified hybrid median filter to estimate the hazy particle map, which is then subtracted from the hazy image to recover the haze-free image. Adaptive tone remapping is also applied to widen the narrow dynamic range that results from haze removal. The computation error of the proposed hardware architecture relative to the floating-point algorithm is minimized. To ensure real-time operation, the architecture implements the modified hybrid median filter with Batcher's well-known parallel sorting network. Hardware verification confirmed that high-resolution video standards can be processed for haze removal in real time.
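The hybrid median filter at the core of this design needs the median of each filter window, which hardware implementations typically obtain with a sorting network. A minimal Python sketch of Batcher's odd-even merge sort (valid for power-of-two input lengths; function names are illustrative, not taken from the paper's implementation):

```python
def compare_and_swap(a, i, j):
    """One compare-exchange element of the sorting network."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    """Merge the two sorted halves of a[lo:lo+n], comparing at stride r."""
    step = r * 2
    if step < n:
        oddeven_merge(a, lo, n, step)        # even-indexed subsequence
        oddeven_merge(a, lo + r, n, step)    # odd-indexed subsequence
        for i in range(lo + r, lo + n - r, step):
            compare_and_swap(a, i, i + r)
    else:
        compare_and_swap(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, n=None):
    """Batcher's odd-even merge sort; n must be a power of two."""
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        oddeven_merge_sort(a, lo, m)
        oddeven_merge_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)

# Example: median of a 3x3 window, padded up to a power-of-two length
window = [7, 3, 9, 1, 5, 8, 2, 6, 4]
padded = window + [float("inf")] * 7   # pad 9 -> 16 elements
oddeven_merge_sort(padded)
median = padded[4]                     # middle of the 9 real samples
```

Because the compare-exchange pattern is fixed and data-independent, the network unrolls directly into parallel comparator stages in hardware, which is what makes it attractive for real-time pipelines.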


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5170 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Quoc-Hieu Nguyen ◽  
Tri Minh Ngo ◽  
Gi-Dong Lee ◽  
...  

Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches already exist, an efficient method with a fast implementation is still in great demand. This paper proposes a single-image haze removal algorithm with a corresponding hardware implementation to facilitate real-time processing. In contrast to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. It therefore has low computational complexity while achieving good performance compared with other state-of-the-art methods. Moreover, the low computational cost also yields a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified on a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
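The multiple-exposure fusion idea can be sketched in a few lines of NumPy: derive pseudo-exposures of the hazy input via gamma correction, then blend them with per-pixel well-exposedness weights. The gamma values and the Gaussian weight width below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def fuse_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Fuse gamma-corrected pseudo-exposures of img (values in [0, 1])."""
    # Generate under-/over-exposed versions via gamma correction
    exposures = [np.power(img, g) for g in gammas]
    # Weight each exposure by how close its pixels are to mid-gray (0.5)
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2)) for e in exposures]
    wsum = np.sum(weights, axis=0) + 1e-8   # avoid division by zero
    fused = np.sum([w * e for w, e in zip(weights, exposures)], axis=0) / wsum
    return np.clip(fused, 0.0, 1.0)
```

Per-pixel weighting like this needs only a handful of multiplies and exponentials per pixel, which is consistent with the abstract's emphasis on low computational cost.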


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). The practical application of these DL-based models remains a problem, however, because of their heavy computation and storage requirements. The powerful feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively lower model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
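The general pattern of expanding a few primary feature maps into many via cheap operations can be sketched with NumPy. The shift-and-scale operations below are hypothetical placeholders standing in for the EFGB's learned plain operations:

```python
import numpy as np

def generate_features(primary, n_extra=2):
    """Expand C primary maps into C*(1+n_extra) maps via cheap ops.

    primary: array of shape (C, H, W). Each extra set of maps is a
    shifted, rescaled copy of the primary maps (illustrative only).
    """
    extras = []
    for k in range(1, n_extra + 1):
        # cheap operation: spatial shift plus per-map scaling
        extras.append(np.roll(primary, shift=k, axis=2) * (1.0 / (k + 1)))
    return np.concatenate([primary] + extras, axis=0)
```

The payoff is that the expensive convolutions only have to produce the primary maps; the extra maps cost almost nothing in parameters and FLOPs, which is how such blocks shrink model complexity.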


Atmosphere ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 577
Author(s):  
Gabriele Graffieti ◽  
Davide Maltoni

In this paper, we present a novel defogging technique, named CurL-Defog, with the aim of minimizing the insertion of artifacts while maintaining good contrast restoration and visibility enhancement. Many learning-based defogging approaches rely on paired data, where fog is artificially added to clear images; this usually provides good results on mildly fogged images but is not effective for difficult cases. On the other hand, the models trained with real data can produce visually impressive results, but unwanted artifacts are often present. We propose a curriculum learning strategy and an enhanced CycleGAN model to reduce the number of produced artifacts, where both synthetic and real data are used in the training procedure. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the number of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. HArD is then combined with other defogging indicators to produce a solid metric that is not deceived by the presence of artifacts. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
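A curriculum that starts with mostly synthetic paired data and gradually shifts toward real foggy images can be expressed as a simple mixing schedule. The linear annealing and its endpoint values below are illustrative assumptions, not CurL-Defog's actual curriculum:

```python
def curriculum_mix(epoch, total_epochs, start_synth=1.0, end_synth=0.3):
    """Fraction of synthetic (paired) samples per batch.

    Linearly annealed from start_synth to end_synth so that real,
    harder foggy images dominate the later stages of training.
    """
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start_synth + t * (end_synth - start_synth)
```

At each epoch the trainer would sample that fraction of its batch from the synthetic paired set and the remainder from unpaired real fog, easing the model from the easy, supervised regime into the hard, real one.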


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 15
Author(s):  
Filippo Aleotti ◽  
Giulio Zaccaroni ◽  
Luca Bartolomei ◽  
Matteo Poggi ◽  
Fabio Tosi ◽  
...  

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) the low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance, often not compatible with low-power embedded systems. Therefore, in this paper, we deeply investigate all these issues, showing how they are both addressable by adopting appropriate network design and training strategies. Moreover, we also outline how to map the resulting networks on handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature required to tackle the extremely varied contexts faced in real applications. Indeed, to further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.
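The depth-aware blurring application mentioned above can be sketched for a grayscale image: keep pixels whose predicted depth is near the focus plane, and blur the rest. The 3x3 box blur with wrap-around borders and the focus tolerance are illustrative simplifications, not the paper's method:

```python
import numpy as np

def depth_aware_blur(image, depth, focus_depth, tolerance=0.1):
    """Blur pixels whose estimated depth lies outside the focus range."""
    # Toy 3x3 box blur built from shifted copies (wrap-around borders)
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    blurred = np.mean([np.roll(image, s, axis=(0, 1)) for s in shifts], axis=0)
    out_of_focus = np.abs(depth - focus_depth) > tolerance
    return np.where(out_of_focus, blurred, image)
```

On a phone, the single-image depth network supplies `depth`, and a separable or GPU blur would replace the toy box filter; the per-pixel masking logic stays the same.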


Author(s):  
Xia Lan ◽  
Liangpei Zhang ◽  
Huanfeng Shen ◽  
Qiangqiang Yuan ◽  
Huifang Li
Keyword(s):  

2020 ◽  
Vol 1706 ◽  
pp. 012091
Author(s):  
Mohammed Shoaib ◽  
Mohd Mohsin ◽  
Imbeshat Khalid Ansari ◽  
Harshat Maddhesiya ◽  
Upendra Kumar Acharya
Keyword(s):  

2017 ◽  
Vol 77 (11) ◽  
pp. 13513-13530 ◽  
Author(s):  
Bo Jiang ◽  
Hongqi Meng ◽  
Jian Zhao ◽  
Xiaolei Ma ◽  
Siyu Jiang ◽  
...  
