A 4K-Capable FPGA Implementation of Single Image Haze Removal Using Hazy Particle Maps

2019 ◽  
Vol 9 (17) ◽  
pp. 3443 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper presents a fast and compact hardware implementation of an efficient haze removal algorithm. The algorithm employs a modified hybrid median filter to estimate the hazy particle map, which is subsequently subtracted from the hazy image to recover the haze-free image. Adaptive tone remapping is also used to widen the narrow dynamic range that results from haze removal. The computation error of the proposed hardware architecture is minimized relative to the floating-point algorithm. To ensure real-time operation, the proposed architecture implements the modified hybrid median filter with the well-known Batcher parallel sorting network. Hardware verification confirmed that high-resolution video standards were processed in real time for haze removal.
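The hybrid-median estimation and particle-map subtraction described above can be sketched in software. This is only an illustrative model, assuming NumPy: the paper's actual design is fixed-point hardware built around Batcher's sorting network, and the function names, the 3×3 window, and the `strength` parameter below are assumptions, not the authors' interface.

```python
import numpy as np

def hybrid_median(window):
    """Hybrid median of a 3x3 window: the median of three values --
    the median of the plus-shaped (cross) neighbors, the median of
    the X-shaped (diagonal) neighbors, and the center pixel."""
    w = np.asarray(window, dtype=float)
    cross = np.array([w[0, 1], w[1, 0], w[1, 1], w[1, 2], w[2, 1]])
    diag = np.array([w[0, 0], w[0, 2], w[1, 1], w[2, 0], w[2, 2]])
    return float(np.median([np.median(cross), np.median(diag), w[1, 1]]))

def hazy_particle_map(gray, strength=0.95):
    """Estimate a per-pixel haze (particle) map by hybrid-median
    filtering; subtracting it from the input approximates dehazing."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode='edge')
    hp = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            hp[i, j] = hybrid_median(padded[i:i + 3, j:j + 3])
    return strength * hp
```

The hybrid median preserves thin edges better than a plain median because the cross and diagonal neighborhoods are ranked separately before the final vote.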

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5170 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Quoc-Hieu Nguyen ◽  
Tri Minh Ngo ◽  
Gi-Dong Lee ◽  
...  

Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches exist, an efficient method coupled with a fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation for facilitating real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. Therefore, it possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also yields a compact hardware implementation capable of handling high-quality videos at an acceptable rate (greater than 25 frames per second), as verified on a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
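The multiple-exposure fusion step mentioned above can be sketched as classic well-exposedness-weighted fusion of artificially exposed copies of one image. This is a minimal NumPy sketch under stated assumptions: the gamma values, the Gaussian weighting around mid-gray, and the function name are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fuse_exposures(gray, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Fuse several artificially exposed versions of one image,
    weighting each pixel by its well-exposedness (a Gaussian of
    its distance from mid-gray 0.5), as in classic exposure fusion."""
    g = np.clip(np.asarray(gray, dtype=float), 0.0, 1.0)
    exposures = [g ** gamma for gamma in gammas]  # under/normal/over
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * sigma ** 2))
               for e in exposures]
    wsum = np.sum(weights, axis=0) + 1e-12  # avoid divide-by-zero
    return np.sum([w * e for w, e in zip(weights, exposures)],
                  axis=0) / wsum
```

Because every exposure is derived from the single hazy input, the fusion avoids inverting the atmospheric scattering model, which is what keeps the computational cost low.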


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Spencer Fowers ◽  
Alok Desai ◽  
Dah-Jye Lee ◽  
Dan Ventura ◽  
James Archibald

This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
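The quantize-then-descend procedure above can be sketched as a small tree walk. This is a hedged illustration, assuming NumPy: the `VocabNode` class, the mean-threshold quantization, and the function names are invented for the sketch; the actual BASIS vocabulary tree is trained offline from basis dictionary images.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.count_nonzero(a != b))

class VocabNode:
    """One vocabulary-tree node: one binary basis image per child."""
    def __init__(self, basis_images=None, children=None):
        self.basis_images = basis_images
        self.children = children or []

def treebasis_descriptor(region, root):
    """Binary-quantize a feature region, then walk the tree: at each
    node take the branch whose basis image has the smallest Hamming
    distance. The sequence of branch indices is the descriptor."""
    q = (np.asarray(region) > np.mean(region)).astype(np.uint8)
    path, node = [], root
    while node.children:
        dists = [hamming(q.ravel(), b.ravel())
                 for b in node.basis_images]
        branch = int(np.argmin(dists))
        path.append(branch)
        node = node.children[branch]
    return path
```

Note that both quantization and traversal use only comparisons and popcounts, which is why the descriptor needs no floating-point hardware.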


Author(s):  
Sunita Shukla ◽  
Silky Pareyani

Conventional designs use either multiple images or a single image for haze removal. The presented paper uses a median filter with modified coefficients (a 16-adjacent-pixel median) to estimate the transmission map and remove haze from a single input image. The median filter prior (coefficient) is developed from the observation that the outdoor visibility of images taken under hazy weather conditions is seriously reduced as distance increases. The thickness of the haze can be estimated effectively, and a haze-free image recovered, by adopting the median filter prior and the new haze imaging model. Our method is stable for local image regions containing objects at different depths. Our experiments showed that the proposed method achieved better results than several state-of-the-art methods, and it can be implemented very quickly. Owing to its speed and good visual quality, our method is suitable for real-time applications. This work confirms that estimating the transmission map from distance information instead of color information is a crucial point in image enhancement, and especially in single image haze removal.


2014 ◽  
Vol 11 (24) ◽  
pp. 20141002 ◽  
Author(s):  
Zhengfa Liang ◽  
Hengzhu Liu ◽  
Botao Zhang ◽  
Benzhang Wang

2019 ◽  
Vol 9 (19) ◽  
pp. 4011 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper proposes a single image haze removal algorithm that shows a marked improvement over the color attenuation prior-based method. Through extensive experiments on a wide variety of images, it is discovered that the color attenuation prior suffers from problems such as color distortion and background noise, which arise because the prior does not hold in all circumstances. The proposed algorithm resolves these problems and shows superior performance to other state-of-the-art methods in terms of both subjective visual quality and quantitative metrics, on both synthetic and natural hazy image datasets. The proposed algorithm is also computationally efficient, owing to an efficient quad-decomposition algorithm for atmospheric light estimation and a simple modified hybrid median filter for depth map refinement.
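The quad-decomposition idea for atmospheric light estimation can be sketched as a quadtree search that repeatedly keeps the brightest quadrant. This NumPy sketch is a common formulation of the technique and an assumption about the paper's variant; the stopping size and the use of the block mean as the estimate are illustrative choices.

```python
import numpy as np

def estimate_atmospheric_light(gray, min_size=2):
    """Quad-decomposition: recursively split the image into four
    quadrants and keep the one with the highest mean intensity;
    the mean of the final small block approximates the
    atmospheric light A."""
    block = np.asarray(gray, dtype=float)
    while min(block.shape) > min_size:
        h2, w2 = block.shape[0] // 2, block.shape[1] // 2
        quads = [block[:h2, :w2], block[:h2, w2:],
                 block[h2:, :w2], block[h2:, w2:]]
        block = max(quads, key=lambda q: float(q.mean()))
    return float(block.mean())
```

Because each step discards three quadrants, the search touches only O(log) levels of the image, which is what makes this estimator cheap compared with a global sort of pixel intensities.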


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). The practical application of these DL-based models remains a problem due to their heavy computation and storage requirements. The powerful feature maps of hidden layers in convolutional neural networks (CNN) help the model learn useful information. However, there exists redundancy among feature maps, which can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR by constructing the efficient feature generating block (EFGB). Specifically, the EFGB conducts plain operations on the original features to produce more feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low resolution (LR) images to reconstruct the desired high resolution (HR) images. Experiments conducted on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while possessing relatively lower model complexity. Additionally, the running time measurement indicates the feasibility of real-time monitoring.
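The "plain operations on the original features" idea can be sketched in the style of cheap feature generation: keep the primary maps and derive extra ones with an inexpensive per-map operation. This NumPy sketch is a hedged illustration, not the EFGB itself; the 3×3 box filter used as the cheap operation, the `ratio` parameter, and the function name are all assumptions.

```python
import numpy as np

def efficient_feature_generate(primary, ratio=2):
    """Cheap feature generation: keep the primary feature maps
    (shape (C, H, W)) and derive (ratio-1)*C extra maps with an
    inexpensive per-map operation (here a 3x3 box blur), growing
    the channel count with almost no extra parameters."""
    primary = np.asarray(primary, dtype=float)
    c, h, w = primary.shape
    cheap = []
    for k in range(c * (ratio - 1)):
        src = primary[k % c]
        padded = np.pad(src, 1, mode='edge')
        # 3x3 box filter: average of the nine shifted windows
        blurred = sum(padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)) / 9.0
        cheap.append(blurred)
    return np.concatenate([primary, np.stack(cheap)], axis=0)
```

The design point is that the expensive convolution runs only on the primary maps, while the redundant-looking extra maps come almost for free, which is where the parameter savings originate.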

