Improved Color Attenuation Prior for Single-Image Haze Removal

2019 ◽  
Vol 9 (19) ◽  
pp. 4011 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper proposes a single-image haze removal algorithm that markedly improves on the color attenuation prior-based method. Through a large number of experiments on a wide variety of images, it is found that the color attenuation prior suffers from problems such as color distortion and background noise, which arise because the prior does not hold in all circumstances. By resolving these problems, the proposed algorithm achieves performance superior to other state-of-the-art methods in terms of both subjective visual quality and quantitative metrics, on both synthetic and natural hazy image datasets. The proposed algorithm is also computationally efficient, owing to the use of an efficient quad-decomposition algorithm for atmospheric light estimation and a simple modified hybrid median filter for depth map refinement.
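The quad-decomposition step can be pictured as repeatedly keeping the brightest quadrant of the image until the remaining region is small, then taking its mean as the atmospheric light. A minimal sketch, assuming a grayscale input and an illustrative recursion threshold (not the authors' exact design):

```python
import numpy as np

def atmospheric_light_quad(gray, min_size=32):
    """Recursively keep the brightest quadrant (by mean intensity) until
    the region is smaller than min_size, then return its mean intensity
    as the atmospheric light estimate."""
    h, w = gray.shape
    if h <= min_size or w <= min_size:
        return gray.mean()
    half_h, half_w = h // 2, w // 2
    quads = [gray[:half_h, :half_w], gray[:half_h, half_w:],
             gray[half_h:, :half_w], gray[half_h:, half_w:]]
    brightest = max(quads, key=lambda q: q.mean())
    return atmospheric_light_quad(brightest, min_size)
```

The recursion visits only one quadrant per level, so the cost is logarithmic in image size, which is what makes the scheme attractive compared with a global sort of pixel intensities.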

2018 ◽  
Vol 7 (02) ◽  
pp. 23578-23584
Author(s):  
Miss. Anjana Navale ◽  
Prof. Namdev Sawant ◽  
Prof. Umaji Bagal

Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we have used a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
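The linear depth model and the atmospheric scattering recovery described above can be sketched as follows. The coefficients are the values learned in the original color attenuation prior work; the scattering parameter `beta`, the transmission floor, and the function names are illustrative assumptions:

```python
import numpy as np

def estimate_depth(hazy, theta=(0.121779, 0.959710, -0.780245)):
    """Color attenuation prior: depth is modeled linearly as
    d(x) = theta0 + theta1 * v(x) + theta2 * s(x),
    where v is brightness (HSV value) and s is saturation."""
    v = hazy.max(axis=-1)                                   # brightness
    s = np.where(v > 0,
                 (v - hazy.min(axis=-1)) / np.maximum(v, 1e-6),
                 0.0)                                       # saturation
    t0, t1, t2 = theta
    return t0 + t1 * v + t2 * s

def dehaze(hazy, A, beta=1.0, t_min=0.1):
    """Recover scene radiance via the atmospheric scattering model:
    t = exp(-beta * d), J = (I - A) / t + A."""
    d = estimate_depth(hazy)
    t = np.clip(np.exp(-beta * d), t_min, 1.0)[..., None]
    return np.clip((hazy - A) / t + A, 0.0, 1.0)
```

Clipping the transmission away from zero is the standard guard against amplifying noise in dense-haze regions.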


2020 ◽  
Vol 8 (2) ◽  
pp. 26-31
Author(s):  
Ajeeta Singh Bhadoria ◽  
Vandana Vikas Thakre

Computer applications generally use digital images, which play a vital role in the analysis and interpretation of data in digital form. Images and videos of outdoor scenes are often degraded by bad weather such as haze, fog, and mist, which reduces scene visibility and image quality. This paper presents a study of various image defogging techniques for removing haze from real-world foggy images and recovering fog-free images quickly and with improved quality. The weighted median (WM) filter, a generalization of the standard median filter in which a nonnegative integer weight is assigned to each position in the filter window, is considered first. Gaussian and Laplacian pyramids apply Gaussian and Laplacian filters to an image in cascade with different kernel sizes. The dark channel prior is a statistic of haze-free outdoor images, based on a key observation: most local patches in haze-free outdoor images contain some pixels with very low intensity in at least one color channel. Using this prior with the haze imaging model, the thickness of the haze can be estimated directly and a high-quality haze-free image recovered. Results on a variety of outdoor hazy images demonstrate the power of the prior. Moreover, a high-quality depth map is obtained as a by-product of haze removal, and the PSNR and MSE of three sample images are calculated.
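The dark channel statistic described above can be sketched directly: a per-pixel minimum over the color channels, followed by a minimum over a local patch. The patch size and the `omega` haze-retention factor below follow common choices in the dark channel prior literature and are illustrative, not this paper's exact settings:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: min over color channels, then min over a local patch.
    For haze-free outdoor images this is close to zero almost everywhere."""
    min_rgb = img.min(axis=-1)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission_from_dark_channel(img, A, omega=0.95, patch=15):
    """Haze imaging model: t(x) = 1 - omega * dark_channel(I / A).
    A thicker haze gives a brighter dark channel, hence lower transmission."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

In practice the double loop is replaced by a fast morphological erosion; the loop form is kept here only for readability.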


Author(s):  
Sunita Shukla ◽  
Silky Pareyani

Conventional designs use multiple images or a single image for haze removal. This paper uses a median filter with modified coefficients (a 16-adjacent-pixel median) to estimate the transmission map and remove haze from a single input image. The median filter prior is based on the observation that the outdoor visibility of images taken in hazy weather decreases seriously as distance increases. The thickness of the haze can be estimated effectively, and a haze-free image recovered, by adopting the median filter prior together with the new haze imaging model. The method is stable for local image regions containing objects at different depths. Experiments show that the proposed method achieves better results than several state-of-the-art methods and can be implemented very quickly; its speed and good visual quality make it suitable for real-time applications. This work confirms that estimating the transmission map from distance information instead of color information is a crucial point in image enhancement, and especially in single-image haze removal.
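A median-filter-based transmission estimate of this general shape can be sketched by median-filtering the per-pixel minimum channel to obtain an atmospheric veil. The square window radius and `omega` scaling below are illustrative assumptions, not the paper's exact 16-adjacent-pixel coefficient scheme:

```python
import numpy as np

def median_haze_veil(img, radius=2):
    """Estimate the atmospheric veil by median-filtering the per-pixel
    minimum channel; the veil grows with haze thickness (scene distance)."""
    min_ch = img.min(axis=-1)
    pad = np.pad(min_ch, radius, mode='edge')
    h, w = min_ch.shape
    veil = np.empty_like(min_ch)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            veil[i, j] = np.median(win)
    return veil

def transmission(img, A, omega=0.95):
    """t(x) = 1 - omega * veil(x) / A, clipped to a usable range."""
    return np.clip(1.0 - omega * median_haze_veil(img) / A, 0.1, 1.0)
```

The median, unlike a patch minimum, does not dilate dark objects, which is one reason median-based veils behave well across depth discontinuities.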


2019 ◽  
Vol 9 (17) ◽  
pp. 3443 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper presents a fast and compact hardware implementation of an efficient haze removal algorithm. The algorithm employs a modified hybrid median filter to estimate the hazy particle map, which is then subtracted from the hazy image to recover the haze-free image. Adaptive tone remapping is also used to widen the narrow dynamic range that results from haze removal. The computation error of the proposed hardware architecture is minimized relative to the floating-point algorithm. To ensure real-time operation, the architecture implements the modified hybrid median filter using the well-known Batcher's parallel sort. Hardware verification confirmed that high-resolution video standards were processed in real time for haze removal.
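Batcher's odd-even merge sort is attractive in hardware because every compare-exchange within a stage is data-independent and can run in parallel as wired comparators; a median filter then simply taps the middle element of the sorted window. A software sketch of the network for power-of-two input lengths (the hardware design would instantiate these stages as comparator wiring; this is not the paper's RTL):

```python
def compare_swap(a, i, j):
    """One compare-exchange: the primitive comparator of a sorting network."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def batcher_oddeven_sort(a):
    """Batcher's odd-even merge sort (iterative form, n a power of two).
    All compare_swap calls inside one (p, k) stage are independent,
    so in hardware they execute in a single parallel step."""
    n = len(a)
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        compare_swap(a, i + j, i + j + k)
            k //= 2
        p *= 2
    return a
```

The network has O(log^2 n) stages, which is what bounds the filter's pipeline latency in a real-time implementation.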


Author(s):  
Priyanka . ◽  
Geetanjali Babbar

Image de-fogging addresses images captured in degraded conditions such as fog, rain, and marine haze, or in the presence of pollutants and dust particles. Various methods have been adapted to remove fog and other pollutants from images; among the most widely used are the dark channel prior (DCP) and the detection and classification of foggy images. Haze is a combination of two components, air-light and direct attenuation (DA); it lowers image quality and causes problems in video surveillance (VS), navigation, target tracking, and so on. To remove it from an image, several de-fogging approaches are discussed in this paper. Image de-fogging can be achieved using multiple-image or single-image haze removal techniques. The well-known methods discussed here are DCP, depth maps for accurate estimation, the guided filter, and transmission-based methods. Although these techniques are effective at removing haze, they have very high time complexity. The guided filter is an edge-preserving filter that performs both region enhancement and smoothing; its output is a local linear transformation of the guide image. This paper also reviews classification and detection techniques for hazy images, which mitigate the limitations of filtering and DCP while preserving image quality, and describes existing image de-fogging methods including image restoration, contrast improvement, and fusion-based de-fogging.
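The "local linear transformation" mentioned above is the core of the guided filter: each output window is modeled as q = a·I + b, with a and b fit per window from local statistics. A minimal, unoptimized sketch (a production implementation replaces the naive `box_mean` with integral-image box filters for O(1) cost per pixel):

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded (naive version)."""
    pad = np.pad(x, r, mode='edge')
    h, w = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter: output is a local linear transform of the guide I,
    q = a * I + b, with a, b solved per window in closed form."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Because a is large only where the guide has strong local variance, edges in the guide survive while flat regions are smoothed, which is exactly the edge-preserving behavior the survey highlights.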


2019 ◽  
Vol 11 (24) ◽  
pp. 2921 ◽  
Author(s):  
Jingyu Li ◽  
Ying Li ◽  
Yayuan Xiao ◽  
Yunpeng Bai

In order to remove speckle noise from original synthetic aperture radar (SAR) images effectively and efficiently, this paper proposes a hybrid dilated residual attention network (HDRANet) with residual learning for SAR despeckling. Firstly, HDRANet employs the hybrid dilated convolution (HDC) in lightweight network architecture to enlarge the receptive field and aggregate global information. Then, a simple yet effective attention module, convolutional block attention module (CBAM), is integrated into the proposed model to constitute a residual HDC attention block through skip connection, which further enhances representation power and performance of the model. Extensive experimental results on the synthetic and real SAR images demonstrate the superior performance of HDRANet over the state-of-the-art methods in terms of quantitative metrics and visual quality.
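Dilated convolution, the building block of HDC, spaces kernel taps `dilation` samples apart so the receptive field grows without adding parameters; stacking hybrid rates such as 1, 2, 5 avoids the gridding artifact. A 1D sketch for intuition (HDRANet's actual layers are 2D learned convolutions, and this function is illustrative):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1D correlation with a dilated kernel: taps are spaced
    `dilation` samples apart, so a k-tap kernel covers an effective
    receptive field of (k - 1) * dilation + 1 samples."""
    k = len(w)
    span = (k - 1) * dilation + 1
    return np.array([np.dot(x[i:i + span:dilation], w)
                     for i in range(len(x) - span + 1)])
```

With k = 3, a dilation of 2 covers 5 input samples per output, and a stack of rates 1, 2, 5 covers every input position in its receptive field, which is the point of the hybrid schedule.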


Author(s):  
Aiping Yang ◽  
Haixin Wang ◽  
Zhong Ji ◽  
Yanwei Pang ◽  
Ling Shao

Recently, deep learning-based single-image dehazing has become a popular approach. However, existing dehazing approaches operate directly on the original hazy image, which easily results in image blurring and noise amplification. To address this issue, this paper proposes DPDP-Net (Dual-Path in Dual-Path network), a hierarchical dual-path framework. The first-level dual-path network consists of a Dehazing Network, responsible for haze removal in the structural layer, and a Denoising Network, which deals with noise in the textural layer. The second-level dual-path network lies inside the Dehazing Network and comprises an AL-Net (Atmospheric Light Network) and a TM-Net (Transmission Map Network): the AL-Net learns the non-uniform atmospheric light, while the TM-Net learns the transmission map that reflects the visibility of the scene. The final dehazed image is obtained by nonlinearly fusing the outputs of the Denoising Network and the Dehazing Network. Extensive experiments demonstrate that the proposed DPDP-Net achieves competitive performance against state-of-the-art methods on both synthetic and real-world images.


Author(s):  
Biao Duan ◽  
Jing Li ◽  
Huaimin Chen ◽  
Yi Ru ◽  
Ze Zhang

This paper focuses on the dehazing of a single image captured at nighttime. The current state-of-the-art nighttime dehazing approaches usually suffer from color shift, because the assumptions enforced under daytime conditions cannot be applied directly to nighttime images. Classical dehazing methods try to estimate the transmission map and an accurate atmospheric light to dehaze a single image. The basic idea here is to first separate the light layer from the hazy image, after which the transmission map can be computed. A new layer separation method is proposed to solve the non-global atmospheric light problem. The method is validated on several real datasets to show its superior performance.


Author(s):  
Megha Chhabra ◽  
Manoj Kumar Shukla ◽  
Kiran Kumar Ravulakollu

Latent fingerprints are unintentional finger-skin impressions left as ridge patterns at crime scenes. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene. Forensic investigators are in constant search of novel, effective technologies to capture and process low-quality images. The accuracy of the results depends on the quality of the image captured at the outset, the metrics used to assess that quality, and the level of enhancement subsequently required. Low-quality scanners, unstructured background noise, poor ridge quality, and overlapping structured noise result in the detection of false minutiae and hence reduce the recognition rate. Traditionally, image segmentation and enhancement are partially done manually with the help of highly skilled experts; with automated systems, images of varying and challenging quality can be investigated faster. This survey presents a comparative study of the segmentation techniques available for latent fingerprint forensics.

