Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5170 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Quoc-Hieu Nguyen ◽  
Tri Minh Ngo ◽  
Gi-Dong Lee ◽  
...  

Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches exist, an efficient method coupled with a fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation for facilitating real-time processing. In contrast to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. It therefore possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also yields a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified on a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
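The multiple-exposure fusion idea above can be sketched briefly. This is a minimal, hypothetical simplification (not the paper's actual pipeline): several pseudo-exposures are derived from the single hazy image via gamma correction and fused with per-pixel well-exposedness weights; the `gammas` and `sigma` parameters are illustrative assumptions.

```python
import numpy as np

def single_image_fusion_dehaze(img, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Sketch: derive pseudo-exposures from one hazy image via gamma
    correction, then fuse them with per-pixel well-exposedness weights
    (a Gaussian centered at mid-gray 0.5)."""
    img = np.clip(img.astype(np.float64), 0.0, 1.0)
    exposures = [img ** g for g in gammas]                 # pseudo-exposures
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * sigma ** 2)) for e in exposures]
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True) + 1e-12              # normalize weights
    fused = (w * np.stack(exposures)).sum(axis=0)
    return np.clip(fused, 0.0, 1.0)
```

A bright, washed-out (hazy-looking) region is pulled toward mid-gray because the darker gamma-2 exposure receives the largest well-exposedness weight there.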

2019 ◽  
Vol 9 (17) ◽  
pp. 3443 ◽  
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

This paper presents a fast and compact hardware implementation of an efficient haze removal algorithm. The algorithm employs a modified hybrid median filter to estimate the hazy particle map, which is subsequently subtracted from the hazy image to recover the haze-free image. Adaptive tone remapping is also used to widen the narrow dynamic range that results from haze removal. The computation error of the proposed hardware architecture relative to the floating-point algorithm is minimized. To ensure real-time hardware operation, the proposed architecture implements the modified hybrid median filter with the well-known Batcher's parallel sorting network. Hardware verification confirmed that high-resolution video standards were processed in real time for haze removal.
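For reference, a common (unmodified) 3×3 hybrid median filter can be sketched as follows; the paper's hardware variant differs in details and uses Batcher's sorting network, which a software `np.median` stands in for here. The output at each pixel is the median of three values: the median of the '+'-shaped neighbors, the median of the 'x'-shaped neighbors, and the center pixel itself.

```python
import numpy as np

def hybrid_median_3x3(img):
    """Sketch of a standard 3x3 hybrid median filter (single channel).
    Edge-replicating padding keeps the output the same size as the input."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    c = p[1:h+1, 1:w+1]                                   # center pixels
    plus = np.stack([p[0:h, 1:w+1], p[2:h+2, 1:w+1],      # up, down
                     p[1:h+1, 0:w], p[1:h+1, 2:w+2], c])  # left, right, center
    diag = np.stack([p[0:h, 0:w], p[0:h, 2:w+2],          # diagonal corners
                     p[2:h+2, 0:w], p[2:h+2, 2:w+2], c])
    m_plus = np.median(plus, axis=0)
    m_diag = np.median(diag, axis=0)
    return np.median(np.stack([m_plus, m_diag, c]), axis=0)
```

Like a plain median filter, it suppresses impulse noise, but the two directional sub-medians preserve thin lines and corners better, which matters when the filtered map is subtracted from the image.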


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5795 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

In recent years, machine vision algorithms have played an influential role as core technologies in several practical applications, such as surveillance, autonomous driving, and object recognition/localization. However, as almost all such algorithms assume clear weather conditions, their performance is severely affected by atmospheric turbidity. Several image visibility restoration algorithms have been proposed to address this issue and have proven to be a highly efficient solution. This paper proposes a novel method to recover clear images from degraded ones. To this end, the proposed algorithm uses a supervised machine learning-based technique to estimate the pixel-wise extinction coefficients of the transmission medium and a novel compensation scheme to rectify the post-dehazing false enlargement of white objects. In addition, a corresponding hardware accelerator implemented on a Field Programmable Gate Array chip is presented to facilitate real-time processing, a critical requirement of practical camera-based systems. Experimental results on both synthetic and real image datasets verified the proposed method's superiority over existing benchmark approaches. Furthermore, the hardware synthesis results revealed that the accelerator exhibits a processing rate of nearly 271.67 Mpixel/s, enabling it to process 4K videos at 30.7 frames per second in real time.
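Once a per-pixel transmission map is available (here, the paper estimates extinction coefficients with supervised learning), recovery follows the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 − t(x)). A minimal sketch of that final inversion step, assuming a given transmission map and atmospheric light:

```python
import numpy as np

def recover_scene_radiance(hazy, atmos_light, transmission, t_min=0.1):
    """Sketch of inverting the atmospheric scattering model,
    I(x) = J(x)t(x) + A(1 - t(x)), for the scene radiance J.
    The lower clamp t_min prevents noise amplification where haze is dense."""
    t = np.maximum(transmission, t_min)
    return np.clip((hazy - atmos_light) / t + atmos_light, 0.0, 1.0)
```

Synthesizing a hazy image from known J, A, and t and then inverting reproduces J exactly, which is a handy sanity check for any implementation.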


2011 ◽  
pp. 130-174
Author(s):  
Burak Ozer ◽  
Tiehan Lv ◽  
Wayne Wolf

This chapter focuses on real-time processing techniques for the reconstruction of visual information from multiple views and its analysis for human detection and gesture and activity recognition. It presents a review of the main components of three-dimensional visual processing techniques and visual analysis of multiple cameras, i.e., projection of three-dimensional models onto two-dimensional images and three-dimensional visual reconstruction from multiple images. It discusses real-time aspects of these techniques and shows how these aspects affect the software and hardware architectures. Furthermore, the authors present their multiple-camera system to investigate the relationship between the activity recognition algorithms and the architectures required to perform these tasks in real time. The chapter describes the proposed activity recognition method, which consists of a distributed algorithm and a data fusion scheme for two- and three-dimensional visual analysis, respectively. The authors analyze the available data independencies for this algorithm and discuss the potential architectures to exploit the parallelism resulting from these independencies.
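The first building block the chapter reviews, projection of three-dimensional models onto two-dimensional images, can be sketched with the textbook pinhole camera model. This is a generic illustration, not the chapter's specific system; `K`, `R`, and `t` denote the usual intrinsic matrix and camera pose.

```python
import numpy as np

def project_points(points_3d, K, R=None, t=None):
    """Sketch of pinhole projection: world points (N, 3) -> pixel
    coordinates (N, 2). K is the 3x3 intrinsic matrix; R (3x3) and
    t (3,) are the camera pose, defaulting to identity/zero."""
    R = np.eye(3) if R is None else R
    t = np.zeros(3) if t is None else t
    cam = points_3d @ R.T + t          # world -> camera coordinates
    pix = cam @ K.T                    # camera -> homogeneous pixel coords
    return pix[:, :2] / pix[:, 2:3]    # perspective divide
```

With multiple calibrated cameras, the inverse problem (triangulating 3D structure from several such projections) is the reconstruction step the chapter pairs with this one.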


2009 ◽  
Vol 36 (2) ◽  
pp. 307-311
Author(s):  
罗凤武 Luo Fengwu ◽  
王利颖 Wang Liying ◽  
涂霞 Tu Xia ◽  
陈厚来 Chen Houlai

2018 ◽  
Vol 8 (8) ◽  
pp. 1321 ◽  
Author(s):  
Minseo Kim ◽  
Soohwan Yu ◽  
Seonhee Park ◽  
Sangkeun Lee ◽  
Joonki Paik

This paper presents computationally efficient haze removal and image enhancement methods. The major contribution of the proposed research is two-fold: (i) an accurate atmospheric light estimation using principal component analysis, and (ii) learning-based transmission estimation. To reduce the computational cost, we impose a constraint on the candidate pixels to estimate the haze components in the sub-image. In addition, the proposed method extracts modified haze-relevant features to estimate an accurate transmission using a random forest. Experimental results show that the proposed method can provide high-quality results with a significantly reduced computational load compared with existing methods. In addition, we demonstrate that the proposed method can significantly enhance the contrast of low-light images, based on the assumed visual similarity between inverted low-light images and hazy images.
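The low-light result rests on the inversion trick mentioned at the end: an inverted low-light image statistically resembles a hazy one, so low-light enhancement reduces to invert, dehaze, invert back. A minimal sketch, where a simple per-pixel transmission estimate (with assumed A = 1 and an illustrative `omega`) stands in for the paper's random-forest estimator:

```python
import numpy as np

def enhance_low_light(img, omega=0.8, t_min=0.1):
    """Sketch of low-light enhancement via dehazing the inverted image.
    img: float RGB array in [0, 1], shape (H, W, 3)."""
    inv = 1.0 - np.clip(img.astype(np.float64), 0.0, 1.0)   # looks "hazy"
    # crude transmission from the per-pixel minimum over color channels
    t = np.maximum(1.0 - omega * inv.min(axis=-1, keepdims=True), t_min)
    dehazed = (inv - 1.0) / t + 1.0                          # invert model, A = 1
    return np.clip(1.0 - dehazed, 0.0, 1.0)                  # invert back
```

On a uniformly dark input, the pipeline brightens every pixel while staying within the valid intensity range.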


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3591 ◽  
Author(s):  
Haidi Zhu ◽  
Haoran Wei ◽  
Baoqing Li ◽  
Xiaobing Yuan ◽  
Nasser Kehtarnavaz

This paper addresses real-time moving object detection with high accuracy in high-resolution video frames. A previously developed framework for moving object detection is modified to enable real-time processing of high-resolution images. First, a computationally efficient method is employed, which detects moving regions on a resized image while maintaining moving regions on the original image through coordinate mapping. Second, a light backbone deep neural network is used in place of a more complex one. Third, the focal loss function is employed to alleviate the imbalance between positive and negative samples. The results of extensive experiments indicate that the modified framework developed in this paper achieves a processing rate of 21 frames per second with 86.15% accuracy on the SimitMovingDataset, which contains high-resolution images of size 1920 × 1080.
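The third modification uses the standard focal loss, FL(p_t) = −α_t(1 − p_t)^γ log(p_t), which down-weights well-classified (easy) samples so that the abundant easy negatives do not dominate training. A small sketch with the conventional defaults (α = 0.25, γ = 2):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Sketch of the binary focal loss.
    p: predicted probability of the positive class; y: label in {0, 1}.
    The factor (1 - p_t)**gamma shrinks the loss of easy samples."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    p_t = np.where(y == 1, p, 1.0 - p)          # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 the expression reduces to α-weighted cross-entropy, which makes the role of γ easy to verify numerically.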


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Mehdi Khoshboresh-Masouleh ◽  
Reza Shah-Hosseini

In this study, an essential application of remote sensing using deep learning functionality is presented. The Gaofen-1 satellite mission, developed by the China National Space Administration (CNSA) for the civilian high-definition Earth observation satellite program, provides near-real-time observations for geographical mapping, environment surveying, and climate change monitoring. Cloud and cloud shadow segmentation is a crucial element in enabling automatic near-real-time processing of Gaofen-1 images, and its performance must therefore be accurately validated. In this paper, a robust multiscale segmentation method based on deep learning is proposed to improve the efficiency and effectiveness of cloud and cloud shadow segmentation from Gaofen-1 images. The proposed method first builds feature maps from the spectral-spatial features of residual convolutional layers, and then extracts cloud/cloud-shadow footprints using a novel loss function to generate the final footprints. The experimental results using Gaofen-1 images demonstrate that the proposed method achieves better accuracy at a lower computational cost than two existing state-of-the-art cloud and cloud shadow segmentation methods.
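The residual convolutional layers that supply the spectral-spatial features follow the generic residual pattern, output = F(x) + x, where the block learns only a correction to its input. A single-channel toy sketch (the `conv3x3` here is a naive cross-correlation, as conventionally called "convolution" in CNNs; the paper's actual layers are multi-channel and trained):

```python
import numpy as np

def conv3x3(x, k):
    """Naive 'same' 3x3 cross-correlation for a single-channel image."""
    p = np.pad(x, 1)
    h, w = x.shape
    return sum(k[i, j] * p[i:i+h, j:j+w] for i in range(3) for j in range(3))

def residual_block(x, k):
    """Sketch of a residual layer: output = ReLU(F(x)) + x.
    The identity shortcut eases gradient flow through deep stacks."""
    return np.maximum(conv3x3(x, k), 0.0) + x
```

With a zero kernel the block is exactly the identity, which is the property that makes very deep residual stacks trainable.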


2013 ◽  
Vol 23 (3) ◽  
pp. 2500305-2500305 ◽  
Author(s):  
H Tan ◽  
M Walby ◽  
W Hennig ◽  
W Warburton ◽  
P Grudberg ◽  
...  

We have developed a digital signal processing module for real-time processing of time-division multiplexed data from SQUID-coupled transition-edge sensor microcalorimeter arrays. It is a 3U PXI card consisting of a standardized core processor board and a daughter board. Through fiber-optic links on its front panel, the daughter board receives time-division multiplexed data (comprising error and feedback signals) and clocks from the digital-feedback cards developed at the National Institute of Standards and Technology. After mixing the error signal with the feedback signal in a field-programmable gate array, the daughter board transmits demultiplexed data to the core processor. Real-time processing in the field-programmable gate array of the core processor board includes pulse detection, pileup inspection, pulse height computation, and histogramming into on-board spectrum memory. Data from up to 128 microcalorimeter pixels can be processed by a single module in real time. Energy spectra, waveform, and run statistics data can be read out in real time through the PCI bus by a host computer at a maximum rate of ~100 MB/s. The module's hardware architecture, mechanism for synchronizing with NIST's digital-feedback cards, and count rate capability are presented.
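The per-pixel processing chain (pulse detection, pileup inspection, pulse height computation, histogramming) can be illustrated in software. This is a deliberately simplified sketch, not the module's FPGA logic: pulses are detected by threshold crossing, pileup inspection here just rejects pulses closer than `dead_gap` samples, and the height is a simple maximum. All parameter names are hypothetical.

```python
import numpy as np

def process_pulses(trace, threshold, dead_gap, n_bins=64, vmax=64.0):
    """Sketch of a per-pixel chain: detect rising threshold crossings,
    reject piled-up pulses, compute pulse heights, histogram them into
    an energy spectrum of n_bins bins over [0, vmax)."""
    rising = np.where((trace[1:] >= threshold) & (trace[:-1] < threshold))[0] + 1
    heights, last = [], None
    for i, idx in enumerate(rising):
        nxt = rising[i + 1] if i + 1 < len(rising) else None
        # pileup inspection: reject pulses with a too-close neighbor
        if (last is not None and idx - last < dead_gap) or \
           (nxt is not None and nxt - idx < dead_gap):
            last = idx
            continue
        end = nxt if nxt is not None else len(trace)
        heights.append(trace[idx:end].max())      # pulse height computation
        last = idx
    spectrum, _ = np.histogram(heights, bins=n_bins, range=(0.0, vmax))
    return spectrum
```

In the actual module this chain runs in parallel in the FPGA for up to 128 pixels, with the host reading out the accumulated spectra over the PCI bus.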

