Anisotropic Diffusion for Details Enhancement in Multiexposure Image Fusion

2013 ◽  
Vol 2013 ◽  
pp. 1-18 ◽  
Author(s):  
Harbinder Singh ◽  
Vinay Kumar ◽  
Sunil Bhooshan

We develop a multiexposure image fusion method based on texture features, which exploits the edge-preserving and intraregion-smoothing properties of nonlinear diffusion filters based on partial differential equations (PDEs). Given the captured multiexposure image series, we first decompose each image into a base layer and a detail layer, which carry the large-scale structure and the fine details, respectively. The magnitude of the image intensity gradient is used to encourage smoothing in homogeneous regions in preference to inhomogeneous ones. Texture features of the base layer are then used to generate a mask (i.e., a decision mask) that guides the fusion of the base layers in multiresolution fashion. Finally, a well-exposed fused image is obtained by combining the fused base layer with the detail layers at each scale across all input exposures. The proposed algorithm skips the complex High Dynamic Range Image (HDRI) generation and tone-mapping steps and produces a detail-preserving image for display on standard dynamic range devices. Moreover, our technique is effective for blending flash/no-flash image pairs and multifocus images, that is, images focused on different targets.
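The base/detail decomposition described above can be sketched with the classic Perona-Malik anisotropic diffusion scheme, which is the standard PDE-based nonlinear diffusion the abstract refers to. This is a minimal illustration, not the paper's exact filter; the parameter values (`kappa`, `lam`, iteration count) are illustrative assumptions, and boundary handling here uses simple wrap-around for brevity.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion on a single-channel image in [0, 1].
    The edge-stopping function g(|grad|) = exp(-(|grad|/kappa)^2) makes
    diffusion strong in homogeneous regions and weak across edges.
    lam <= 0.25 keeps the explicit 4-neighbour scheme stable."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # forward differences toward the four cardinal neighbours
        # (np.roll wraps at the borders -- fine for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

def base_detail_split(img, **kw):
    """Base layer = diffused image; detail layer = residual, so that
    base + detail reconstructs the input exactly."""
    base = perona_malik(img, **kw)
    return base, img - base
```

Because the detail layer is defined as a residual, summing the two layers always recovers the original exposure, which is what lets the method reinject details after the base layers are fused.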

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Harbinder Singh ◽  
Vinay Kumar ◽  
Sunil Bhooshan

In this paper we propose a novel detail-enhancing exposure fusion approach using a nonlinear translation-variant filter (NTF). Given Standard Dynamic Range (SDR) images captured under different exposure settings, the fine details are first extracted using a guided filter. Next, the base layers (i.e., the images produced by the NTF) of all inputs are fused using a multiresolution pyramid; exposure, contrast, and saturation measures are used to generate a mask that guides this fusion. Finally, the fused base layer is combined with the extracted fine details to obtain the detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions without a High Dynamic Range Image (HDRI) representation or tone-mapping step. Moreover, we demonstrate that the proposed method is also suitable for multifocus image fusion without introducing artifacts.


2014 ◽  
Vol 2014 ◽  
pp. 1-18 ◽  
Author(s):  
Harbinder Singh ◽  
Vinay Kumar ◽  
Sunil Bhooshan

Many recent computational photography techniques help overcome the inability of standard digital cameras to handle the wide dynamic range of real-world scenes containing both brightly and poorly illuminated areas. In many of these techniques, it is desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. In this paper we propose a novel exposure fusion technique in which a Weighted Least Squares (WLS) optimization framework is utilized for weight map refinement. Computationally simple texture features (a detail layer extracted with the help of an edge-preserving filter) and a color saturation measure are used to quickly generate the weight maps that control the contribution from each image in the multiexposure input set. Instead of employing intermediate High Dynamic Range (HDR) reconstruction and tone-mapping steps, a well-exposed fused image is generated directly for display on conventional devices. A further advantage of the present technique is that it is well suited to multifocus image fusion. Simulation results are compared with a number of existing single-resolution and multiresolution techniques to show the benefits of the proposed scheme in a variety of cases.
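The weight-map construction described above can be sketched as follows: a per-pixel weight from a detail-magnitude proxy and a saturation measure, normalized across exposures. The WLS refinement step (a sparse linear solve) is omitted here for brevity, and the Laplacian detail proxy is an assumption standing in for the paper's edge-preserving-filter detail layer.

```python
import numpy as np

def weight_maps(imgs):
    """imgs: list of HxWx3 float arrays in [0, 1].
    Weight = |Laplacian| of luminance (detail proxy) * channel std-dev
    (saturation), normalized so weights sum to 1 at every pixel."""
    ws = []
    for im in imgs:
        gray = im.mean(axis=2)
        lap = np.abs(4 * gray
                     - np.roll(gray, 1, 0) - np.roll(gray, -1, 0)
                     - np.roll(gray, 1, 1) - np.roll(gray, -1, 1))
        sat = im.std(axis=2)          # crude color-saturation measure
        ws.append(lap * sat + 1e-12)  # epsilon avoids divide-by-zero
    w = np.stack(ws)
    return w / w.sum(axis=0, keepdims=True)

def fuse(imgs):
    """Per-pixel weighted average of the exposures (single-resolution)."""
    w = weight_maps(imgs)
    return sum(wi[..., None] * im for wi, im in zip(w, imgs))
```

In the actual method the raw weights would be smoothed by the WLS optimizer before blending, which suppresses the seam artifacts a naive per-pixel average can produce.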


Author(s):  
Jin Wang ◽  
Shenda Li ◽  
Qing Zhu

Abstract With a wider luminance range than conventional low dynamic range (LDR) images, high dynamic range (HDR) images are more consistent with the human visual system (HVS). Recently, the JPEG committee released a new HDR image compression standard, JPEG XT. It decomposes an input HDR image into a base layer and an extension layer. The base layer code stream provides JPEG (ISO/IEC 10918) backward compatibility, while the extension layer code stream helps to reconstruct the original HDR image. However, this method does not make full use of the HVS, wasting bits on regions imperceptible to human eyes. In this paper, a visual saliency-based HDR image compression scheme is proposed. The saliency map of the tone-mapped HDR image is first extracted and then used to guide the encoding of the extension layer, so that compression quality adapts to the saliency of the coding region. Extensive experimental results show that our method outperforms JPEG XT profiles A, B, and C and other state-of-the-art methods, while still offering JPEG compatibility.
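The core idea of saliency-guided encoding can be sketched as a per-block quality assignment: blocks with high mean saliency get a fine quality setting, background blocks a coarse one. The linear mapping and the quality range are illustrative assumptions, not the paper's rate-control rule.

```python
import numpy as np

def per_block_quality(saliency, block=8, q_min=30, q_max=95):
    """Map a saliency map in [0, 1] to a per-block encoder quality factor.
    Salient 8x8 blocks approach q_max (near-transparent quality);
    imperceptible regions drop toward q_min, saving bits."""
    h, w = saliency.shape
    qs = np.empty((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            s = saliency[by*block:(by+1)*block, bx*block:(bx+1)*block].mean()
            qs[by, bx] = round(q_min + (q_max - q_min) * s)
    return qs
```

In a JPEG XT-style codec this quality grid would modulate the quantization of the extension-layer residual only, leaving the backward-compatible base layer untouched.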


2017 ◽  
Vol 37 (4) ◽  
pp. 0410001
Author(s):  
都琳 Du Lin ◽  
孙华燕 Sun Huayan ◽  
王帅 Wang Shuai ◽  
高宇轩 Gao Yuxuan ◽  
齐莹莹 Qi Yingying

2021 ◽  
Author(s):  
Baori Zhang ◽  
Yonghua Shi ◽  
Yanxin Cui ◽  
Zishun Wang ◽  
Xiyin Chen

Abstract The high dynamic range present in arc welding with high energy density challenges most industrial cameras, causing badly exposed pixels in the captured images and making it difficult to detect features of the internal weld pool. This paper proposes a novel monitoring method called adaptive image fusion, which increases the amount of information contained in the welding image and can be realized on a common, low-cost industrial camera. It combines original images captured rapidly by the camera into one fused image, with the settings of these captures based on real-time analysis of the scene irradiance during welding. Experiments were carried out to find the operating window of the adaptive image fusion method, providing rules for obtaining a fused image with as much information as possible. A comparison of imaging with and without the proposed method shows that the fused image has a wider dynamic range and includes more useful features of the weld pool. The improvement is further verified by extracting both the internal and external features of the weld pool from a single fused image. The results show that the proposed method can adaptively expand the dynamic range of a visual monitoring system at low cost, which benefits feature extraction from the internal weld pool.
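A common way to combine rapidly captured frames of the same scene into one wider-dynamic-range image is a well-exposedness weighting in the style of Mertens et al.: each pixel contributes most from the frame where it is closest to mid-gray, so clipped highlights and crushed shadows are suppressed. This generic sketch stands in for the paper's adaptive scheme (whose exposure selection rules it does not reproduce), and `sigma` is an illustrative parameter.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Gaussian weight peaking at mid-gray 0.5: near-saturated or
    near-black pixels (badly exposed) receive weights close to zero."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(imgs, sigma=0.2):
    """Per-pixel weighted average of frames captured at different settings."""
    w = np.stack([well_exposedness(im, sigma) for im in imgs]) + 1e-12
    w /= w.sum(axis=0, keepdims=True)  # normalize weights per pixel
    return (w * np.stack(imgs)).sum(axis=0)
```

For weld-pool monitoring, the benefit is that arc-saturated pixels in the short exposure and dark background pixels in the long exposure each get replaced by the better-exposed alternative frame.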


2015 ◽  
Vol 731 ◽  
pp. 193-196
Author(s):  
Jie Li ◽  
Hai Wen Wang ◽  
Xi Xi He

Current HDR (High Dynamic Range) imaging faces the problem that HDR-capable display devices are expensive while common equipment has a low dynamic range; the research objective is therefore a method for capturing and displaying high dynamic range images with an ordinary camera. A general three-color camera is used to obtain three images of the same scene at different exposures, which are registered for translation and rotation step by step, coarse to fine, using a binary image pyramid. The HDR Darkroom and Photomatix software packages are then used to obtain a high dynamic range image with tone mapping and detail enhancement, and Photoshop is used for fine-tuning to produce the final image. Visual evaluation and instrumental measurements show that the synthesized high dynamic range images better reflect the brightness, detail, and color information of the scene, demonstrating the application value of the method.


Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 451
Author(s):  
Ming Fang ◽  
Xu Liang ◽  
Feiran Fu ◽  
Yansong Song ◽  
Zhen Shao

High-dynamic range imaging technology is an effective way to overcome the limitations of a camera's dynamic range. However, most current high-dynamic imaging techniques fuse multiple frames captured at different exposure levels, and such methods are prone to motion artifacts, detail loss, and edge effects. In this paper, we instead use a dual-channel camera that outputs two images with different gains simultaneously, and propose a semi-supervised network structure based on an attention mechanism to fuse the multiple gain images. The proposed network comprises encoding, fusion, and decoding modules. The U-Net structure is employed in the encoding module to extract as much of the important detail in the source images as possible, while the SENet attention mechanism assigns different weights to different feature channels to emphasize important features. The feature maps extracted by the encoding module are combined by the fusion module and then input to the decoding module for reconstruction, yielding the fused image. Experimental results indicate that the fused images obtained by the proposed method show clear details and high contrast, and that, compared with other methods, the proposed method improves fused image quality on several indicators.
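The SENet channel-attention mechanism mentioned above can be sketched in a few lines: squeeze each channel to a scalar by global average pooling, excite through a small two-layer bottleneck, and rescale the channels by the resulting sigmoid gates. This is a plain NumPy illustration of the generic SE block (Hu et al.), not the authors' trained network; the weight shapes and reduction ratio are assumptions.

```python
import numpy as np

def se_attention(feat, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention.
    feat: (C, H, W) feature maps; w1: (C, C//r), w2: (C//r, C)."""
    z = feat.mean(axis=(1, 2))                  # squeeze: one scalar per channel
    h = np.maximum(z @ w1 + b1, 0.0)            # excitation: ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # sigmoid gates in (0, 1)
    return feat * s[:, None, None]              # reweight each channel
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature channels relative to the rest.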

