Multi-Focus Image Fusion with Multi-Scale Transform Optimized by Metaheuristic Algorithms

2021 ◽  
Vol 38 (2) ◽  
pp. 247-259
Author(s):  
Asan Ihsan Abas ◽  
Nurdan Akhan Baykan

Most image capture devices have a single, limited focal plane, so objects at different distances appear with different degrees of focus in a single shot. Image fusion can be defined as obtaining multiple focused objects in one image by combining the important information from two or more source images. This paper presents a new multi-focus image fusion method that embeds the Bat Algorithm (BA) in a Multi-Scale Transform (MST) framework to overcome the limitations of standard MST fusion. First, a specific MST (Laplacian Pyramid or Curvelet Transform) is applied to the two source images to obtain their low-pass and high-pass bands. Second, the optimization algorithm searches for optimal weights for the low-pass-band coefficients to improve the accuracy of the fused image. Finally, the fused multi-focus image is reconstructed by the inverse MST. The experimental results are compared with those of other methods using both reference and no-reference evaluation metrics to evaluate the performance of the fusion methods.
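To make the pipeline concrete, here is a minimal Python sketch (OpenCV and NumPy) of the Laplacian-pyramid variant of the MST step: the low-pass bands are blended with weights w1 and w2, which in the paper would be found by the Bat Algorithm. The BA search itself is omitted and the weights are assumed given; the max-absolute high-pass rule and all function names are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Minimal sketch of the MST fusion step: Laplacian pyramids of two
# grayscale sources, low-pass bands blended with metaheuristic-found
# weights (assumed given here), high-pass bands fused by max-absolute.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)          # band-pass (detail) levels
    lp.append(gp[-1])                  # coarsest level = low-pass band
    return lp

def fuse_pyramids(lp_a, lp_b, w1, w2):
    # high-pass levels: keep the coefficient with the larger magnitude
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lp_a[:-1], lp_b[:-1])]
    # low-pass band: weighted blend, w1/w2 supplied by the optimizer
    fused.append(w1 * lp_a[-1] + w2 * lp_b[-1])
    return fused

def reconstruct(lp):
    img = lp[-1]
    for band in reversed(lp[:-1]):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(img, 0, 255).astype(np.uint8)
```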

Multi-focus image fusion has established itself as a useful tool for reducing the amount of raw data; it aims to overcome a camera's finite depth of field by combining information from multiple images of the same scene. Most existing fusion algorithms use multi-scale decomposition (MSD) to fuse the source images, and MSD-based algorithms perform much better than conventional fusion methods. In MSD-based fusion, the key problem is how to make full use of the characteristics of the coefficients. This paper proposes a modified contourlet transform (MCT) built from wavelets and nonsubsampled directional filter banks (NSDFB). The image is decomposed in the wavelet domain, and each high-pass wavelet subband is further decomposed into multiple directional subbands by the NSDFB, so the MCT offers both directionality and translation invariance. The MCT is then combined with a novel region-energy strategy to form the fusion algorithm. Simulation results show that the proposed method improves the fusion results both visually and in terms of objective evaluation metrics.
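The region-energy selection rule is straightforward to sketch. The snippet below applies it to ordinary wavelet high-pass subbands via PyWavelets; the NSDFB directional stage has no off-the-shelf Python implementation and is therefore omitted, and the window size is an assumed parameter.

```python
# Illustrative region-energy fusion on wavelet subbands (the NSDFB
# directional stage of the MCT is omitted for lack of a standard
# Python implementation; window size 3 is an assumption).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def region_energy(band, size=3):
    # windowed sum of squared coefficients around each position
    return uniform_filter(band ** 2, size=size)

def fuse_region_energy(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]             # approximation: average
    for subs_a, subs_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(region_energy(sa) >= region_energy(sb), sa, sb)
            for sa, sb in zip(subs_a, subs_b)))  # (LH, HL, HH) per level
    return pywt.waverec2(fused, wavelet)
```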


Recent developments in information technology have made it possible to extract an ocean of knowledge from input images. This knowledge extraction can be performed through operations such as image segmentation, whose major objective here is to separate focused from non-focused regions in an input image. The depth of field of optical lenses is limited: a camera focuses only on objects that lie within its depth of field, while the remaining objects appear out of focus or blurry. For image processing, it is a general requirement that an input image be all-in-focus. Almost every domain, including medical imaging, weapon and aircraft detection, digital photography, and agricultural imaging, requires an all-in-focus input image. Image fusion is a process that combines two or more input images to create a complementary, all-in-focus fused image. It is considered a challenging task because of the irregular boundaries between focused and non-focused regions. Multiple studies in the literature have addressed this issue and reported promising results in creating a fully focused fused image, considering different features to identify focused and non-focused regions. For better estimation of focused and non-focused regions, an ensemble of multiple features, such as shape- and texture-based features, can be employed; furthermore, optimal weights must be assigned to each feature when creating the fused image. The focus of this study is multi-focus image fusion using an ensemble of multiple local features with weights optimized by a genetic algorithm. For this experimentation, nine multi-focus image datasets are collected, each consisting of a pair of multi-focused images. The reason for this selection is twofold: the datasets are publicly available, and they contain different types of multi-focus images. For reconstruction of a fully focused fused image, an ensemble of shape- and texture-based features, namely the Sobel operator, the Laplacian operator, and the Local Binary Pattern, is employed along with optimal weights obtained by a genetic algorithm. The experimental results indicate an improvement over previous fusion methods.
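A sketch of the feature ensemble follows: Sobel, Laplacian, and LBP responses are normalized and combined with weights that the genetic algorithm would tune. The GA loop itself is omitted; the default weights, function names, and per-pixel decision rule are illustrative assumptions, and grayscale inputs are assumed.

```python
# Ensemble focus measure: weighted combination of Sobel, Laplacian,
# and LBP responses; the weights stand in for GA-optimized values.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def focus_map(img, weights):
    w_sobel, w_lap, w_lbp = weights
    g = img.astype(np.float32)
    sobel = (np.abs(cv2.Sobel(g, cv2.CV_32F, 1, 0))
             + np.abs(cv2.Sobel(g, cv2.CV_32F, 0, 1)))
    lap = np.abs(cv2.Laplacian(g, cv2.CV_32F))
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform").astype(np.float32)
    # normalize each feature map to [0, 1] before weighting
    feats = [f / (f.max() + 1e-8) for f in (sobel, lap, lbp)]
    return w_sobel * feats[0] + w_lap * feats[1] + w_lbp * feats[2]

def fuse_pair(img_a, img_b, weights=(0.4, 0.4, 0.2)):
    # per-pixel decision map: take each pixel from the sharper source
    mask = focus_map(img_a, weights) >= focus_map(img_b, weights)
    return np.where(mask, img_a, img_b)
```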


2021 ◽  
pp. 1-1
Author(s):  
Jun Chen ◽  
Xuejiao Li ◽  
Linbo Luo ◽  
Jiayi Ma

2011 ◽  
Vol 145 ◽  
pp. 119-123
Author(s):  
Ko Chin Chang

For a general image capture device, it is difficult to obtain an image with every object in focus. To address the fusion of multiple same-viewpoint images taken with different focal settings, a novel image fusion algorithm based on the local energy pattern (LGP) is proposed in this paper. First, each focused image is decomposed separately using the discrete wavelet transform (DWT). Second, the LGP is calculated from each pixel and its surrounding pixels, and is then used to compute the new coefficient at that pixel from the transformed images according to the proposed weighted fusion rules, which apply different operations to the low-band and high-band coefficients. Finally, the fused image is reconstructed from the new subband coefficients and represents the captured scene in greater detail. Experimental results demonstrate that the scheme outperforms traditional discrete cosine transform (DCT) and discrete wavelet transform (DWT) methods in both visual perception and quantitative analysis.
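The coefficient-selection idea can be sketched with a generic neighbourhood-energy measure, as below: averaging in the low band and energy-weighted blending in the high bands. This simplifies the paper's LGP weighting to a 3x3 local-energy measure, which is an assumption rather than the exact rule.

```python
# Sketch of neighbourhood-energy coefficient fusion in the DWT domain:
# averaging in the low band, energy-weighted blending in the high
# bands (a generic stand-in for the paper's LGP rule).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_energy(band):
    return uniform_filter(band.astype(np.float64) ** 2, size=3)

def fuse_dwt(img_a, img_b, wavelet="haar"):
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    low = 0.5 * (ca + cb)                    # low band: average rule
    highs = []
    for xa, xb in ((ha, hb), (va, vb), (da, db)):
        ea, eb = local_energy(xa), local_energy(xb)
        w = ea / (ea + eb + 1e-12)           # energy-proportional weight
        highs.append(w * xa + (1 - w) * xb)  # high bands: weighted blend
    return pywt.idwt2((low, tuple(highs)), wavelet)
```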


Author(s):  
Chengfang Zhang

Multifocus image fusion can obtain an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed for SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied in the low-pass fusion, while the high-pass bands are fused using the popular “max-absolute” rule as the activity level measurement. The fused image is finally obtained by performing inverse MST on the fused coefficients. The experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition.
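Of the two stages, the max-absolute high-pass rule is simple enough to sketch. The snippet below shows it inside a wavelet MST pipeline; a plain average stands in for the CSR low-pass fusion, which is the paper's actual contribution and is not reproduced here.

```python
# Max-absolute high-pass fusion in a wavelet MST pipeline; the plain
# average on the approximation band is a placeholder for the CSR step.
import numpy as np
import pywt

def fuse_max_abs(img_a, img_b, wavelet="db4", level=3):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]          # stand-in for CSR low-pass fusion
    for subs_a, subs_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(sa) >= np.abs(sb), sa, sb)  # max-absolute rule
            for sa, sb in zip(subs_a, subs_b)))
    return pywt.waverec2(fused, wavelet)
```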


IEEE Access ◽  
2017 ◽  
Vol 5 ◽  
pp. 14898-14913 ◽  
Author(s):  
Yong Yang ◽  
Song Tong ◽  
Shuying Huang ◽  
Pan Lin ◽  
Yuming Fang

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method for combining the focused parts of several source multi-focus images into a single all-in-focus image. The key to this problem is detecting the focus regions accurately, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. First, the method applies two structurally complementary decomposition schemes, one large-scale and one small-scale, to perform a two-scale, double-layer singular value decomposition of each image, yielding low-frequency and high-frequency components. The low-frequency components are then fused by a rule that combines local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, with different detail features selected as the external stimulus for the PA-PCNN according to the feature information contained in each decomposition layer. Finally, the structurally complementary two-scale decompositions and the fused high- and low-frequency components yield two initial decision maps with complementary information; refining these initial decision maps produces the final fusion decision map that completes the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately for both registered and unregistered images, with subjective and objective evaluation scores slightly better than those of existing methods.
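The low-frequency rule, combining local energy with edge energy, can be isolated in a short sketch. The two-scale SVD decomposition and the PA-PCNN stage are far more involved and are not reproduced; the window size, the Sobel-based edge energy, and the mixing weight alpha are assumptions.

```python
# Low-frequency fusion rule only: per-pixel activity = blend of local
# energy and (Sobel-based) edge energy; the larger activity wins.
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def activity(low_band, alpha=0.5):
    g = low_band.astype(np.float32)
    local_energy = uniform_filter(g ** 2, size=3)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1)
    edge_energy = gx ** 2 + gy ** 2
    return alpha * local_energy + (1 - alpha) * edge_energy

def fuse_low_frequency(low_a, low_b):
    return np.where(activity(low_a) >= activity(low_b), low_a, low_b)
```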


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 24
Author(s):  
Yan-Tsung Peng ◽  
He-Hao Liao ◽  
Ching-Fu Chen

In contrast to conventional digital images, high-dynamic-range (HDR) images cover a broader intensity range between the darkest and brightest regions, capturing more details of a scene. Such images are produced by fusing images of the same scene taken with different exposure values (EVs). Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed at small EV intervals. However, with emerging spatially multiplexed exposure technology, which captures a short-exposure and a long-exposure image simultaneously, it has become essential to handle two-exposure image fusion. To bring out more well-exposed content, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), which yields better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, a case that other state-of-the-art fusion methods cannot handle. The experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
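The idea of the intermediate virtual image can be sketched with a simple adaptive gamma rule that pushes the mean brightness toward mid-gray. This heuristic is a generic stand-in and is not the paper's OAGC, whose joint optimization of contrast, saturation, and well-exposedness is not reproduced here.

```python
# Build an intermediate "virtual" exposure by adaptive gamma
# correction: choose gamma so the mean brightness maps toward 0.5.
# This is a generic heuristic, not the paper's OAGC.
import numpy as np

def virtual_exposure(img_uint8):
    x = img_uint8.astype(np.float32) / 255.0
    mean = float(x.mean())
    gamma = np.log(0.5) / np.log(mean + 1e-6)   # <1 brightens, >1 darkens
    gamma = float(np.clip(gamma, 0.2, 5.0))     # clamp extreme exposures
    return (np.power(x, gamma) * 255.0).astype(np.uint8)
```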


2019 ◽  
Vol 9 (17) ◽  
pp. 3612
Author(s):  
Liao ◽  
Chen ◽  
Mo

Because the focal length of the optical lens in a conventional camera is limited, it is usually difficult to obtain an image in which every object is in focus. Multi-focus image fusion solves this problem. In this paper, we propose an entirely new multi-focus image fusion method based on a decision map and sparse representation (DMSR). First, we obtain a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and processing uncertain areas with spatial frequency methods. The decision map then determines the transitional area around the focus boundary, and we fuse this transitional area using sparse representation. The experimental results show that the proposed method is superior to five other fusion methods in both visual effect and quantitative evaluation.
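Spatial frequency, used above to resolve uncertain areas, has a standard definition that is easy to sketch; the block-wise application and block size are left as assumptions.

```python
# Spatial frequency of an image block: RMS of horizontal (row) and
# vertical (column) first differences, combined in quadrature; a
# higher value indicates a sharper, better-focused block.
import numpy as np

def spatial_frequency(block):
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```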

