Multiscale Image Matting Based Multi-Focus Image Fusion Technique

Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 472 ◽  
Author(s):  
Sarmad Maqsood ◽  
Umer Javed ◽  
Muhammad Mohsin Riaz ◽  
Muhammad Muzammil ◽  
Fazal Muhammad ◽  
...  

Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image eliminates the out-of-focus regions, and the resultant image contains the sharp and focused regions. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information, and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed image fusion approach, the multi-focus source images are first refined by an image enhancement algorithm so that the intensity distribution is improved for superior visualization. An edge detection method based on the spatial gradient is employed to obtain edge information from the contrast-stretched images. This improved edge information is further utilized by a multiscale window technique to produce local and global activity maps. Furthermore, a trimap and decision maps are obtained from the information provided by these near- and far-focus activity maps. Finally, the fused image is achieved by using the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of the spatial consistency and the correlation among source images and, therefore, obtains superior performance at object boundaries compared to region-based methods. The performance of the proposed method is compared with that of some of the latest techniques through qualitative and quantitative evaluation.
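
The core decision step described above — contrast stretching followed by a spatial-gradient edge map that decides which source is in focus at each pixel — can be sketched as follows. This is an illustrative NumPy outline under simplified assumptions (central-difference gradients, a per-pixel rather than multiscale decision), not the authors' implementation:

```python
import numpy as np

def contrast_stretch(img):
    """Min-max stretch intensities to the full [0, 1] range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def spatial_gradient(img):
    """Gradient magnitude from central differences (edge-strength map)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# Two toy multi-focus inputs: each has a sharp edge in a different half.
a = np.zeros((8, 8)); a[:, :2] = 1.0   # sharp structure in the left half
b = np.zeros((8, 8)); b[:, 6:] = 1.0   # sharp structure in the right half

ea = spatial_gradient(contrast_stretch(a))
eb = spatial_gradient(contrast_stretch(b))
# The source with the stronger local gradient is taken as in-focus there.
decision = ea >= eb
fused = np.where(decision, a, b)
```

In the paper the decision is refined through multiscale activity maps, a trimap, and matting; the sketch keeps only the gradient-based selection.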

Diagnostics ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 904
Author(s):  
Shah Rukh Muzammil ◽  
Sarmad Maqsood ◽  
Shahab Haider ◽  
Robertas Damaševičius

Technology-assisted clinical diagnosis has gained tremendous importance in modern-day healthcare systems. To this end, multimodal medical image fusion has gained great attention from the research community. There are several fusion algorithms that merge Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in the source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and a fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms.
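
The role of a decision map in fusing CT and MR slices can be illustrated with a much simpler stand-in: below, a local-activity map (gradient energy over a small window) replaces CSID's convolutional sparse coding, purely to show how a per-pixel decision map drives the final composition. All names and data here are illustrative:

```python
import numpy as np

def local_activity(img, k=3):
    """Local energy of the gradient magnitude over a k x k window."""
    gy, gx = np.gradient(img.astype(np.float64))
    e = gx ** 2 + gy ** 2
    pad = k // 2
    ep = np.pad(e, pad, mode="edge")
    h, w = e.shape
    return np.array([[ep[i:i + k, j:j + k].sum() for j in range(w)]
                     for i in range(h)])

def fuse(ct, mr):
    """Per pixel, keep the modality with the higher local activity."""
    d = local_activity(ct) >= local_activity(mr)
    return np.where(d, ct, mr)

ct = np.zeros((6, 6)); ct[:3, :] = 1.0   # toy high-contrast "bone" structure
mr = np.zeros((6, 6)); mr[:, :3] = 0.5   # toy low-contrast "soft tissue"
f = fuse(ct, mr)
```

CSID's actual decomposition and sparse coding are far richer; the sketch only conveys the decision-map idea.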


2013 ◽  
Vol 401-403 ◽  
pp. 1381-1384 ◽  
Author(s):  
Zi Juan Luo ◽  
Shuai Ding

It is often difficult to obtain an image that contains all relevant objects in focus, because of the limited depth of focus of optical lenses. Multi-focus image fusion can solve this problem effectively. The Nonsubsampled Contourlet transform (NSCT) offers multiple directions and multiple scales; when it is introduced to image fusion, the characteristics of the original images are captured better and more information is available for fusion. A new multi-focus image fusion method based on the NSCT with a region-statistics fusion rule is proposed in this paper. First, the differently focused images are decomposed using the NSCT. Then the low-frequency subbands are fused using a weighted average, while the high-frequency subbands are fused using the region-statistics rule. Next, the fused image is obtained by the inverse NSCT. Finally, the experimental results are presented and compared with those of a method based on the Contourlet transform. Experiments show that the proposed approach achieves better results than the Contourlet-based method.
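
Assuming the NSCT subbands have already been produced by a suitable library (the transform itself is out of scope here), the two fusion rules — a weighted average for the low-frequency subband and region-statistics selection for the high-frequency subbands — can be sketched in NumPy; the function names are illustrative:

```python
import numpy as np

def region_energy(band, k=3):
    """Sum of squared coefficients over a k x k neighbourhood."""
    pad = k // 2
    sq = np.pad(band.astype(np.float64) ** 2, pad, mode="edge")
    h, w = band.shape
    return np.array([[sq[i:i + k, j:j + k].sum() for j in range(w)]
                     for i in range(h)])

def fuse_low(low_a, low_b):
    """Low-frequency subbands: simple weighted average."""
    return 0.5 * low_a + 0.5 * low_b

def fuse_high(high_a, high_b):
    """High-frequency subbands: keep the coefficient with larger region energy."""
    mask = region_energy(high_a) >= region_energy(high_b)
    return np.where(mask, high_a, high_b)

rng = np.random.default_rng(0)
ha, hb = rng.normal(size=(8, 8)), np.zeros((8, 8))
fused_high = fuse_high(ha, hb)   # ha wins everywhere: hb has zero energy
```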


Author(s):  
Rajesh Dharmaraj ◽  
Christopher Durairaj Daniel Dharmaraj

Image fusion is used to improve the quality of images by combining two images of the same scene obtained by different techniques. The present work deals with the effective extraction of pixel information from the source images, which holds the key to multi-focus image fusion. A purely vicinity-based image matting algorithm, which relies on nearby pixel clusters in the input images and their trimap, is presented in this article. The pixel cluster size N plays a significant role in deciding the identity of an unknown pixel. The distance of each unknown pixel from the foreground and background pixel clusters is computed using the minimum quasi-Euclidean distance. The minimum distance ratio gives the alpha value of each unknown pixel in the image. Finally, the focus regions are blended together to obtain the resultant fused image. On assessing the results visually and objectively, it is concluded that the proposed method performs better at extracting the focused pixels and improving fusion quality than other existing fusion methods.
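
The alpha computation described above reduces to a distance ratio: for each unknown pixel, alpha = dB / (dF + dB), where dF and dB are its minimum distances to the foreground and background clusters. A minimal sketch, using plain absolute distance on scalar intensities in place of the quasi-Euclidean distance (all names illustrative):

```python
import numpy as np

def alpha_from_clusters(unknown, fg_cluster, bg_cluster):
    """alpha = dB / (dF + dB): minimum distance to each cluster decides
    how strongly an unknown pixel belongs to the foreground."""
    u = np.asarray(unknown, dtype=np.float64)[:, None]
    d_fg = np.abs(u - np.asarray(fg_cluster, dtype=np.float64)).min(axis=1)
    d_bg = np.abs(u - np.asarray(bg_cluster, dtype=np.float64)).min(axis=1)
    return d_bg / (d_fg + d_bg + 1e-12)   # epsilon avoids division by zero

# Unknown pixels at intensities 0.2 and 0.8; foreground ~1.0, background ~0.0.
alpha = alpha_from_clusters([0.2, 0.8], fg_cluster=[1.0, 0.9], bg_cluster=[0.0, 0.1])
```

Pixels near the foreground cluster get alpha close to 1, pixels near the background cluster close to 0.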


Multi-focus image fusion is the process of integrating pictures of the same scene with different focus targets into one image. Directly capturing an all-in-focus image of a 3D scene is challenging, so many multi-focus image fusion techniques generate it from several images focused at diverse depths. The two important factors in image fusion are the activity level measurement and the fusion rule. The activity level measurement is usually implemented by designing local filters to extract high-frequency details, and elaborately designed rules then compare the clarity information of the different source images to obtain a clarity/focus map. Earlier fusion algorithms thus extract high-frequency details with neighborhood filters and adopt various fusion conventions to achieve the fused image; however, the performance of these prevailing techniques is hardly adequate. Convolutional neural networks have recently been used to solve the multi-focus image fusion problem. This paper proposes a two-stage boundary-aware approach based on deep neural networks to address the issue: (1) a deep network is suggested for extracting the full defocus information of the two source images; and (2) Inception ResNet v2 is used to handle the patches far from and close to the focused/defocused boundary. The results illustrate that the approach produces an agreeable fused image, which is superior to some of the advanced fusion algorithms in both graphical and objective evaluations.
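
The two-stage idea — classify most patch pairs cheaply, and defer only ambiguous boundary patches to a second, finer stage — can be caricatured without any deep network. Here a variance-of-Laplacian focus measure stands in for the paper's CNNs, purely to illustrate the control flow; nothing below comes from the paper's actual architecture:

```python
import numpy as np

def focus_measure(patch):
    """Variance of a discrete Laplacian: high for sharp patches."""
    p = patch.astype(np.float64)
    lap = (-4 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()

def classify_patch(pa, pb, margin=1e-3):
    """Stage 1: label a patch pair as 'a', 'b', or 'boundary'.
    Pairs whose focus scores are too close are deferred to a second,
    finer stage (a deep classifier in the paper)."""
    fa, fb = focus_measure(pa), focus_measure(pb)
    if abs(fa - fb) < margin:
        return "boundary"
    return "a" if fa > fb else "b"

sharp = np.tile(np.array([[0.0, 1.0], [1.0, 0.0]]), (4, 4))  # high-frequency
blurred = np.full((8, 8), 0.5)                               # flat, defocused
label = classify_patch(sharp, blurred)
```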


2021 ◽  
Vol 38 (3) ◽  
pp. 607-617
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Nowadays, multimodal image fusion is widely used as an important processing tool in various image-related applications. Different sensors have been developed for capturing useful information, chiefly infrared (IR) and visible (VI) image sensors; fusing the outputs of both provides better and more accurate scene information. The major application areas of such fused images are military, surveillance, and remote sensing. For better identification of targets and to understand overall scene information, the fused image has to provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpen filter and a morphological hat transform are then applied separately to the resized IR and VI images. The DWT is used to produce low-frequency and high-frequency sub-bands. A filter-based mean-weighted fusion rule and a filter-based max-weighted fusion rule are newly introduced in this algorithm for combining the low-frequency and high-frequency sub-bands, respectively. The fused image is reconstructed with the IDWT. The proposed method outperforms similar existing techniques both subjectively and objectively.
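
The pre-processing stage — a 3×3 sharpen filter plus a white top-hat transform that extracts small bright details — can be sketched directly in NumPy. This is a naive, loop-based version for clarity (a flat 3×3 structuring element is assumed; the paper does not specify one), not the authors' implementation:

```python
import numpy as np

SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float64)

def conv3(img, kernel):
    """3x3 filtering with edge padding (kernel is symmetric, so
    correlation and convolution coincide)."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    return np.array([[(p[i:i + 3, j:j + 3] * kernel).sum() for j in range(w)]
                     for i in range(h)])

def tophat(img, k=3):
    """White top-hat: image minus its grey opening (erosion, then dilation)."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    ero = np.array([[p[i:i + k, j:j + k].min() for j in range(w)] for i in range(h)])
    pe = np.pad(ero, pad, mode="edge")
    opened = np.array([[pe[i:i + k, j:j + k].max() for j in range(w)] for i in range(h)])
    return img - opened

spot = np.zeros((6, 6)); spot[3, 3] = 1.0   # small bright detail
detail = tophat(spot)                       # the top-hat keeps it intact
```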


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 247
Author(s):  
Areeba Ilyas ◽  
Muhammad Shahid Farid ◽  
Muhammad Hassan Khan ◽  
Marcin Grzegorzek

Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that proposes to group the local connected pixels with similar colors and patterns, usually referred to as superpixels, and use them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is ensured on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
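
The superpixel-based focus decision can be approximated with square grid blocks standing in for true superpixels (a real implementation would use an oversegmentation such as SLIC), with simple variance as the per-region statistic. An illustrative NumPy sketch, not the authors' method:

```python
import numpy as np

def block_focus_map(a, b, bs=4):
    """Grid blocks stand in for superpixels; each block is labelled focused
    in whichever image has the higher local variance."""
    h, w = a.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            pa, pb = a[i:i + bs, j:j + bs], b[i:i + bs, j:j + bs]
            mask[i:i + bs, j:j + bs] = pa.var() >= pb.var()
    return mask

rng = np.random.default_rng(1)
a = np.zeros((8, 8)); a[:, :4] = rng.normal(size=(8, 4))  # textured (sharp) left
b = np.zeros((8, 8)); b[:, 4:] = rng.normal(size=(8, 4))  # textured (sharp) right
mask = block_focus_map(a, b)
fused = np.where(mask, a, b)
```

The paper additionally refines the initial focus map with a spatial-consistency constraint, which this sketch omits.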


2019 ◽  
Vol 28 (4) ◽  
pp. 505-516
Author(s):  
Wei-bin Chen ◽  
Mingxiao Hu ◽  
Lai Zhou ◽  
Hongbin Gu ◽  
Xin Zhang

Abstract Multi-focus image fusion means fusing a set of images of the same scene, captured under the same imaging conditions but with different focus points, into one completely clear image. In order to obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. First, the multi-focus images are decomposed by the wavelet transform. Second, the wavelet coefficients of the approximation and detail sub-images are fused based on the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and a weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective at retaining image detail.
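
The decompose–fuse–reconstruct loop can be sketched with a one-level 2D Haar transform; a max-magnitude rule on the detail subbands stands in for the paper's weighted-ratio/weighted-gradient rule. An illustrative simplification, not the authors' filters:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform: approximation + 3 detail subbands."""
    x = img.astype(np.float64)
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    H, W = a.shape
    x = np.zeros((2 * H, 2 * W))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a + h - v - d
    x[1::2, 0::2] = a - h + v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse_wavelet(img1, img2):
    """Average the approximation band; keep the larger-magnitude detail
    coefficient from either source."""
    s1, s2 = haar2(img1), haar2(img2)
    a = 0.5 * (s1[0] + s2[0])
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(s1[1:], s2[1:])]
    return ihaar2(a, *details)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
rec = ihaar2(*haar2(img))   # round trip is exact
```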


2010 ◽  
Vol 44-47 ◽  
pp. 1982-1986
Author(s):  
Wen Cheng Wang ◽  
Fa Liang Chang

A simple but efficient algorithm for multi-focus image fusion, realized with Laplacian pyramid decomposition, is proposed in this paper. First, a Laplacian pyramid is constructed for each source image separately; then each level of the new Laplacian pyramid is fused by adopting a simple fusion rule. Finally, the fused image is obtained by the inverse Laplacian pyramid transform. Experiments show that this method is effective and achieves fine fusion results for multi-focus images.
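
The pyramid build/fuse/rebuild cycle can be sketched as follows. Here 2×2 block averaging replaces the usual Gaussian filtering, which keeps the round trip exactly invertible; this is an illustrative simplification, not the paper's filters:

```python
import numpy as np

def down(img):
    """2x2 block-average downsampling."""
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4

def up(img):
    """Nearest-neighbour 2x upsampling."""
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels=3):
    """Each level stores the detail lost by one down/up round trip."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))
        cur = small
    pyr.append(cur)   # coarsest residual
    return pyr

def reconstruct(pyr):
    """Inverse Laplacian pyramid transform."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = up(cur) + lap
    return cur

def fuse_pyramids(p1, p2):
    """Max-magnitude rule on detail levels, average on the residual."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(p1[:-1], p2[:-1])]
    fused.append(0.5 * (p1[-1] + p2[-1]))
    return fused

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 16))
p = laplacian_pyramid(x)
rec = reconstruct(p)
```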


2010 ◽  
Vol 121-122 ◽  
pp. 373-378 ◽  
Author(s):  
Jia Zhao ◽  
Li Lü ◽  
Hui Sun

According to the different frequency regions produced by the shearlet transform decomposition, the selection principles for the lowpass and highpass subbands are discussed respectively. The lowpass subband coefficients of the fused image are obtained by means of a fusion rule based on region variance, while the highpass subband coefficients are selected by means of a fusion rule based on region energy. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach provides more satisfactory fusion outcomes.
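
Assuming the shearlet subbands come from an external library, the two selection rules — region variance for the lowpass subband and region energy for the highpass subbands — share one pattern: compute a per-pixel window statistic and keep the coefficient from whichever source scores higher. A NumPy sketch with illustrative names:

```python
import numpy as np

def region_stat(band, k=3, stat="var"):
    """Per-pixel statistic (variance or energy) over a k x k window."""
    pad = k // 2
    p = np.pad(band.astype(np.float64), pad, mode="edge")
    h, w = band.shape
    f = np.var if stat == "var" else (lambda x: (x ** 2).sum())
    return np.array([[f(p[i:i + k, j:j + k]) for j in range(w)]
                     for i in range(h)])

def fuse_subband(ba, bb, stat):
    """Keep, per pixel, the coefficient from the band with the larger statistic."""
    return np.where(region_stat(ba, stat=stat) >= region_stat(bb, stat=stat), ba, bb)

# Lowpass: region-variance rule; highpass would use stat="energy".
la, lb = np.arange(16.0).reshape(4, 4), np.full((4, 4), 3.0)
low_fused = fuse_subband(la, lb, "var")   # la wins: lb is constant (zero variance)
```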


2021 ◽  
pp. 1-1
Author(s):  
Jun Chen ◽  
Xuejiao Li ◽  
Linbo Luo ◽  
Jiayi Ma
