The Generalised Image Fusion Toolkit (GIFT)

2006 ◽  
Author(s):  
Dan Mueller

Image fusion provides a mechanism to combine multiple images into a single representation to aid human visual perception and image processing tasks. Such algorithms endeavour to create a fused image containing the salient information from each source image, without introducing artefacts or inconsistencies. Image fusion is applicable to numerous fields, including defence systems, remote sensing and geoscience, robotics and industrial engineering, and medical imaging. In the medical imaging domain, image fusion may aid diagnosis and surgical planning tasks requiring the segmentation, feature extraction, and/or visualisation of multi-modal datasets. This paper discusses the implementation of an image fusion toolkit built upon the Insight Toolkit (ITK). Based on an existing architecture, the proposed framework (GIFT) offers a ‘plug-and-play’ environment for the construction of n-D multi-scale image fusion methods. We give a brief overview of the toolkit design and demonstrate how to construct image fusion algorithms from low-level components (such as multi-scale methods and feature generators). A number of worked examples for medical applications are presented in Appendix A, including quadrature mirror filter discrete wavelet transform (QMF DWT) image fusion.
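A minimal Python sketch of the kind of composition this ‘plug-and-play’ design describes: a multi-scale method (here a plain 2-D DWT via PyWavelets, standing in for the QMF DWT) paired with a max-absolute fusion rule. The class and function names are illustrative only and are not GIFT's actual C++/ITK API.

```python
import numpy as np
import pywt

def max_abs_rule(coeff_a, coeff_b):
    # Feature/fusion rule: keep the coefficient with the larger magnitude.
    return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)

class MultiScaleFusion:
    """Composes a multi-scale method (2-D DWT) with a pluggable fusion rule."""
    def __init__(self, wavelet="db2", levels=3, rule=max_abs_rule):
        self.wavelet, self.levels, self.rule = wavelet, levels, rule

    def fuse(self, img_a, img_b):
        ca = pywt.wavedec2(img_a, self.wavelet, level=self.levels)
        cb = pywt.wavedec2(img_b, self.wavelet, level=self.levels)
        fused = [0.5 * (ca[0] + cb[0])]                  # average the approximation band
        for da, db in zip(ca[1:], cb[1:]):               # per-level detail sub-bands
            fused.append(tuple(self.rule(a, b) for a, b in zip(da, db)))
        return pywt.waverec2(fused, self.wavelet)
```

Swapping in a different wavelet, pyramid, or fusion rule changes the behaviour without touching the pipeline, which is the essence of the component-based design described above.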

2011 ◽  
Vol 255-260 ◽  
pp. 2072-2076
Author(s):  
Yi Yong Han ◽  
Jun Ju Zhang ◽  
Ben Kang Chang ◽  
Yi Hui Yuan ◽  
Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach that uses the structural similarity index to assess quality in image fusion. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and are able to assess different image fusion methods.
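As a rough illustration of the idea (not the authors' exact index), a no-reference quality score can compare the fused image against each source with SSIM and aggregate the results; scikit-image's structural_similarity is assumed here.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fusion_quality(fused, src_a, src_b):
    # Compare the fused image against each source instead of a ground-truth reference.
    rng = float(fused.max() - fused.min())
    q_a = ssim(src_a, fused, data_range=rng)
    q_b = ssim(src_b, fused, data_range=rng)
    return 0.5 * (q_a + q_b)   # higher = more source structure preserved in the fusion
```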


2021 ◽  
Author(s):  
Anuyogam Venkataraman

With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining a higher-quality image with lower radiation exposure is a highly challenging task in image processing. Sparse representation based image fusion is one of the most sought-after fusion techniques among current researchers. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. Firstly, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using a Joint Sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by identifying the pixels of the low-dose and medium-dose vectors that have high similarity with the pixels of the initial fused vector, according to certain quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector, and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. This thesis demonstrates the effectiveness of the proposed algorithms through experiments on different images, and the qualitative and quantitative results are compared with some of the widely used image fusion methods.
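The common/innovation split at the heart of this pipeline can be sketched very roughly as follows; a rank-1 PCA approximation of the multi-dose stack stands in for the thesis's Joint Sparse PCA, and all names here are illustrative, not the actual implementation.

```python
import numpy as np

def common_innovation_fuse(stack):
    # stack: (n_images, H, W) multi-dose ensemble of the same region of interest
    n, h, w = stack.shape
    X = stack.reshape(n, -1)
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    rank1 = np.outer(u[:, 0] * s[0], vt[0])        # leading-component approximation
    common = rank1.mean(axis=0)                    # shared ("common") component
    innovations = X - rank1                        # per-image ("innovative") residuals
    idx = np.abs(innovations).argmax(axis=0)       # max-absolute rule across images
    fused = common + innovations[idx, np.arange(h * w)]
    return fused.reshape(h, w)
```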


Author(s):  
Chengfang Zhang

Multifocus image fusion can obtain an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed for SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied in the low-pass fusion, while the high-pass bands are fused using the popular “max-absolute” rule as the activity level measurement. The fused image is finally obtained by performing inverse MST on the fused coefficients. The experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition.
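A patch-based sparse-coding stand-in for the low-pass fusion step (plain SR rather than the convolutional SR used in the paper) might look like the sketch below; scikit-learn's dictionary learning utilities are assumed, and the high-pass bands would be fused separately with the max-absolute rule.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_fuse_lowpass(low_a, low_b, patch=8, n_atoms=64):
    pa = extract_patches_2d(low_a, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(low_b, (patch, patch)).reshape(-1, patch * patch)
    # Train a small over-complete dictionary on a subset of patches from both bands.
    sample = np.vstack([pa[::37], pb[::37]])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms).fit(sample)
    ca = sparse_encode(pa, dico.components_, algorithm="omp", n_nonzero_coefs=4)
    cb = sparse_encode(pb, dico.components_, algorithm="omp", n_nonzero_coefs=4)
    # Activity-level rule: per patch, keep the sparse code with the larger l1 norm.
    keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    code = np.where(keep_a[:, None], ca, cb)
    patches = (code @ dico.components_).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(patches, low_a.shape)
```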


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of details because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich tiny details can be separated into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that various features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed fusion method has many advantages, including (i) better visual quality in subjective evaluation of the fused image, and (ii) better objective assessment scores.
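The decomposition step can be approximated with a hand-rolled guided filter whose guidance image keeps only the strong edges of the source; the edge-thresholding below is our own simplification of the guidance construction described above, not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def guided_filter(guide, src, radius=8, eps=1e-3):
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_i, mean_p = box(guide), box(src)
    cov_ip = box(guide * src) - mean_i * mean_p
    var_i = box(guide * guide) - mean_i ** 2
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)

def base_detail_split(src):
    # Guidance image: strong edges of the source only, everything else suppressed.
    grad = np.hypot(sobel(src, axis=0), sobel(src, axis=1))
    guide = np.where(grad > grad.mean() + grad.std(), src, 0.0)
    base = guided_filter(guide, src)
    return base, src - base          # base part and detail part (rich tiny details)
```

The detail parts of the infrared and visible images would then feed the two CNNs and the DCT-based feature fusion, while the base parts are fused by weighting, as described above.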


2015 ◽  
Vol 73 (3) ◽  
Author(s):  
M. M. Elmajri ◽  
M. F. Rahmat ◽  
S. Ibrahim ◽  
N. F. Mohammed ◽  
Seriaznita Hj Mat Said

In this paper, a novel fuzzy fusion method is proposed to combine the images obtained from dual-modality (optical and electrodynamic) tomography sensors. The fuzzy rules are designed based on the features of each single-mode sensor. Furthermore, the outcome of the proposed method is compared with the two most common image fusion methods: principal component analysis (PCA) and the discrete wavelet transform (DWT). The fused-image results for half-flow and full-flow solid/gas laboratory phantoms are presented in this paper. MATLAB was used to visualize and analyze the combined images. The results show that the proposed method produces a superior improvement in the quality of the fused image for optical and electrodynamic dual-mode tomography applications in the case of solid/gas flow.
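For reference, the PCA baseline that the fuzzy method is compared against can be sketched as below: the two modality images are weighted by the components of the leading eigenvector of their joint covariance (a standard formulation, assumed rather than taken from the paper).

```python
import numpy as np

def pca_fuse(img_a, img_b):
    data = np.stack([img_a.ravel(), img_b.ravel()])
    vals, vecs = np.linalg.eigh(np.cov(data))      # 2x2 covariance of the two modalities
    w = np.abs(vecs[:, np.argmax(vals)])
    w /= w.sum()                                    # normalise weights to sum to 1
    return w[0] * img_a + w[1] * img_b
```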


2013 ◽  
Vol 448-453 ◽  
pp. 3621-3624 ◽  
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Non-multiscale image fusion methods operate directly on the original images, applying various fusion rules without decomposing or transforming the sources; they can therefore also be called simple multi-sensor image fusion methods. Their advantages are low computational complexity and a simple principle, and they are currently the most widely used image fusion methods. The basic principle is to select, pixel by pixel, the larger gray value, the smaller gray value, or a weighted average of the source pixels, and to combine these selections into a new image. Simple pixel-level fusion thus mainly includes averaging or weighted averaging of pixel gray values, selecting the larger gray value, and selecting the smaller gray value. This paper introduces the basic principles of the fusion process in detail and surveys current pixel-level fusion algorithms. Simulation results are presented to illustrate the fusion schemes. In practice, the fusion algorithm is selected according to which imaging characteristics should be retained.
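The rules listed above are simple enough to state directly in NumPy; the function names below are ours, chosen for illustration.

```python
import numpy as np

def fuse_average(a, b):
    return 0.5 * (a + b)                 # pixel gray values averaged

def fuse_weighted(a, b, w=0.7):
    return w * a + (1.0 - w) * b         # weighted average of pixel gray values

def fuse_select_max(a, b):
    return np.maximum(a, b)              # larger gray value selected per pixel

def fuse_select_min(a, b):
    return np.minimum(a, b)              # smaller gray value selected per pixel
```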


The principal purpose of image fusion is to merge information from different images, such as a CT (Computed Tomography) scan and an MRI (Magnetic Resonance Imaging) scan, to obtain a more informative image. In this paper, several transform-based fusion methods are implemented and compared: the discrete wavelet transform (DWT), the stationary wavelet transform (SWT), and two variants of the discrete cosine transform (DCT), namely DCT variance and DCT variance with consistency verification (DCT variance with CV). The fused results obtained from these techniques are evaluated with several evaluation metrics. The fused result from DCT variance with CV, followed by DCT variance, outperforms the DWT- and SWT-based image fusion methods: the DCT-based features preserve more information in the output fused image than the fused results obtained from the DWT- and SWT-based methods. DCT-based image fusion methods are also more accurate and performance-oriented in real-time applications, owing to the energy compaction property of the DCT for still images. In this work, a systematic procedure for the fusion of multi-focus images based on the DCT and its variants is presented, demonstrating that DCT-based fused results exceed those of other fusion methodologies.
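A sketch of the DCT-variance rule discussed above: each 8x8 block of the fused image is taken from whichever source has the larger variance of AC coefficients (i.e. appears more sharply focused). Consistency verification, which smooths the block decision map, is omitted here, and the exact normalisation may differ from the paper's.

```python
import numpy as np
from scipy.fft import dctn

def dct_variance_fuse(img_a, img_b, block=8):
    fused = img_a.copy()     # border pixels not covered by full blocks default to source A
    h, w = img_a.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk_a = img_a[y:y + block, x:x + block]
            blk_b = img_b[y:y + block, x:x + block]
            var_a = dctn(blk_a, norm="ortho").ravel()[1:].var()   # variance of AC coefficients
            var_b = dctn(blk_b, norm="ortho").ravel()[1:].var()
            fused[y:y + block, x:x + block] = blk_a if var_a >= var_b else blk_b
    return fused
```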


2019 ◽  
Vol 8 (4) ◽  
pp. 3765-3769

The objective of multifocus image fusion in visual sensor networks is to combine multi-focused images of the same scene into a single focused fused image with improved reliability and interpretability. However, existing discrete wavelet-based fusion algorithms introduce artifacts into the fused image because of their shift variance, whereas shift invariance is essential for reconstructing the fused image without loss. The Stationary Wavelet Transform (SWT) eliminates the shift variance of the discrete wavelet transform. Focus measures are also essential for selecting the focused objects in multi-focused images in order to obtain a fused image with every object in focus. This paper therefore combines the advantages of the SWT and focus measures for fusion. The proposed method produces a focused fused image without artifacts, and its performance compares well with other fusion methods.
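A minimal sketch of SWT-based multifocus fusion with a simple local-activity focus measure (window-averaged absolute detail coefficients); the specific focus measure in the paper may differ, and pywt.swt2 requires image sides divisible by 2**level.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def swt_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (a_approx, a_details), (b_approx, b_details) in zip(ca, cb):
        approx = 0.5 * (a_approx + b_approx)              # average approximation bands
        details = []
        for da, db in zip(a_details, b_details):
            act_a = uniform_filter(np.abs(da), size=5)    # local activity as focus measure
            act_b = uniform_filter(np.abs(db), size=5)
            details.append(np.where(act_a >= act_b, da, db))
        fused.append((approx, tuple(details)))
    return pywt.iswt2(fused, wavelet)
```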


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Yingzhong Tian ◽  
Jie Luo ◽  
Wenjun Zhang ◽  
Tinggang Jia ◽  
Aiguo Wang ◽  
...  

Multifocus image fusion integrates a sequence of partially focused images into a fused image that is focused everywhere; multiple methods have been proposed over the past decades. The Dual-Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). The Q-shift DTCWT was proposed afterwards to simplify the construction of the filters in the DTCWT, producing better fusion effects. This work presents a different image fusion strategy based on the Q-shift DTCWT. Each image is first decomposed into low- and high-frequency coefficients, which are fused with different rules; various fusion rules, such as Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified-Laplacian (SML), are then combined within the Q-shift DTCWT. Finally, the fused coefficients are reconstructed to produce one fully focused image. The strategy is compared visually and quantitatively with several existing fusion methods in extensive experiments and yields good results on both standard images and microscopic images. Hence, we conclude that the NVMS rule performs better than the others after the Q-shift DTCWT.
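The Sum-Modified-Laplacian activity measure mentioned above can be written compactly with NumPy; how it is wired into the Q-shift DTCWT sub-bands follows the authors' rules, which are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, window=5):
    pad = np.pad(img, step, mode="edge")
    ml_rows = np.abs(2 * img - pad[2 * step:, step:-step] - pad[:-2 * step, step:-step])
    ml_cols = np.abs(2 * img - pad[step:-step, 2 * step:] - pad[step:-step, :-2 * step])
    ml = ml_rows + ml_cols                                  # modified Laplacian per pixel
    return uniform_filter(ml, size=window) * window ** 2   # windowed sum (SML)

def sml_select(band_a, band_b):
    # Rule: take each coefficient from the source with the larger local SML.
    return np.where(sum_modified_laplacian(band_a) >= sum_modified_laplacian(band_b),
                    band_a, band_b)
```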

