Image Fusion of CT and MR with Sparse Representation in NSST Domain

2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Chenhui Qiu ◽  
Yuanyuan Wang ◽  
Huan Zhang ◽  
Shunren Xia

Multimodal image fusion techniques can integrate the information from different medical images into a single informative image that is better suited for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. A dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of both subjective quality and objective evaluation.
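The absolute-maximum rule used for the high-frequency components reduces to a per-coefficient magnitude comparison. A minimal NumPy sketch (the NSST decomposition itself is omitted; any multi-scale transform producing coefficient arrays would feed this the same way):

```python
import numpy as np

def fuse_abs_max(coeffs_a, coeffs_b):
    """Keep, at each position, the coefficient with the larger magnitude."""
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

The same rule is applied independently to each sub-band at each decomposition level.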

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion combines the focused parts of several source images into a single all-in-focus image. The key difficulty is accurately detecting the focused regions, especially when the source images exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. Firstly, the method applies two structurally complementary groups of large-scale and small-scale decomposition schemes, performing a two-scale, double-layer singular value decomposition of each image to obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model; according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fused high- and low-frequency components, two initial decision maps with complementary information are obtained; refining these initial decision maps yields the final fusion decision map used to complete the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately, whether or not the source images are pre-registered, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
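The low-frequency rule that integrates local energy with edge energy might be sketched as follows; using gradient magnitude as the edge-energy proxy and a fixed mixing weight are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_local_energy(low_a, low_b, size=3, edge_weight=0.5):
    """Pick, per pixel, the low-frequency coefficient whose neighborhood
    carries the higher combined local-plus-edge energy score."""
    def score(x):
        local = uniform_filter(x * x, size=size)             # local energy
        gy, gx = np.gradient(x)                              # edge-energy proxy
        edge = uniform_filter(gx * gx + gy * gy, size=size)
        return local + edge_weight * edge
    return np.where(score(low_a) >= score(low_b), low_a, low_b)
```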


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: Fusing CT and PET combines the complementary and redundant information of both images and can ease perception. Since existing fusion methods are not perfect and their fusion effect can still be improved, this paper proposes a novel adaptive PET/CT fusion method for lung cancer within the Piella framework. Methods: The algorithm first adopts the DTCWT to decompose the PET and CT images into their respective components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, five membership functions are combined to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; a decision factor is also determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and accentuates the edge and texture information of lesions in the fused image.
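A hedged sketch of such a high-frequency rule: local energy as the activity measure and an energy-based match measure deciding between selection and weighted blending, in the spirit of the Piella framework (the threshold and the blending form are illustrative assumptions, not the paper's decision factor):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high(a, b, size=3, threshold=0.75):
    ea = uniform_filter(a * a, size=size)       # local energy: activity measure
    eb = uniform_filter(b * b, size=size)
    eab = uniform_filter(a * b, size=size)
    match = 2.0 * eab / (ea + eb + 1e-12)       # energy-based match measure
    # low match: select the band with the higher activity;
    # high match: blend, leaning toward the more active band
    w_max = 1.0 - 0.5 * (match - threshold) / (1.0 - threshold)
    w = np.where(match > threshold,
                 np.where(ea >= eb, w_max, 1.0 - w_max),
                 np.where(ea >= eb, 1.0, 0.0))
    return w * a + (1.0 - w) * b
```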


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid particle swarm algorithm with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual-tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. The hybridization aims to combine the algorithms judiciously so that the result retains the strengths of each. The new algorithm decomposes the source images into high-frequency and low-frequency coefficients with the DTCWT, then fuses the high-frequency coefficients by the absolute-maximum method; the low-frequency coefficients are fused by a weighted-average method whose weights are estimated and refined by the optimizer to obtain optimal results. The authors demonstrate experimentally that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared with existing wavelet-transform-based image fusion algorithms.
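As a stand-in for the DEPSO optimizer, the weighted-average low-frequency fusion can be sketched with an exhaustive search over a scalar weight; using the entropy of the fused band as the fitness function is an assumption for illustration only:

```python
import numpy as np

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse_weighted(low_a, low_b, weights=np.linspace(0.0, 1.0, 21)):
    # Exhaustive search over a scalar weight stands in for DEPSO;
    # the entropy of the fused band serves as the (assumed) fitness.
    best_w, best_s = 0.5, -np.inf
    for w in weights:
        s = entropy(w * low_a + (1.0 - w) * low_b)
        if s > best_s:
            best_w, best_s = w, s
    return best_w * low_a + (1.0 - best_w) * low_b, best_w
```

A population-based optimizer such as DEPSO would instead search a per-region or per-pixel weight map, but the fitness-driven selection is the same idea.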


2020 ◽  
Vol 39 (3) ◽  
pp. 4617-4629
Author(s):  
Chengrui Gao ◽  
Feiqiang Liu ◽  
Hua Yan

Infrared and visible image fusion merges the visual details of visible images with the thermal feature information of infrared images; it has been extensively adopted in numerous image processing fields. In this study, a dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR) based image fusion method is proposed. In the proposed method, the infrared and visible images are first decomposed by the DTCWT to characterize their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands are enhanced by guided filtering (GF), while the low-frequency band is merged through convolutional sparse representation and a choose-max strategy. Lastly, the fused images are reconstructed by the inverse DTCWT. In the experiments, objective and subjective comparisons with other typical methods demonstrate the advantage of the proposed method: its results are more consistent with the human visual system and contain more texture detail information.
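The guided-filtering (GF) step can be sketched with the classic box-filter guided filter of He et al.; the radius and eps values here are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=2, eps=1e-4):
    """Box-filter guided filter: q = mean(a)*guide + mean(b), with
    a, b fitted per window so that q locally tracks the guide."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)                  # edge-preserving gain
    b = mean_p - a * mean_i                     # local offset
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```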


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jingming Xia ◽  
Yiming Chen ◽  
Aiyue Chen ◽  
Yicai Chen

Clinical computer-aided diagnosis places high demands on the visual quality of medical images. However, the low-frequency subband coefficients obtained by NSCT decomposition are not sparse, which is not conducive to preserving the details of the source images. To solve this problem, a medical image fusion algorithm combining sparse representation and a pulse-coupled neural network is proposed. First, the source image is decomposed into low- and high-frequency subband coefficients by the NSCT transform. Secondly, the K-singular value decomposition (K-SVD) method is used to train an overcomplete dictionary D on the low-frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm is used to sparsely code them, completing the fusion of the low-frequency sparse coefficients. Then, a pulse-coupled neural network (PCNN) is excited by the spatial frequency of the high-frequency subband coefficients, and the fused high-frequency coefficients are selected according to the number of firing times. Finally, the fused medical image is reconstructed by the inverse NSCT. The experimental results and analysis show that, on the edge information transfer factor QAB/F index, the algorithm is about 34% and 10% higher than the comparison algorithms for gray and color image fusion, respectively, and its fusion results outperform the existing algorithms.
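The spatial frequency that excites the PCNN is a simple gradient-energy statistic; a minimal sketch:

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): the RMS of horizontal and vertical
    first differences, used here as the stimulus fed to the PCNN."""
    x = np.asarray(block, dtype=float)
    rf = np.sqrt(np.mean(np.diff(x, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(x, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```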


2013 ◽  
Vol 834-836 ◽  
pp. 1011-1015 ◽  
Author(s):  
Nian Yi Wang ◽  
Wei Lan Wang ◽  
Xiao Ran Guo

A new image fusion algorithm based on nonsubsampled contourlet transform and spiking cortical model is proposed in this paper. Considering the human visual system characteristics, two different fusion rules are used to fuse the low and high frequency sub-bands of nonsubsampled contourlet transform respectively. A new maximum selection rule is defined to fuse low frequency coefficients. Spatial frequency is used for the fusion rule of high frequency coefficients. Experimental results demonstrate the effectiveness of the proposed fusion method.


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

At present, image fusion commonly suffers from fuzzy edges and sparse texture. To address this, the study proposes an image fusion method combining the lifting wavelet with a median filter, applying different fusion rules to each band. For the low-frequency coefficients, the scale coefficients are convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the detail sub-images are first denoised by the median filter and then fused using a neighborhood spatial-frequency rule with consistency verification. Compared with the weighted-average and regional-energy methods, the experimental results show that the proposed method retains the most edge and texture information. The method alleviates fuzzy edges and sparse texture to a certain degree and has strong practical value in image fusion.
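The consistency-verification step can be sketched as a majority vote over each neighborhood of the binary decision map, implemented here with a median filter (an assumption about the exact verification rule):

```python
import numpy as np
from scipy.ndimage import median_filter

def consistency_verify(decision, size=3):
    """Majority vote over each neighborhood of the binary decision map:
    an isolated decision surrounded by opposite choices gets flipped."""
    return median_filter(decision.astype(np.uint8), size=size)

def fuse_details(high_a, high_b, decision, size=3):
    d = consistency_verify(decision, size=size)
    return np.where(d == 1, high_a, high_b)
```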


2013 ◽  
Vol 457-458 ◽  
pp. 1097-1101
Author(s):  
Jun Yong Ma ◽  
Sheng Wei Zhang ◽  
Cai Bing Yue

An image fusion method based on fuzzy regional characteristics is proposed in this paper. After multi-resolution decomposition of an image, k-means clustering is first applied to the low-frequency components of each layer to partition the low-frequency image into important, sub-important, and background regions. Then, all regions of the image are fuzzified and fusion strategies are determined according to their fuzzy membership degrees. Finally, the fusion result is obtained by reconstruction from the multi-resolution image representation. Experimental results and fusion quality assessments show the effectiveness of the proposed method.
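The region partition can be sketched with a plain Lloyd-style k-means on pixel intensities; quantile initialization is an implementation choice here, not the paper's:

```python
import numpy as np

def kmeans_regions(low, k=3, iters=20):
    """Lloyd iteration on pixel intensities: label each low-frequency
    pixel with its nearest cluster center (important / sub-important /
    background when k = 3)."""
    x = low.ravel().astype(float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))  # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(low.shape), np.sort(centers)
```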


Author(s):  
LIU BIN ◽  
JIAXIONG PENG

In this paper, an image fusion method based on a new class of wavelets is presented: compactly supported, linear-phase, orthogonal non-separable wavelets with dilation matrix [Formula: see text]. We first construct a non-separable wavelet filter bank. Using these filters, the images involved are decomposed into wavelet pyramids, and the following fusion algorithm is applied: for the low-frequency part, the average value is taken as the new pixel value; for the three high-frequency parts of each level, the standard deviation of each image patch over a 3×3 window in the high-frequency sub-images is computed as the activity measurement. If the standard deviation over a window is larger than that of the corresponding 3×3 window in the other high-frequency sub-image, the center pixel value of the window with the larger weighted area energy is selected; otherwise, a weighted value of the two pixels is computed. A fused image is then reconstructed. The performance of the method is evaluated using entropy, cross-entropy, fusion symmetry, root mean square error, and peak signal-to-noise ratio. The experimental results show that the non-separable wavelet fusion method proposed in this paper is very close in performance to the Haar separable wavelet fusion method.
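The 3×3 standard-deviation activity measurement can be computed for every window at once from local first and second moments; a sketch with selection by the larger activity (the weighted-area-energy refinement is omitted):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=3):
    """Standard deviation over every size x size window, computed
    from local first and second moments."""
    x = np.asarray(img, dtype=float)
    mean = uniform_filter(x, size)
    mean_sq = uniform_filter(x * x, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_high_by_std(a, b, size=3):
    # select, per pixel, the sub-image whose local window varies more
    return np.where(local_std(a, size) >= local_std(b, size), a, b)
```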


Entropy ◽  
2019 ◽  
Vol 21 (12) ◽  
pp. 1135 ◽  
Author(s):  
Xinghua Huang ◽  
Guanqiu Qi ◽  
Hongyan Wei ◽  
Yi Chai ◽  
Jaesung Sim

In multi-modality image fusion, source image decomposition, such as a multi-scale transform (MST), is a necessary and widely used step. However, when an MST is used directly to decompose source images into high- and low-frequency components, the decomposed components are not precise enough for the subsequent infrared-visible fusion operations. This paper proposes a non-subsampled contourlet transform (NSCT) based decomposition method for image fusion, by which source images are decomposed into corresponding high- and low-frequency sub-bands. Unlike MST, the obtained high-frequency sub-bands have different decomposition layers, and each layer contains different information. To obtain a more informative fused high-frequency component, maximum-absolute-value and pulse-coupled neural network (PCNN) fusion rules are applied to the different sub-bands of the high-frequency components. Activity measures, such as phase congruency (PC), the local measure of sharpness change (LSCM), and local signal strength (LSS), are designed to enhance the detail features of the fused low-frequency components. The fused high- and low-frequency components are then integrated to form the fused image. The experimental results show that the fused images obtained by the proposed method achieve good performance in clarity, contrast, and image information entropy.
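A simplified PCNN of the kind used for the high-frequency rule can be sketched as follows; the parameter values and the linking and threshold forms are illustrative, not the paper's tuned model:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pcnn_fire_counts(stimulus, iters=30, beta=0.2, alpha=0.1, v_theta=20.0):
    """Simplified PCNN: a neuron fires when its internal activity
    exceeds a decaying threshold; firing raises the threshold sharply.
    Accumulated firing counts act as the fusion activity measure."""
    s = stimulus / max(stimulus.max(), 1e-12)         # normalized feeding input
    y = np.zeros_like(s)
    theta = np.ones_like(s)
    fire = np.zeros_like(s)
    for _ in range(iters):
        link = uniform_filter(y, size=3)              # linking from neighbors
        u = s * (1.0 + beta * link)                   # internal activity
        y = (u > theta).astype(float)                 # pulse output
        theta = np.exp(-alpha) * theta + v_theta * y  # threshold decay / reset
        fire += y
    return fire

def fuse_by_pcnn(a, b):
    # keep the coefficient whose neuron fired at least as often
    return np.where(pcnn_fire_counts(np.abs(a)) >= pcnn_fire_counts(np.abs(b)), a, b)
```

Stronger stimuli cross the decaying threshold earlier and so accumulate more firings within a fixed number of iterations, which is what makes the count usable as an activity measure.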

