Optimal multi-focus contourlet-based image fusion algorithm selection

2016 ◽  
Author(s):  
Adam Lutz ◽  
Michael Giansiracusa ◽  
Neal Messer ◽  
Soundararajan Ezekiel ◽  
Erik Blasch ◽  
...  
2021 ◽  
Vol 50 (4) ◽  
pp. 228-240
Author(s):  
Linna Ji ◽  
Xiaoming Guo ◽  
Fengbao Yang ◽  
Yaling Zhang

Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1752
Author(s):  
Linna Ji ◽  
Fengbao Yang ◽  
Xiaoming Guo

Existing image fusion models cannot reflect how the diverse attributes (e.g., type or amplitude) of difference features affect the choice of algorithm, which leads to poor or even invalid fusion results. To address this, this paper constructs and combines fusion validity distributions of difference features based on intuition-possible sets, in order to select the algorithms with better fusion effect for dual-mode infrared images. First, the distances between the amplitudes of the difference features in the fused images and in the source images are calculated. According to the fusion result of each algorithm, these distances are divided into three levels, which are regarded as intuition-possible sets of the fusion validity of the difference features, and a novel construction method for the fusion validity distribution based on intuition-possible sets is proposed. Second, in view of the multiple amplitude intervals of each difference feature, a distribution combination method based on intuition-possible set ordering is proposed: the score results of the difference features are aggregated by a fuzzy operator, and their joint projections are obtained. Finally, the experimental results indicate that the proposed method can select the algorithm with the relatively better fusion effect for the difference features across varied feature amplitudes.


2018 ◽  
Vol 30 (9) ◽  
pp. 1637
Author(s):  
Zhong Xiang ◽  
Jianfeng Zhang ◽  
Miao Qian ◽  
Zhenyu Wu ◽  
Xudong Hu

2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: Fusing CT and PET combines the complementary and redundant information of both images and can ease perception. Since existing fusion methods are not perfect enough and the fusion effect remains to be improved, this paper proposes a novel adaptive PET/CT fusion method for lung cancer within the Piella framework. Methods: The algorithm first adopts the dual-tree complex wavelet transform (DTCWT) to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, five membership functions are combined to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; in addition, a decision factor is determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and highlights the edge and texture information of lesions in the fused image.
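The high- and low-frequency fusion rules outlined in the Methods section can be sketched as below. This is an assumption-laden illustration, not the authors' implementation: the DTCWT decomposition itself is omitted (the functions operate on already-decomposed subbands), the fixed `alpha` weight stands in for the five membership functions, and the normalized-energy match measure and 0.75 threshold are generic Piella-style choices rather than values from the paper.

```python
import numpy as np

def local_energy(band, k=3):
    """Activity measure: local energy of a subband via a k x k box sum."""
    pad = k // 2
    p = np.pad(band.astype(float) ** 2, pad, mode="reflect")
    out = np.zeros(band.shape, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].sum()
    return out

def fuse_high(band_pet, band_ct, match_thresh=0.75):
    """High-frequency rule: local energy as the activity measure and a
    normalized energy similarity as the match measure. Choose-max where
    the bands are dissimilar, energy-weighted blend where they are
    similar (a common Piella-style decision rule; threshold is
    illustrative)."""
    e1, e2 = local_energy(band_pet), local_energy(band_ct)
    match = 2 * np.sqrt(e1 * e2) / (e1 + e2 + 1e-12)   # similarity in [0, 1]
    w = e1 / (e1 + e2 + 1e-12)                          # decision factor
    choose = np.where(e1 >= e2, band_pet, band_ct)      # select when dissimilar
    blend = w * band_pet + (1 - w) * band_ct            # blend when similar
    return np.where(match < match_thresh, choose, blend)

def fuse_low(low_pet, low_ct, alpha=0.5):
    """Low-frequency rule: weighted average. The fixed alpha stands in
    for the membership-function-derived weight described in the paper."""
    return alpha * low_pet + (1 - alpha) * low_ct
```

In the real pipeline each DTCWT level and orientation would pass through `fuse_high`, the coarsest approximation through `fuse_low`, followed by the inverse transform to produce the fused image.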

