A New Dictionary Construction Based Multimodal Medical Image Fusion Framework

Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 267 ◽  
Author(s):  
Fuqiang Zhou ◽  
Xiaosong Li ◽  
Mingxuan Zhou ◽  
Yuanze Chen ◽  
Haishu Tan

Training a good dictionary is the key to successful sparse-representation-based image fusion. In this paper, we propose a novel dictionary learning scheme for medical image fusion. First, we reinforce the weak information in the images by extracting their multi-layer details and adding them back to generate informative patches. Meanwhile, we introduce a simple and effective multi-scale sampling scheme that implements a multi-scale representation of patches while reducing the computational cost. Second, we design a neighborhood energy metric and a multi-scale spatial frequency metric to cluster image patches with similar brightness and detail information into their respective patch groups. Then, we train the energy sub-dictionary and the detail sub-dictionary separately by K-SVD. Finally, we combine the sub-dictionaries to construct a final dictionary that is complete, compact, and informative. As a main contribution, the proposed online dictionary learning not only yields an informative and compact dictionary, but also addresses defects of traditional dictionary learning algorithms, such as superfluous patches and low computational efficiency. The experimental results show that our algorithm is superior to some state-of-the-art dictionary-learning-based techniques in both subjective visual effects and objective evaluation criteria.
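The patch-grouping step in this abstract (an energy metric and a spatial frequency metric routing each patch into either the energy group or the detail group, each of which then trains its own sub-dictionary by K-SVD) can be sketched in NumPy. The two metrics below are standard, simplified stand-ins for the paper's neighborhood energy and multi-scale spatial frequency, and the per-group K-SVD training itself is assumed to come from an external implementation:

```python
import numpy as np

def patch_energy(patch):
    # Mean squared intensity: a simple stand-in for the paper's
    # neighborhood energy metric (bright, smooth patches score high).
    return float(np.mean(patch ** 2))

def spatial_frequency(patch):
    # Classic spatial frequency: RMS of row and column differences,
    # combined; high values indicate detail-rich patches.
    rf = np.sqrt(np.mean(np.diff(patch, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(patch, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def split_patches(patches, sf_threshold):
    # Route each patch to the "detail" group if its spatial frequency
    # exceeds the threshold, otherwise to the "energy" group; each group
    # would then train its own sub-dictionary (e.g. via K-SVD).
    energy_group, detail_group = [], []
    for p in patches:
        (detail_group if spatial_frequency(p) > sf_threshold
         else energy_group).append(p)
    return energy_group, detail_group

# Example: a flat patch goes to the energy group, a checkerboard
# patch (high spatial frequency) to the detail group.
flat = np.ones((8, 8))
checker = np.indices((8, 8)).sum(axis=0) % 2.0
energy_group, detail_group = split_patches([flat, checker], sf_threshold=0.5)
```

The threshold and metric definitions here are illustrative choices, not the paper's; the point is only the two-way routing that precedes sub-dictionary training.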

2017 ◽  
Vol 9 (4) ◽  
pp. 61 ◽  
Author(s):  
Guanqiu Qi ◽  
Jinchuan Wang ◽  
Qiong Zhang ◽  
Fancheng Zeng ◽  
Zhiqin Zhu

2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each approach captures only a particular feature (i.e. the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet transform as a multi-resolution approach and the ridgelet transform as a multi-scale approach. The present work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimise redundancies. Simulations have been performed on different sets of MR and CT-scan images taken from ‘The Whole Brain Atlas'. Performance evaluation has been carried out using image quality parameters such as Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and the preservation of morphological details in the finally fused image in the wavelet and ridgelet domains.
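The PCA fusion rule this abstract builds on can be sketched as follows: the principal eigenvector of the covariance of the two sources yields normalised fusion weights. For brevity the sketch applies the rule directly in the spatial domain; in the paper it is applied to wavelet or ridgelet coefficients instead:

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    # Stack the two sources as rows, take the covariance, and keep the
    # eigenvector of the largest eigenvalue; its normalised components
    # are the fusion weights (the classic PCA fusion rule).
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()
    return w[0], w[1]

def pca_fuse(img_a, img_b):
    # Weighted combination of the sources using the PCA weights.
    wa, wb = pca_fusion_weights(img_a, img_b)
    return wa * img_a + wb * img_b

# Toy MR/CT stand-ins; real inputs would be co-registered slices.
rng = np.random.default_rng(0)
mr = rng.random((16, 16))
ct = rng.random((16, 16))
fused = pca_fuse(mr, ct)
```

To reproduce the paper's pipelines, one would first transform both sources (wavelet or ridgelet), apply this weighting to the coefficients, and invert the transform.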


Author(s):  
Ramya H.R ◽  
B K Sujatha

<p>In recent years, rapid technological growth coupled with large volumes of medical data has driven the digitalization of that data. Researchers have therefore shown immense interest in multi-sensor image fusion technologies, which combine image information from various sensor modalities into a single image. Image fusion is a widespread technique in medical instrumentation and measurement for diagnosis. In this paper, a novel multimodal medical image fusion method based on type-2 fuzzy logic using the Sugeno model is proposed. Moreover, a Gaussian smoothing filter is introduced to extract the detailed information of an image using sharp feature points. The type-2 fuzzy algorithm is used to obtain highly salient feature points from both images, producing a visually well-classified fused image. The experimental results demonstrate that the proposed method achieves better performance than state-of-the-art methods in terms of visual quality and objective evaluation.</p>
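The Gaussian-smoothing step described above amounts to a base/detail decomposition: the smoothed image is the base layer, and subtracting it isolates sharp detail. The sketch below shows that decomposition with SciPy; the per-pixel max-magnitude rule used here for the detail layers is a deliberately simple stand-in for the paper's type-2 fuzzy (Sugeno) weighting:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_layer(img, sigma=2.0):
    # Base/detail split: Gaussian-smoothed image is the base layer;
    # subtracting it from the source isolates the sharp details.
    return img - gaussian_filter(img, sigma=sigma)

def fuse_base_detail(img_a, img_b, sigma=2.0):
    # Average the base layers, then pick per pixel the detail with the
    # larger magnitude. The paper's type-2 fuzzy rule would replace this
    # hard max with fuzzy membership weighting of the detail layers.
    base = 0.5 * (gaussian_filter(img_a, sigma=sigma)
                  + gaussian_filter(img_b, sigma=sigma))
    da, db = detail_layer(img_a, sigma), detail_layer(img_b, sigma)
    detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return base + detail

# Toy inputs standing in for co-registered multimodal slices.
rng = np.random.default_rng(1)
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))
fused = fuse_base_detail(img_a, img_b)
```

A useful sanity check on any base/detail fusion rule of this shape: fusing an image with itself must return the image unchanged, since base plus detail reconstructs the source exactly.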


2016 ◽  
Vol 214 ◽  
pp. 471-482 ◽  
Author(s):  
Zhiqin Zhu ◽  
Yi Chai ◽  
Hongpeng Yin ◽  
Yanxia Li ◽  
Zhaodong Liu
