Statistics analysis on SPOT 5 classification accuracy of different data fusion methods
2009
Author(s): Guifang Liu, Heli Lu


2018, Vol 2018, pp. 1-9
Author(s): Wan-Yu Deng, Dan Liu, Ying-Ying Dong

Due to missing values, incomplete datasets are ubiquitous in multimodal settings, yet complete data is a prerequisite for most existing multimodal data fusion methods. For incomplete multimodal high-dimensional data, we propose a feature selection and classification method. Our method focuses on extracting the most relevant features from the high-dimensional feature set and thereby improving classification accuracy. The experimental results show that our method produces considerably better performance on incomplete multimodal data, such as the ADNI and Office datasets, compared to the complete-data case.
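As a hedged illustration of the general pipeline this abstract describes (not the authors' exact method), the sketch below imputes missing values with column means and then ranks features by a simple class-separation score; the function names, scoring rule, and toy data are illustrative assumptions:

```python
# Sketch: impute missing values, then rank features by class separation.
# This is a generic baseline, not the method proposed in the paper.

def impute_mean(rows):
    """Replace None entries with the column mean of the observed values."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        vals = [r[j] for r in rows if r[j] is not None]
        means.append(sum(vals) / len(vals))
    return [[r[j] if r[j] is not None else means[j] for j in range(n_cols)]
            for r in rows]

def separation_scores(rows, labels):
    """Score each feature by |difference of class means| (two classes)."""
    n_cols = len(rows[0])
    scores = []
    for j in range(n_cols):
        a = [r[j] for r, y in zip(rows, labels) if y == 0]
        b = [r[j] for r, y in zip(rows, labels) if y == 1]
        scores.append(abs(sum(a) / len(a) - sum(b) / len(b)))
    return scores

# Toy incomplete "multimodal" feature matrix: None marks a missing value.
data = [[1.0, None, 3.0],
        [2.0, 0.5, None],
        [None, 0.7, 9.0],
        [4.0, 0.6, 10.0]]
labels = [0, 0, 1, 1]

full = impute_mean(data)
scores = separation_scores(full, labels)
best = max(range(len(scores)), key=scores.__getitem__)  # most relevant feature
```

In a real pipeline the selected features would then feed a classifier; the imputation and scoring steps here only demonstrate the shape of the problem.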


Author(s): R. Hebbar, M. V. R. Sesha Sai

Resourcesat-1, with its unique capability of simultaneously acquiring multispectral images at different spatial resolutions (AWiFS, LISS-III and LISS-IV MX/Mono), has immense potential for crop inventory. The present study was carried out to select a suitable LISS-IV MX band for data fusion and to evaluate it for delineating different crops in a multi-cropped area. Image fusion techniques, namely intensity-hue-saturation (IHS), principal component analysis (PCA), Brovey, high-pass filter (HPF) and wavelet methods, were used to merge LISS-III and LISS-IV Mono data. The merged products were evaluated visually and through the universal image quality index, ERGAS and classification accuracy. The study revealed that the red band of LISS-IV MX data was optimal for merging with LISS-III data in terms of maintaining both spectral and spatial information, thus closely matching the multispectral LISS-IV MX data. Among the five data fusion techniques, the wavelet method was superior in retaining image quality and yielded higher classification accuracy than the commonly used IHS, PCA and Brovey methods. The study indicated that LISS-IV data in mono mode, with its wider 70 km swath, could be exploited in place of the 24 km LISS-IV MX data by acquiring monochromatic data in the red band and selecting an appropriate fusion technique.
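One of the evaluation measures mentioned above, the universal image quality index (Wang and Bovik's Q index), has a simple closed form: Q = 4·cov(x,y)·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)). A minimal pure-Python sketch over flattened image bands (the sample values are illustrative):

```python
def uiqi(x, y):
    """Universal Image Quality Index between two flattened image bands.

    Returns 1.0 for identical non-constant signals; lower values indicate
    loss of correlation, luminance distortion, or contrast distortion.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)      # variance of x
    vy = sum((b - my) ** 2 for b in y) / (n - 1)      # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx * mx + my * my))

# A fused band identical to the reference scores exactly 1.0;
# a brightness-shifted copy scores slightly below 1.0.
reference = [1.0, 2.0, 3.0, 4.0]
shifted   = [2.0, 3.0, 4.0, 5.0]
q_same    = uiqi(reference, reference)
q_shifted = uiqi(reference, shifted)
```

In practice the index is computed over sliding windows and averaged, but the per-window arithmetic is exactly the function above.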


2020, Vol 12 (23), pp. 3979
Author(s): Shuwei Hou, Wenfang Sun, Baolong Guo, Cheng Li, Xiaobo Li, ...

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images of high spatial resolution with images of high temporal resolution, addressing the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends the existing fusion method that incorporates sub-pixel class fraction change information in Flexible Spatiotemporal DAta Fusion (SFSDAF) by performing spectral unmixing adaptively, which greatly improves the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. One is to detect outliers of temporal change in the image during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The other is to establish an adaptive unmixing strategy based on a guided mask map, thus effectively eliminating a large number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves outstanding performance in balancing computational efficiency and fusion accuracy.
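The core efficiency idea, restricting the expensive unmixing step to pixels flagged by a temporal-change mask, can be sketched as follows. The threshold, the pass-through rule, and the toy data are illustrative assumptions, not the published adaptive-SFSDAF algorithm:

```python
# Sketch of adaptive masking: only pixels whose coarse temporal change
# exceeds a threshold go through the "expensive" step; the rest are
# passed through unchanged. Not the published algorithm.

def change_mask(t1, t2, threshold):
    """Flag pixels whose absolute change between dates exceeds threshold."""
    return [abs(b - a) > threshold for a, b in zip(t1, t2)]

def adaptive_predict(t1, t2, threshold):
    """Apply the costly update only where the mask is set.

    Here the 'costly update' is a stand-in (taking the new value);
    in a real method it would be per-pixel spectral unmixing.
    Returns the prediction and the number of pixels actually processed.
    """
    mask = change_mask(t1, t2, threshold)
    out = [b if m else a for a, b, m in zip(t1, t2, mask)]
    return out, sum(mask)

# Toy 3-pixel coarse series: only the middle pixel changes significantly.
t1 = [0.10, 0.20, 0.90]
t2 = [0.12, 0.80, 0.88]
pred, n_unmixed = adaptive_predict(t1, t2, threshold=0.1)
```

The payoff is that `n_unmixed` is typically a small fraction of the image, which is where the reported efficiency gain comes from.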


2020, Vol 32 (5), pp. 829-864
Author(s): Jing Gao, Peng Li, Zhikui Chen, Jianing Zhang

With the wide deployment of heterogeneous networks, huge amounts of data characterized by high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose vast challenges to traditional data fusion methods. In this review, we present some pioneering deep learning models for fusing such multimodal big data. As exploration of multimodal big data increases, some challenges remain to be addressed. Thus, this review surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion methods and to motivate new multimodal data fusion techniques based on deep learning. Specifically, representative architectures that are widely used are summarized as fundamentals for understanding multimodal deep learning. Then the current pioneering multimodal data fusion deep learning models are summarized. Finally, some challenges and future topics of multimodal data fusion deep learning models are described.
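As a minimal example of one fusion scheme commonly covered in such surveys, decision-level ("late") fusion, the sketch below averages per-modality class probabilities; the equal default weights and the sample numbers are illustrative assumptions, not drawn from the review:

```python
def late_fuse(prob_lists, weights=None):
    """Weighted average of per-modality class probabilities (late fusion).

    prob_lists: one probability vector per modality, same class order.
    weights:    optional per-modality weights (defaults to uniform).
    Returns a renormalized fused probability vector.
    """
    n_mod = len(prob_lists)
    if weights is None:
        weights = [1.0 / n_mod] * n_mod
    n_cls = len(prob_lists[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_lists))
             for c in range(n_cls)]
    total = sum(fused)
    return [f / total for f in fused]

# Two modalities disagree; the fused decision splits the difference.
image_probs = [0.8, 0.2]   # e.g. an image-branch softmax output
text_probs  = [0.4, 0.6]   # e.g. a text-branch softmax output
fused = late_fuse([image_probs, text_probs])
```

Deep multimodal models often learn these weights (or fuse earlier, at the feature level), but this fixed-weight average is the standard baseline they are compared against.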

