A Comprehensive and Automated Fusion Method: The Enhanced Flexible Spatiotemporal DAta Fusion Model for Monitoring Dynamic Changes of Land Surface

2019 ◽  
Vol 9 (18) ◽  
pp. 3693 ◽  
Author(s):  
Shi ◽  
Wang ◽  
Zhang ◽  
Liang ◽  
Niu ◽  
...  

Spatiotemporal fusion methods provide an effective way to generate data with both high temporal and high spatial resolution for monitoring dynamic changes of the land surface. However, existing fusion methods face two main challenges: monitoring abrupt change events and accurately preserving the spatial details of objects. The Flexible Spatiotemporal DAta Fusion method (FSDAF) can monitor abrupt change events, but its predicted images lack intra-class variability and spatial detail. To overcome these limitations, this study proposes a comprehensive and automated fusion method, the Enhanced FSDAF (EFSDAF), and tests it on Landsat–MODIS image fusion. Compared with FSDAF, EFSDAF has the following strengths: (1) it considers the mixed-pixel phenomenon in Landsat images, so its predicted images have more intra-class variability and spatial detail; (2) it adjusts for the differences between Landsat and MODIS images; and (3) it improves fusion accuracy in abrupt-change areas by introducing a new residual index (RI). Vegetation phenology and flood events were selected to evaluate the performance of EFSDAF against the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Spatial and Temporal Reflectance Unmixing Model (STRUM), and FSDAF. The results show that EFSDAF can monitor both vegetation changes (gradual change) and floods (abrupt change), and its fused images are the best under both visual and quantitative evaluation. More importantly, EFSDAF accurately reproduces the spatial details of objects and is highly robust. Given these advantages, EFSDAF has great potential for monitoring long-term dynamic changes of the land surface.
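The residual-index idea can be illustrated with a toy sketch: flag coarse pixels where the observed coarse-scale temporal change disagrees with the change implied by the current fine-scale prediction, so the fusion can be corrected there. The formula and threshold below are illustrative assumptions, not the RI defined in the paper.

```python
import numpy as np

def residual_index(coarse_t1, coarse_t2, fine_pred_agg):
    """Illustrative residual index: per-pixel mismatch between the
    observed coarse-scale temporal change and the change implied by
    the aggregated fine-scale prediction (not the paper's exact RI)."""
    observed_change = coarse_t2 - coarse_t1
    predicted_change = fine_pred_agg - coarse_t1
    return np.abs(observed_change - predicted_change)

# Toy 4x4 coarse-resolution reflectance grids.
c1 = np.full((4, 4), 0.30)
c2 = c1.copy()
c2[1, 1] = 0.05                 # abrupt change (e.g. flooding) in one pixel
pred = np.full((4, 4), 0.30)    # a prediction that missed the flood
ri = residual_index(c1, c2, pred)
abrupt_mask = ri > 0.1          # pixels whose prediction needs correcting
```

Only the flooded pixel is flagged; in an abrupt-change area a large RI would trigger extra correction of the fused image.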

2020 ◽  
Vol 12 (23) ◽  
pp. 3979
Author(s):  
Shuwei Hou ◽  
Wenfang Sun ◽  
Baolong Guo ◽  
Cheng Li ◽  
Xiaobo Li ◽  
...  

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images with high spatial resolution and images with high temporal resolution, overcoming the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the fusion results; an improved fusion method needs to integrate these temporal changes into one framework. Adaptive-SFSDAF extends SFSDAF, an existing method that incorporates sub-pixel class fraction change information into Flexible Spatiotemporal DAta Fusion, by performing spectral unmixing adaptively, which greatly improves the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. The first is to detect outlier pixels of temporal change during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The second is to establish an adaptive unmixing strategy based on a guided mask map, effectively skipping a large number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves an outstanding balance between computational efficiency and fusion accuracy.
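The adaptive unmixing strategy can be sketched as a guided mask that sends only significantly changed pixels to the expensive unmixing step. The change metric and threshold below are simplifying assumptions, not the paper's guided mask map.

```python
import numpy as np

def guided_mask(coarse_t1, coarse_t2, thresh=0.05):
    """Toy guided mask: flag pixels whose temporal change exceeds a
    threshold; only these would be passed to the costly spectral
    unmixing step (a simplification of adaptive-SFSDAF's strategy)."""
    return np.abs(coarse_t2 - coarse_t1) > thresh

c1 = np.linspace(0.2, 0.4, 16).reshape(4, 4)
c2 = c1.copy()
c2[0, 0] += 0.2                 # one strongly changed pixel
mask = guided_mask(c1, c2)
n_unmixed = int(mask.sum())     # only this pixel is unmixed; 15 are skipped
```

Skipping the unchanged pixels is where the efficiency gain comes from: the per-pixel unmixing cost is paid only where the temporal change is significant.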


2020 ◽  
Author(s):  
Aojie Shen ◽  
Yanchen Bo ◽  
Duoduo Hu

Scientific research on land surface dynamics in heterogeneous landscapes often requires remote sensing data with high resolution in both space and time. However, no single sensor can provide such data at both high resolutions, and because of cloud contamination, images are often incomplete. Spatiotemporal data fusion is a feasible solution to this data problem, but existing fusion methods struggle to construct regular, cloud-free, dense time series of images with high spatial resolution. To address these limitations, in this paper we present a novel data fusion method that fuses multi-source satellite data to generate a high-resolution, regular, and cloud-free time series of satellite images.

We incorporate geostatistical theory into the fusion method and treat each pixel value as a random variable composed of a trend and a zero-mean, second-order stationary residual. To fuse satellite images, we use the frequently observed coarse-resolution images to capture the trend in time, and use Kriging interpolation to obtain the residual at fine resolution, which supplies the informative spatial detail. To avoid the smoothing effect caused by spatial interpolation, Kriging is performed only in the time dimension. For a given region, the temporal correlation between pixels is fixed once the data become stationary, so the weights for temporal Kriging can be derived by constructing the temporal covariance model from the residuals of the coarse-resolution images. The predicted fine-resolution image is then obtained by adding each pixel's interpolated residual back to its trend value. The advantage of the algorithm is that it accurately predicts fine-resolution images in heterogeneous areas by integrating all available information from the fine-spatial-resolution time series.

We tested the method by fusing MODIS and Landsat NDVI over Bahia State, which has a heterogeneous landscape, and generated an 8-day NDVI time series at 30 m resolution for the whole of 2016. Under cross-validation, the average R² and RMSE between the NDVI of the fused and observed images reached 95% and 0.0411, respectively. Experiments also demonstrated that the method captures correct texture patterns. These promising results show that this novel method provides an effective means to construct regular, cloud-free time series with high spatiotemporal resolution. In principle, the method can predict the fine-resolution data required on any given day, a capability that is helpful for near-real-time monitoring of land surface and ecological dynamics at the high-resolution scales most relevant to human activities.
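The time-dimension Kriging step can be sketched with simple-kriging weights under an exponential temporal covariance model. The covariance parameters here are illustrative assumptions; the paper fits the covariance model from coarse-resolution residuals instead.

```python
import numpy as np

def simple_kriging_weights(t_obs, t_pred, sill=1.0, corr_len=8.0):
    """Simple-kriging weights in the time dimension only, with an
    exponential covariance C(h) = sill * exp(-|h|/corr_len).
    Parameters are illustrative, not fitted from residuals."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    K = cov(t_obs[:, None] - t_obs[None, :])  # obs-obs covariance matrix
    k = cov(t_obs - t_pred)                   # obs-prediction covariances
    return np.linalg.solve(K, k)              # kriging system K w = k

t_obs = np.array([0.0, 8.0, 16.0])  # e.g. 8-day composite dates
w = simple_kriging_weights(t_obs, 8.0)
# At an observed date, the weights select that observation exactly,
# so Kriging is an exact interpolator in time.
```

The predicted residual at `t_pred` is then the weighted sum of the observed residuals, which is added back to the pixel's trend value to obtain the fine-resolution prediction.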


2020 ◽  
Vol 32 (5) ◽  
pp. 829-864 ◽  
Author(s):  
Jing Gao ◽  
Peng Li ◽  
Zhikui Chen ◽  
Jianing Zhang

With the wide deployment of heterogeneous networks, huge amounts of data with characteristics of high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose vast challenges to traditional data fusion methods. In this review, we present pioneering deep learning models for fusing multimodal big data. As exploration of multimodal big data grows, some challenges remain to be addressed. This review therefore surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion and to motivate new deep learning techniques for multimodal data fusion. Specifically, representative, widely used architectures are summarized as a foundation for understanding multimodal deep learning. The current pioneering deep learning models for multimodal data fusion are then summarized. Finally, challenges and future topics for multimodal data fusion with deep learning are described.
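As a minimal point of reference for the fusion schemes such surveys cover, the sketch below shows weighted late fusion of per-modality class scores, one of the simplest baselines that deep multimodal fusion models build on. The scores and weights are made-up toy values.

```python
import numpy as np

def late_fusion(modality_scores, weights):
    """Weighted late fusion: combine per-modality class-probability
    vectors with normalized weights (a classic fusion baseline)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * s for w, s in zip(weights, modality_scores))

image_scores = np.array([0.7, 0.2, 0.1])  # softmax output, modality 1 (toy)
text_scores  = np.array([0.5, 0.4, 0.1])  # softmax output, modality 2 (toy)
fused = late_fusion([image_scores, text_scores], weights=[0.6, 0.4])
pred_class = int(fused.argmax())
```

Deep fusion models replace these fixed weights and hand-picked scores with learned intermediate representations, but the combination step is conceptually the same.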


2021 ◽  
Author(s):  
An Fang ◽  
Pei Lou ◽  
Jiahui Hu ◽  
Wanqing Zhao ◽  
Ming Feng ◽  
...  

BACKGROUND Pituitary adenoma is one of the most common central nervous system tumors, and its diagnosis and treatment remain difficult: misdiagnosis and recurrence occur from time to time, and experienced neurosurgeons are in short supply. Knowledge graphs can help interns quickly understand the medical knowledge related to pituitary tumors. OBJECTIVE The aim of this paper is to integrate data on pituitary adenomas from reliable sources into a knowledge graph and to use the knowledge graph for knowledge discovery. METHODS A method for constructing disease knowledge graphs was introduced and used to build a knowledge graph for pituitary adenoma (KGPA). The schema of the KGPA was constructed manually. Information on pituitary adenoma was automatically extracted from electronic medical records (EMRs) and medical websites through a conditional random field (CRF) model and web wrappers we designed. An entity fusion method based on head- and tail-entity fusion models was proposed to fuse data from heterogeneous sources, and the disease entities were standardized to ICD-10. RESULTS Data were extracted from 300 EMRs of pituitary adenoma and four medical portals. Entity fusion was carried out using the proposed data fusion model; the accuracies of head- and tail-entity fusion both exceeded 97%. A sample of the triples was selected for evaluation, with an accuracy of 95.4%. CONCLUSIONS This paper introduces an approach to constructing the KGPA and proposes a data fusion method suitable for medical data. The evaluation results show that the data in the KGPA are of high quality, and the constructed KGPA can help physicians in their clinical practice.
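Entity fusion can be illustrated with a toy string-similarity merger of surface forms into canonical entities. The `difflib` similarity measure and the threshold are stand-in assumptions, not the paper's learned head- and tail-entity fusion models.

```python
from difflib import SequenceMatcher

def fuse_entities(entities, thresh=0.85):
    """Toy entity fusion: map each surface form to the first canonical
    form it closely matches, else register it as a new canonical form."""
    canonical, mapping = [], {}
    for e in entities:
        for c in canonical:
            if SequenceMatcher(None, e.lower(), c.lower()).ratio() >= thresh:
                mapping[e] = c   # near-duplicate: merge into canonical form
                break
        else:
            canonical.append(e)  # genuinely new entity
            mapping[e] = e
    return mapping

m = fuse_entities(["pituitary adenoma", "Pituitary adenomas", "prolactinoma"])
```

A real medical pipeline would add embedding- or model-based matching and standardize the merged entities to a terminology such as ICD-10, as the paper does.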


2019 ◽  
Vol 11 (9) ◽  
pp. 1084 ◽  
Author(s):  
Qiang Shi ◽  
Wujiao Dai ◽  
Rock Santerre ◽  
Zhiwei Li ◽  
Ning Liu

The spatio-temporal random effect (STRE) model, a type of spatio-temporal Kalman filter, can be used to fuse Global Navigation Satellite System (GNSS) and Interferometric Synthetic Aperture Radar (InSAR) data into deformation series with high spatio-temporal resolution, under the assumption that land deformation is spatially homogeneous across the monitoring area. However, when multiple deformation sources are present in the monitoring area, complex spatial heterogeneity appears. To improve the fusion accuracy, we propose an enhanced STRE fusion method (eSTRE) that takes spatial heterogeneity into consideration by integrating spatial-heterogeneity constraints into the STRE model through extra constrained spatial bases for the heterogeneous area. The effectiveness of the method is verified with simulated data and real land surface deformation data. The results show that eSTRE reduces the root mean square (RMS) of the InSAR interpolation results by 14% and 23% on average in the simulation and Los Angeles experiments, respectively, indicating that eSTRE substantially outperforms the previous STRE fusion model.
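The Kalman-filter machinery underlying STRE can be illustrated with a single scalar measurement update fusing two deformation estimates; the numbers are invented, and the real model operates on spatio-temporal state vectors expanded over spatial basis functions.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse the current
    deformation state (mean x, variance P) with an observation z
    of variance R. The gain weights the more precise source higher."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P  # updated mean and variance

# Prior from precise GNSS, updated with a noisier InSAR observation
# (all numbers illustrative, in mm and mm^2).
x, P = 10.0, 1.0
x, P = kalman_update(x, P, z=12.0, R=4.0)
```

The updated variance is smaller than either source's alone, which is the formal sense in which the filter "fuses" GNSS and InSAR information.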


2020 ◽  
Vol 58 (7) ◽  
pp. 5179-5194 ◽  
Author(s):  
Yang Chen ◽  
Ruyin Cao ◽  
Jin Chen ◽  
Xiaolin Zhu ◽  
Ji Zhou ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1789 ◽  
Author(s):  
Yanqin Ge ◽  
Yanrong Li ◽  
Jinyong Chen ◽  
Kang Sun ◽  
Dacheng Li ◽  
...  

Since the requirements of applications for time series of remotely sensed images with high spatial resolution are hard to satisfy under the current observation conditions of satellite sensors, reconstructing high-resolution images at specified dates is key. As an effective data reconstruction technique, spatiotemporal fusion can be used to generate time series of land surface parameters with clear geophysical significance. In this study, an improved fusion model based on the Sparse Representation-Based Spatiotemporal Reflectance Fusion Model (SPSTFM) is developed and assessed with reflectance data from Gaofen-2 Multi-Spectral (GF-2 MS) and Gaofen-1 Wide-Field-View (GF-1 WFV) sensors. By introducing a spatially enhanced training method into the dictionary training and sparse coding processes, the developed fusion framework is expected to improve the representation ability of the high-resolution and low-resolution overcomplete dictionaries. Assessment indices including Average Absolute Deviation (AAD), Root-Mean-Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Structural Similarity (SSIM), and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) are then used to compare the employed fusion methods. The experimental results show that the improved model predicts GF-2 MS reflectance more accurately than SPSTFM, with accuracy comparable to that of popular two-pair reflectance fusion models such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced STARFM (ESTARFM).
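The sparse-coding step at the core of dictionary-pair fusion can be sketched in its simplest 1-sparse form: pick the dictionary atom best correlated with the signal and its coefficient. This is a stripped-down stand-in for the OMP-style coding used by SPSTFM, with a made-up toy dictionary.

```python
import numpy as np

def sparse_code_1atom(D, y):
    """1-sparse coding: normalize the dictionary atoms (columns of D),
    then return the index and coefficient of the atom most correlated
    with the signal y. Real dictionary fusion iterates this (OMP)."""
    D = D / np.linalg.norm(D, axis=0)  # unit-norm atoms
    corr = D.T @ y                     # correlation with each atom
    k = int(np.abs(corr).argmax())
    return k, corr[k]

# Toy dictionary: two atoms over 3-dimensional patches.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([0.1, 2.0, 2.1])
atom, coef = sparse_code_1atom(D, y)
```

In SPSTFM-style fusion, the coefficients found on the low-resolution dictionary are reused with the coupled high-resolution dictionary to reconstruct the fine-scale patch.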

