Adaptive-SFSDAF for Spatiotemporal Image Fusion that Selectively Uses Class Abundance Change Information

2020 ◽  
Vol 12 (23) ◽  
pp. 3979
Author(s):  
Shuwei Hou ◽  
Wenfang Sun ◽  
Baolong Guo ◽  
Cheng Li ◽  
Xiaobo Li ◽  
...  

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images with high spatial resolution and images with high temporal resolution, resolving the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the performance of the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends the existing fusion method that incorporates sub-pixel class fraction change information in Flexible Spatiotemporal DAta Fusion (SFSDAF) by performing spectral unmixing selectively, which greatly improves the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. One is to detect outliers of temporal change in the image during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The other is to establish an adaptive unmixing strategy according to a guided mask map, thus effectively eliminating a great number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves an outstanding balance between computational efficiency and fusion accuracy.
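The guided-mask idea can be illustrated with a minimal sketch on synthetic data. The function name, the outlier threshold rule, and all values below are our own illustration, not the paper's exact formulation: pixels whose temporal change between the two coarse images is an outlier are flagged, and only those would be sent to the expensive unmixing step.

```python
import numpy as np

def guided_mask(coarse_t1, coarse_t2, k=2.0):
    """Flag pixels whose temporal change is an outlier (beyond k std. devs.)."""
    diff = np.abs(coarse_t2.astype(float) - coarse_t1.astype(float))
    return diff > (diff.mean() + k * diff.std())

rng = np.random.default_rng(0)
c1 = rng.random((60, 60))
c2 = c1 + rng.normal(0.0, 0.01, c1.shape)   # mostly stable scene
c2[10:15, 10:15] += 0.5                     # simulated abrupt land-cover change
mask = guided_mask(c1, c2)
# Only the masked pixels would be passed to the expensive unmixing step.
```

Restricting unmixing to the flagged pixels is what removes the "great number of insignificant unmixed pixels" the abstract refers to.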

2020 ◽  
Author(s):  
Aojie Shen ◽  
Yanchen Bo ◽  
Duoduo Hu

Scientific research on land surface dynamics in heterogeneous landscapes often requires remote sensing data with high resolution in both space and time. However, no single sensor can provide such data at both high resolutions, and images are often incomplete because of cloud contamination. Spatiotemporal data fusion is a feasible solution to these data problems, but existing fusion methods struggle to construct regular, cloud-free, dense time series of images with high spatial resolution. To address these limitations, in this paper we present a novel data fusion method for fusing multi-source satellite data to generate a high-resolution, regular, and cloud-free time series of satellite images.

We incorporate geostatistical theory into the fusion method, treating each pixel value as a random variable composed of a trend and a zero-mean second-order stationary residual. To fuse satellite images, we use frequently observed coarse-resolution images to capture the trend in time, and use Kriging interpolation to obtain the residual at the fine-resolution scale, which provides the informative spatial detail. To avoid the smoothing effect caused by spatial interpolation, Kriging is performed only in the time dimension. For a given region, the temporal correlation between pixels is fixed once the data reach stationarity, so the weights for temporal Kriging interpolation can be derived from a temporal covariance model constructed from the residuals of the coarse-resolution images. The predicted fine-resolution image is obtained by adding each pixel's interpolated residual back to its trend value. The advantage of the algorithm is that it accurately predicts fine-resolution images in heterogeneous areas by integrating all available information in the time series of images with fine spatial resolution.

We tested our method by fusing NDVI from MODIS and Landsat over Bahia State, which has a heterogeneous landscape, and generated an 8-day time series of NDVI at 30 m resolution for the whole year of 2016. Under cross-validation, the average R² and RMSE between NDVI from the fused images and from the observed images reach 95% and 0.0411, respectively. In addition, experiments demonstrated that our method also captures correct texture patterns. These promising results demonstrate that this novel method provides an effective means to construct regular and cloud-free time series with high spatiotemporal resolution. Theoretically, the method can predict the fine-resolution data required on any given day. Such a capability is helpful for monitoring near-real-time land surface and ecological dynamics at the high-resolution scales most relevant to human activities.
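The trend-plus-residual prediction for a single pixel can be sketched in one dimension (time). Everything here is illustrative: the exponential covariance model, the 16-day range, the observation dates, and the residual values are assumptions, not the paper's fitted model.

```python
import numpy as np

def krige_in_time(t_obs, r_obs, t_pred, cov):
    """Simple-kriging weights in the time dimension from a covariance model."""
    K = cov(np.abs(t_obs[:, None] - t_obs[None, :]))   # obs-obs covariances
    k = cov(np.abs(t_obs - t_pred))                    # obs-prediction covariances
    w = np.linalg.solve(K + 1e-9 * np.eye(len(t_obs)), k)
    return w @ r_obs                                   # interpolated residual

cov = lambda h: np.exp(-h / 16.0)            # assumed exponential model, 16-day range
t_obs = np.array([0.0, 8.0, 24.0, 32.0])     # dates with clear fine-resolution images
r_obs = np.array([0.02, -0.01, 0.03, 0.00])  # fine-scale residuals on those dates
trend_16 = 0.55                              # trend at day 16 from the coarse series
pred = trend_16 + krige_in_time(t_obs, r_obs, 16.0, cov)   # fused pixel value
```

In the full method this interpolation is repeated per fine pixel, with the covariance model estimated once per region from the coarse-resolution residual series.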


2019 ◽  
Vol 9 (18) ◽  
pp. 3693 ◽  
Author(s):  
Shi ◽  
Wang ◽  
Zhang ◽  
Liang ◽  
Niu ◽  
...  

Spatiotemporal fusion methods provide an effective way to generate data with both high temporal and high spatial resolution for monitoring dynamic changes of the land surface. However, existing fusion methods face two main challenges: monitoring abrupt change events and accurately preserving the spatial details of objects. The Flexible Spatiotemporal DAta Fusion (FSDAF) method can monitor abrupt change events, but its predicted images lack intra-class variability and spatial detail. To overcome these limitations, this study proposed a comprehensive and automated fusion method, Enhanced FSDAF (EFSDAF), and tested it for Landsat–MODIS image fusion. Compared with FSDAF, EFSDAF has the following strengths: (1) it considers the mixed-pixel phenomenon in Landsat images, so its predicted images have more intra-class variability and spatial detail; (2) it adjusts the differences between Landsat and MODIS images; and (3) it improves fusion accuracy in abrupt change areas by introducing a new residual index (RI). Vegetation phenology and flood events were selected to evaluate the performance of EFSDAF against the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Spatial and Temporal Reflectance Unmixing Model (STRUM), and FSDAF. Results show that EFSDAF can monitor changes in vegetation (gradual change) and floods (abrupt change), and its fused images are the best under both visual and quantitative evaluation. More importantly, EFSDAF accurately reproduces the spatial details of objects and is strongly robust. Given these advantages, EFSDAF has great potential for monitoring long-term dynamic changes of the land surface.
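One ingredient named above, adjusting the differences between Landsat and MODIS, can be sketched as a per-band linear regression between the MODIS image and the Landsat image aggregated to MODIS scale. The gain/offset model and all data below are synthetic assumptions; EFSDAF's actual adjustment may differ in detail.

```python
import numpy as np

def aggregate(fine, factor):
    """Average fine pixels into coarse blocks (simple spatial degradation)."""
    h, w = fine.shape
    return fine.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(4)
landsat = rng.random((64, 64))
modis = 0.9 * aggregate(landsat, 16) + 0.05   # simulated inter-sensor difference
# Fit modis ~= gain * aggregated_landsat + offset, then map the Landsat image
# into the MODIS radiometric frame before fusion:
x = aggregate(landsat, 16).ravel()
gain, offset = np.polyfit(x, modis.ravel(), 1)
landsat_adj = gain * landsat + offset
```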


2020 ◽  
Vol 12 (6) ◽  
pp. 1009
Author(s):  
Xiaoxiao Feng ◽  
Luxiao He ◽  
Qimin Cheng ◽  
Xiaoyi Long ◽  
Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution and low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) and low spectral resolution. HS–MS image fusion technology can combine both advantages, which is beneficial for accurate feature classification. Nevertheless, in real cases heterogeneous sensors always exhibit temporal differences between LSR-HS and HSR-MS images, so classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extracted the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtained the endmembers of the HSR-MS image based on the theory that HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of the HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experimental results on simulated and real datasets substantiated the effectiveness of our method both quantitatively and visually.
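The unmixing-based reconstruction can be illustrated for a single pixel. Endmembers, the spectral response matrix, and the abundances below are synthetic, and plain least squares stands in for a constrained solver, so this is a sketch of the idea rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
bands_hs, bands_ms, n_end = 30, 4, 3
E_hs = rng.random((bands_hs, n_end))   # endmembers extracted in HS spectral space
R = rng.random((bands_ms, bands_hs))   # assumed spectral response (HS -> MS degradation)
E_ms = R @ E_hs                        # the same endmembers as seen by the MS sensor

a_true = np.array([0.6, 0.3, 0.1])     # abundances of one fine (HSR) pixel
pixel_ms = E_ms @ a_true               # the observed HSR-MS pixel
# Estimate abundances from the MS pixel; plain least squares for brevity, where a
# real method would enforce non-negativity and sum-to-one constraints:
a_hat, *_ = np.linalg.lstsq(E_ms, pixel_ms, rcond=None)
pixel_hs = E_hs @ a_hat                # reconstructed HSR-HS pixel
```

The fused image is this reconstruction applied pixel-wise: HS-space endmembers from the LSR-HS image combined with abundances estimated at the HSR-MS grid.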


2020 ◽  
Vol 32 (5) ◽  
pp. 829-864 ◽  
Author(s):  
Jing Gao ◽  
Peng Li ◽  
Zhikui Chen ◽  
Jianing Zhang

With the wide deployment of heterogeneous networks, huge amounts of data with the characteristics of high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose vast challenges to traditional data fusion methods. In this review, we present some pioneering deep learning models for fusing multimodal big data. Because challenges remain as exploration of multimodal big data grows, this review surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion methods and to motivate new multimodal data fusion techniques based on deep learning. Specifically, representative architectures that are widely used are summarized as fundamentals for understanding multimodal deep learning. The current pioneering multimodal data fusion deep learning models are then summarized. Finally, some challenges and future topics for multimodal data fusion deep learning models are described.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, a fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich tiny details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that various features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed method has many advantages, including (i) better visual quality of the fused image in subjective evaluation, and (ii) better objective assessment scores for those images.
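DCT-domain fusion of two detail layers can be sketched with a simple max-magnitude coefficient rule. This rule and the hand-built orthonormal DCT-II basis are our own illustration, not the paper's multi-layer feature strategy.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    D = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def dct_fuse(a, b):
    """Fuse two square detail layers by keeping the larger-magnitude DCT coefficient."""
    D = dct_matrix(a.shape[0])
    Ca, Cb = D @ a @ D.T, D @ b @ D.T          # forward 2-D DCT of each layer
    C = np.where(np.abs(Ca) >= np.abs(Cb), Ca, Cb)
    return D.T @ C @ D                         # inverse transform (basis is orthonormal)

a = np.eye(8)
fused = dct_fuse(a, np.zeros_like(a))          # recovers `a` up to float error
```

Picking the larger-magnitude coefficient per frequency is a classic transform-domain heuristic: it keeps whichever source carries more energy at that frequency.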


2021 ◽  
Vol 290 ◽  
pp. 02009
Author(s):  
YiZhe Wang ◽  
Guo Liu ◽  
Bai Xue ◽  
Li Guo ◽  
XueLi Zhang

Gaofen-6 (GF-6) has the advantages of wide coverage, multiple resolutions, and multiple bands, and can provide richer information for remote sensing interpretation. Image fusion is a key process in high-resolution remote sensing applications. The Guangdong–Hong Kong–Macao Greater Bay Area was selected as the experiment area, and four image fusion methods, HPF (High-Pass Filter), NND (Nearest Neighbor Diffusion), GS (Gram–Schmidt), and Pansharp, were employed to process the panchromatic and multispectral images. To evaluate their performance, the four fusion results were first assessed visually; the mean, standard deviation, entropy, average gradient, correlation coefficient, and spectral distortion were then used for quantitative evaluation. The results demonstrated that the Pansharp and GS algorithms achieve the best comprehensive evaluation of the GF-6 fusion effect, taking into account both the color rendering of the image and the enhancement of spatial details, and can meet most fusion requirements. For land areas, the Pansharp method can be selected, while the GS method better reflects water body information and is the most suitable of the four algorithms for GF-6 water body information enhancement.
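Three of the quantitative metrics named above have standard definitions that can be sketched directly; the bin count and value range below are assumptions for images normalized to [0, 1].

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram, values in [0, 1]."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean local gradient magnitude; larger means more spatial detail."""
    gx, gy = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```

In such evaluations, entropy and average gradient are computed on the fused image alone, while the correlation coefficient compares the fused image against the original multispectral bands.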


2019 ◽  
Vol 9 (17) ◽  
pp. 3612
Author(s):  
Liao ◽  
Chen ◽  
Mo

As the focal length of an optical lens in a conventional camera is limited, it is usually arduous to obtain an image in which every object is in focus. This problem can be solved by multi-focus image fusion. In this paper, we propose an entirely new multi-focus image fusion method based on a decision map and sparse representation (DMSR). First, we obtained a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary was determined by the decision map, and we implemented the fusion of the transitional area based on sparse representation. The experimental results show that the proposed method is superior to five other fusion methods in terms of both visual effect and quantitative evaluation.
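The spatial frequency measure used to process uncertain areas is a standard focus metric built from row and column differences; this is the common textbook definition, sketched here as an assumption about the paper's exact variant.

```python
import numpy as np

def spatial_frequency(img):
    """Row/column-difference spatial frequency, a common focus measure."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row (horizontal) frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column (vertical) frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

flat = np.zeros((16, 16))                              # defocused stand-in: no texture
checker = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # sharp texture
```

A decision map can then pick, per block, the source image with the higher spatial frequency, falling back to sparse-representation fusion only near the focus boundary.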


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shuai Hao ◽  
Beiyi An ◽  
Hu Wen ◽  
Xu Ma ◽  
Keping Yu

Unmanned aerial vehicles, with inherent attributes such as flexibility, mobility, and autonomy, play an increasingly important role in the Internet of Things (IoT). Airborne infrared and visible image fusion, which constitutes an important data basis for the perception layer of the IoT, has been widely used in fields such as electric power inspection, military reconnaissance, emergency rescue, and traffic management. However, traditional infrared and visible image fusion methods suffer from weak detail resolution. To better preserve useful information from the source images and produce a more informative image for human observation or unmanned aerial vehicle vision tasks, a novel fusion method based on the discrete cosine transform (DCT) and anisotropic diffusion is proposed. First, the infrared and visible images are denoised using the DCT. Second, anisotropic diffusion is applied to the denoised infrared and visible images to obtain their detail and base layers. Third, the base layers are fused by weighted averaging, and the detail layers are fused using the Karhunen–Loève transform. Finally, the fused image is reconstructed through the linear superposition of the fused base and detail layers. Compared with six other typical fusion methods, the proposed approach shows better fusion performance in both objective and subjective evaluations.
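The base/detail split via anisotropic diffusion can be sketched with the classic Perona-Malik scheme. The iteration count, conduction constant, and step size below are illustrative choices, and the wrap-around boundary handling is a simplification.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.15):
    """Perona-Malik diffusion; the smoothed output serves as the base layer."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # neighbour differences in the four cardinal directions (wrap-around edges)
        diffs = [np.roll(u, s, axis) - u
                 for s, axis in ((-1, 0), (1, 0), (-1, 1), (1, 1))]
        # edge-stopping conduction: strong gradients diffuse less, so edges survive
        u = u + gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

rng = np.random.default_rng(2)
img = rng.normal(0.5, 0.1, (64, 64))     # stand-in for a denoised source image
base = anisotropic_diffusion(img)        # large-scale structure, fused by weighting
detail = img - base                      # layer that would be fused via the KL transform
```

Because the decomposition is exactly `img = base + detail`, the final reconstruction step in the abstract reduces to adding the two fused layers back together.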


2011 ◽  
Vol 467-469 ◽  
pp. 1092-1096 ◽  
Author(s):  
Guang Ming Zhang ◽  
Zhi Ming Cui

Graph cuts are an increasingly important tool for solving energy minimization problems in computer vision and other fields, while the beamlet transform, a time-frequency and multiresolution analysis tool, is often used in image processing, especially for image fusion. By analyzing the characteristics of DSA medical images, this paper proposes a novel DSA image fusion method that combines the beamlet transform with graph cuts theory. First, the image is decomposed by the beamlet transform to obtain the coefficients of the different subbands. Then, an energy function based on graph cuts theory is constructed to adjust the weights of these coefficients toward an optimal fusion objective. Finally, an inverse beamlet transform reconstructs a synthesized DSA image containing more complete and accurate detail information about the blood vessels. In comparisons, our method is more effective than other traditional fusion methods.

