Coastal Zone Classification Based on Multisource Remote Sensing Imagery Fusion

2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Jiahui Li ◽  
Youxin Zhao ◽  
Jiguang Dai ◽  
Hong Zhu

The main objective of this paper was to assess the capability of multisource remote sensing imagery fusion for coastal zone classification. Five scenes of Gaofen-1 (GF-1) optical imagery and four scenes of synthetic aperture radar (SAR) imagery (C-band Sentinel-1 and L-band ALOS-2) were collected and matched. GF-1 is the first satellite of the China high-resolution Earth observation system; it acquires multispectral data with decametric spatial resolution, high temporal resolution, and wide coverage. A comparison of C- and L-band SAR for coastal coverage verified that the C band is superior to the L band and that the parameter subset σ⁰VV, σ⁰VH, and Dcross can be used effectively for coastal classification. A new fusion method based on the wavelet transform (WT) was also proposed and applied to imagery fusion. The statistical values of the mean, entropy, gradient, and correlation coefficient for the proposed method were 67.526, 7.321, 6.440, and 0.955, respectively, so its result is superior to the GF-1 imagery alone and to the traditional IHS fusion result. Finally, the classification output was produced and its accuracy assessed: the overall accuracy and kappa coefficient were 85.9774% and 0.8236, respectively, indicating that the proposed fusion method performs well for coastal coverage mapping.
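The quality statistics reported above (mean, entropy, average gradient, correlation coefficient) and the accuracy figures (overall accuracy, kappa coefficient) follow standard definitions. A minimal NumPy sketch of those generic definitions, not the authors' code, might look like this:

```python
import numpy as np

def fusion_quality(fused, reference):
    """Standard statistics for judging a fused image against a reference."""
    f = fused.astype(float)
    r = reference.astype(float)
    mean = f.mean()
    # Shannon entropy of the grey-level histogram (8-bit range assumed)
    hist, _ = np.histogram(f, bins=256, range=(0, 256))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Average gradient: mean magnitude of local intensity change
    gy, gx = np.gradient(f)
    avg_grad = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))
    # Correlation coefficient between fused and reference images
    cc = np.corrcoef(f.ravel(), r.ravel())[0, 1]
    return mean, entropy, avg_grad, cc

def kappa(confusion):
    """Cohen's kappa and overall accuracy from a confusion matrix."""
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe), po
```

For example, the confusion matrix `[[40, 10], [5, 45]]` yields an overall accuracy of 0.85 and a kappa of 0.7, since chance agreement is 0.5.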

2020 ◽  
Vol 12 (3) ◽  
pp. 456
Author(s):  
Weiying Xie ◽  
Jian Yang ◽  
Yunsong Li ◽  
Jie Lei ◽  
Jiaping Zhong ◽  
...  

Cloud detection is a significant preprocessing step for increasing the exploitability of remote sensing imagery; it faces varying levels of difficulty due to the complexity of underlying surfaces, insufficient training data, and redundant information in high-dimensional data. To address these problems, we propose an unsupervised network for cloud detection (UNCD) on multispectral (MS) and hyperspectral (HS) remote sensing images. UNCD enforces discriminative feature learning to obtain the residual error between the original input and the background in a deep latent space, based on the observation that clouds are sparse and can be modeled as sparse outliers in remote sensing imagery. First, a compact representation of the original imagery is obtained by an encoder constrained by latent adversarial learning. Meanwhile, the majority class with sufficient samples (i.e., background pixels) is reconstructed more accurately by the decoder than the clouds with limited samples. An image discriminator is used to prevent the generalization of out-of-class features caused by latent adversarial learning. To further highlight the background information in the deep latent space, a multivariate Gaussian distribution is introduced. In particular, the residual error, with clouds highlighted and background samples suppressed, is applied to cloud detection in the deep latent space. To evaluate the performance of the proposed UNCD method, experiments were conducted on both MS and HS datasets captured by various sensors over various scenes, and the results demonstrate its state-of-the-art performance. The sensors that captured the datasets include Landsat 8, GaoFen-1 (GF-1), and GaoFen-5 (GF-5).
Landsat 8 was launched from Vandenberg Air Force Base in California on 11 February 2013, in a mission initially known as the Landsat Data Continuity Mission (LDCM). GF-1 was launched by China, and the GF-5 satellite acquires hyperspectral observations under the Chinese Key Projects of the High-Resolution Earth Observation System. The overall accuracy (OA) values for Images I and II from the Landsat 8 dataset were 0.9526 and 0.9536, respectively, and the OA values for Images III and IV from the GF-1 wide field of view (WFV) dataset were 0.9957 and 0.9934, respectively. Hence, the proposed method outperformed the other considered methods.
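The core idea, reconstructing the abundant background well so that sparse cloud pixels stand out by their residual error, can be illustrated with a shallow linear stand-in for the deep encoder-decoder. The sketch below uses PCA reconstruction instead of the paper's adversarially trained network; the function name, component count, and threshold quantile are illustrative assumptions:

```python
import numpy as np

def residual_cloud_mask(cube, n_components=3, quantile=0.98):
    """Flag sparse outliers (cloud-like pixels) by reconstruction residual.

    A linear PCA stand-in for the paper's deep latent-space model: the
    dominant components reconstruct the abundant background accurately,
    so pixels with large residual error are flagged as outliers.
    `cube` is (rows, cols, bands); returns a boolean (rows, cols) mask.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal axes of the (background-dominated) data via SVD
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]
    recon = Xc @ P.T @ P + mu          # project onto subspace, reconstruct
    residual = np.linalg.norm(X - recon, axis=1)
    # Pixels whose residual exceeds the chosen quantile are flagged
    return (residual > np.quantile(residual, quantile)).reshape(h, w)
```

On synthetic data whose background spectra lie near a low-dimensional subspace, injected anomalous pixels receive the largest residuals and are the ones flagged.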


2014 ◽  
Vol 644-650 ◽  
pp. 4360-4363
Author(s):  
Li Na Dong ◽  
Jing Tong ◽  
Chen Yang Wang

Airborne and spaceborne remote sensing systems are both important parts of the Earth observation system and complement each other well. Airborne remote sensing offers high resolution, good efficiency, and flexibility, which makes it an effective means of rapidly acquiring high-resolution remote sensing data. In particular, technologies for low-altitude remote sensing investigation by unmanned aerial vehicles have developed rapidly and made great progress, so they will undoubtedly play an important role in remote sensing geological investigation.


2020 ◽  
Vol 12 (19) ◽  
pp. 3251
Author(s):  
Michael Kalua ◽  
Anna M. Rallings ◽  
Lorenzo Booth ◽  
Josué Medellín-Azuara ◽  
Stefano Carpin ◽  
...  

Small Unmanned Aerial Systems (sUAS) show promise for collecting high-resolution spatiotemporal data over small extents. Such remote sensing platforms also show promise for quantifying uncertainty in more ubiquitous Earth Observation System (EOS) data, such as evapotranspiration and consumptive water use in agricultural systems. This study compares measurements of evapotranspiration (ET) from a commercial vineyard in California using data collected from sUAS and EOS sources for 10 events over a growing season, using multiple ET estimation methods. Results indicate that sUAS ET estimates that include non-canopy pixels are generally lower on average than EOS methods by >0.5 mm day−1, whereas sUAS ET estimates that mask out non-canopy pixels are generally higher than EOS methods by <0.5 mm day−1. Masked sUAS ET estimates are less variable than unmasked sUAS and EOS ET estimates. This study indicates that limited deployment of sUAS can provide important estimates of uncertainty in EOS ET estimates over larger areas and can also improve irrigation management at a local scale.
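The direction of the reported bias follows from what the mask removes: interrow soil transpires far less than vine canopy, so averaging over all pixels drags the estimate down, while restricting to canopy pixels raises it. A toy sketch with made-up ET values and a hypothetical NDVI-threshold canopy mask (not the study's data or method):

```python
import numpy as np

def mean_et(et_map, canopy_mask=None):
    """Scene-average ET (mm/day); optionally restricted to canopy pixels."""
    if canopy_mask is None:
        return float(np.nanmean(et_map))
    return float(np.nanmean(et_map[canopy_mask]))

# Hypothetical scene: vine canopy transpires ~4 mm/day, bare soil ~1 mm/day
et = np.full((10, 10), 4.0)
et[::2, :] = 1.0                       # alternating interrow soil pixels
ndvi = np.where(et > 2.0, 0.8, 0.2)    # stand-in NDVI layer
canopy = ndvi > 0.5                    # simple threshold mask

unmasked = mean_et(et)         # soil pixels included -> 2.5 mm/day
masked = mean_et(et, canopy)   # canopy only -> 4.0 mm/day
```

The unmasked average sits well below the canopy-only average, mirroring the sign of the differences reported above.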


2015 ◽  
Vol 48 (3) ◽  
pp. 64-69 ◽  
Author(s):  
Qingqing Huang ◽  
Zhongming Zhao ◽  
Qiong Gao ◽  
Yu Meng ◽  
Jianglin Ma ◽  
...  

2005 ◽  
Vol 39 (3) ◽  
pp. 36-48 ◽  
Author(s):  
Gary M. Mineart ◽  
Richard L. Crout

With the growing international emphasis on the Global Earth Observation System of Systems (GEOSS), technological advancements enabling global observations of the ocean's varied constituents are receiving increasing attention. This paper summarizes the major ocean constituents of interest to the GEOSS member states, highlights their importance, and provides an updated review of existing and emerging observation technologies with potential to address the needs of GEOSS. Given the importance of global coverage within this international framework, we emphasize space-based remote sensing technologies.


2020 ◽  
Vol 12 (23) ◽  
pp. 3888
Author(s):  
Mingyuan Peng ◽  
Lifu Zhang ◽  
Xuejian Sun ◽  
Yi Cen ◽  
Xiaoyang Zhao

With the rapid development of remote sensors, huge volumes of remote sensing data are being used in related applications, bringing new challenges for the efficiency and capability of processing such large datasets. Spatiotemporal remote sensing data fusion can restore high-spatial, high-temporal-resolution remote sensing data from multiple remote sensing datasets. However, current methods require long computing times and have low efficiency, especially the newly proposed deep learning-based methods. Here, we propose a fast spatiotemporal fusion method based on a three-dimensional convolutional neural network (STF3DCNN) using a spatial-temporal-spectral dataset. The method fuses high-temporal, low-spatial-resolution (HTLS) data and high-spatial, low-temporal-resolution (HSLT) data in a four-dimensional spatial-temporal-spectral dataset with improved efficiency while ensuring accuracy. The method was tested on three datasets, the network parameters were discussed, and the method was compared with commonly used spatiotemporal fusion methods to verify our conclusions.
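The building block of such a network is a 3D convolution sliding jointly over the temporal and spatial axes of the data cube. A minimal NumPy sketch of that single operation (a toy stand-in for the paper's full network, with no learned weights or training):

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """'Valid'-mode 3D cross-correlation of a data cube with a kernel.

    Illustrates the core operation of a 3D CNN applied to a
    (time, rows, cols) stack: every output value is the elementwise
    product-sum of the kernel with one 3D window of the cube.
    """
    T, H, W = cube.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(cube[i:i + t, j:j + h, k:k + w] * kernel)
    return out
```

For instance, convolving a 4×5×5 cube of ones with a 2×3×3 kernel of ones yields a 3×3×3 output in which every value is 18 (the kernel's element count); a real STF3DCNN-style network stacks many such learned kernels.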

