Potential of High-Resolution Pléiades Imagery to Monitor Salt Marsh Evolution After Spartina Invasion

2019 ◽  
Vol 11 (8) ◽  
pp. 968 ◽  
Author(s):  
Bárbara Proença ◽  
Frédéric Frappart ◽  
Bertrand Lubac ◽  
Vincent Marieu ◽  
Bertrand Ygorra ◽  
...  

An early assessment of biological invasions is important for initiating conservation strategies. Instrumental progress in high spatial resolution (HSR) multispectral satellite sensors has greatly improved the capability to monitor ecosystems at increasingly fine scales. However, species detection remains challenging in environments characterized by highly variable mixtures of vegetation and other elements, such as water, sediment, and biofilm. In this study, we explore the potential of Pléiades HSR multispectral images to detect and monitor changes in the salt marshes of the Bay of Arcachon (SW France) after the invasion of Spartina anglica. Because of the small size of Spartina patches, the spatial and temporal monitoring of Spartina species relies on the analysis of five multispectral images at 2 m spatial resolution, acquired over the study site between 2013 and 2017. Various land use classification techniques were evaluated for distinguishing between the different types of vegetation. The results are described and interpreted using a set of ground truth data, including field reflectance measurements, a drone flight, historical aerial photographs, and GNSS and photographic surveys. A preliminary qualitative analysis of NDVI maps showed that a multi-temporal approach, which takes into account the delayed development of species, can successfully discriminate Spartina species (sp.). Supervised and unsupervised classifications were then evaluated for the identification of Spartina sp. The performance of species identification depended strongly on the degree of environmental noise in the image, which is season-dependent. Identification accuracy for the native Spartina exceeded 75%, a result strongly affected by intra-patch variability and, specifically, by the presence of areas with low vegetation density.
Further, for the invasive Spartina anglica, using a supervised classifier rather than an unsupervised one increases classification accuracy from 10% to 90%. However, both algorithms greatly overestimate the areas assigned to this species. Finally, the results highlight that identification of the invasive species depends strongly on both the seasonal presence of itinerant biological features and the size of vegetation patches. We believe the results could be substantially improved by a coupled approach that combines spectral and spatial information, i.e., pattern-recognition techniques.
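The multi-temporal NDVI screening described above can be sketched as follows; the band values, the two dates, and the 0.3 change threshold are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

# Two hypothetical acquisition dates: the invasive species greens up
# later in the season, so the change in NDVI between dates separates it.
red_t1 = np.array([[0.10, 0.20], [0.15, 0.25]])
nir_t1 = np.array([[0.40, 0.22], [0.45, 0.27]])
red_t2 = np.array([[0.12, 0.10], [0.14, 0.12]])
nir_t2 = np.array([[0.42, 0.50], [0.44, 0.55]])

delta_ndvi = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
late_developers = delta_ndvi > 0.3   # threshold is illustrative only
```

Pixels whose NDVI rises sharply between the early and late acquisitions are the "delayed development" candidates the qualitative analysis exploits.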

2018 ◽  
Vol 10 (11) ◽  
pp. 1827 ◽  
Author(s):  
Ahram Song ◽  
Jaewan Choi ◽  
Youkyung Han ◽  
Yongil Kim

Hyperspectral change detection (CD) can be performed effectively with deep-learning networks. However, these approaches require qualified training samples, and ground-truth data are difficult to obtain in the real world; moreover, structural limitations of such networks make it hard to preserve spatial information during training. To address these problems, this study proposes a novel CD method for hyperspectral images (HSIs), comprising sample generation and a deep-learning network called the recurrent three-dimensional (3D) fully convolutional network (Re3FCN), which merges the advantages of a 3D fully convolutional network (FCN) and convolutional long short-term memory (ConvLSTM). Principal component analysis (PCA) and the spectral correlation angle (SCA) were used to generate training samples with high probabilities of being changed or unchanged; this strategy enables training on fewer, more representative samples. The Re3FCN mainly comprises spectral–spatial and temporal modules. In particular, a spectral–spatial module with 3D convolutional layers extracts spectral and spatial features from the HSIs simultaneously, while a temporal module with ConvLSTM records and analyzes the multi-temporal HSI change information. The study first proposes a simple and effective method to generate samples for network training, which can also be applied to cases with no training samples. Re3FCN performs end-to-end detection for binary and multiple changes, and it can receive multi-temporal HSIs directly as input without separately learning the characteristics of multiple changes. Finally, the network extracts joint spectral–spatial–temporal features and preserves the spatial structure during learning through its fully convolutional structure. This study is the first to use a 3D FCN and ConvLSTM for remote-sensing CD. To demonstrate the effectiveness of the proposed CD method, we performed binary and multi-class CD experiments.
Results revealed that the Re3FCN outperformed conventional methods such as change vector analysis, iteratively reweighted multivariate alteration detection, PCA-SCA, FCN, and the combination of 2D convolutional layers with fully connected LSTM.
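The sample-generation idea rests on the spectral correlation angle between a pixel's spectra at the two dates; a minimal sketch (the spectra and the interpretation thresholds are illustrative, and the paper's PCA step is omitted):

```python
import numpy as np

def spectral_correlation_angle(s1, s2):
    """SCA between two pixel spectra: arccos((rho + 1) / 2), where rho
    is the Pearson correlation coefficient between the spectra."""
    rho = np.corrcoef(s1, s2)[0, 1]
    return float(np.arccos(np.clip((rho + 1.0) / 2.0, 0.0, 1.0)))

# Hypothetical pseudo-labelling: a spectrum whose shape is unchanged
# between dates (here a scaled copy) gives an angle near zero and would
# be sampled as "unchanged"; a reshaped spectrum gives a large angle.
t1 = np.array([0.2, 0.4, 0.6, 0.5])
t2_same = 1.1 * t1
t2_diff = np.array([0.6, 0.3, 0.2, 0.55])

angle_same = spectral_correlation_angle(t1, t2_same)
angle_diff = spectral_correlation_angle(t1, t2_diff)
```

Pixels at the extremes of the angle distribution become high-confidence "changed"/"unchanged" training samples for the network.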


2020 ◽  
Vol 12 (6) ◽  
pp. 1009
Author(s):  
Xiaoxiao Feng ◽  
Luxiao He ◽  
Qimin Cheng ◽  
Xiaoyi Long ◽  
Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution but low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) but low spectral resolution. HS–MS image fusion can combine the advantages of both, which benefits accurate feature classification. Nevertheless, heterogeneous sensors often introduce temporal differences between the LSR-HS and HSR-MS images in real cases, so classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the differences between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. The endmembers of the HSR-MS image can then be obtained, based on the premise that the HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of an HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experiments on simulated and real datasets substantiated the effectiveness of our method, both quantitatively and visually.
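The unmixing step rests on the linear mixing model, pixel = E·a for an endmember matrix E and abundance vector a; a toy sketch with synthetic spectra (the paper's full method additionally enforces non-negativity and uses an image mask, both omitted here):

```python
import numpy as np

# Hypothetical toy case: 5 bands, 2 endmembers extracted from the
# invariant regions of the LSR-HS image (all values are illustrative).
E = np.array([[0.10, 0.80],
              [0.20, 0.70],
              [0.40, 0.50],
              [0.60, 0.30],
              [0.70, 0.20]])          # columns = endmember spectra

a_true = np.array([0.3, 0.7])         # abundances, summing to one
pixel = E @ a_true                    # observed mixed-pixel spectrum

# Recover the abundances by least squares on the mixing model.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Stacking such per-pixel abundance estimates for the HSR-MS image against the LSR-HS endmember spectra yields the two matrices whose product forms the fused image.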


2021 ◽  
Author(s):  
Rupsa Chakraborty ◽  
Gabor Kereszturi ◽  
Reddy Pullanagari ◽  
Patricia Durance ◽  
Salman Ashraf ◽  
...  

Geochemical mineral prospecting approaches are mostly point-based surveys that rely on statistical spatial extrapolation to cover larger areas of interest. This leads to a trade-off between sampling density and the attributes that can be mapped (e.g., elemental distribution). Airborne hyperspectral data are typically high resolution, spatially continuous, and spectrally contiguous, providing a versatile baseline to complement ground-based prospecting and monitoring. In this study, we benchmark various shallow and deep feature extraction algorithms on airborne hyperspectral data at three spatial resolutions: 0.8 m, 2 m, and 3 m. Spatial resolution is a key factor in detailed, scale-dependent mineral prospecting and geological mapping. Airborne hyperspectral data have the potential to advance the delineation of new mineral deposits, and this approach can be extended to large areas using forthcoming spaceborne hyperspectral platforms, for which procuring finer spatial resolution data is highly challenging. The study area lies along the Rise and Shine Shear Zone (RSSZ) within the Otago schist, in the South Island of New Zealand. The RSSZ contains gold and associated hydrothermal sulphides and carbonate minerals disseminated through sheared upper greenschist facies rocks on the 10-metre scale, as well as localised (metre-scale) quartz-rich zones. Soil and rock samples were collected from 63 locations scattered around known mineralised and unmineralised zones, providing ground truth data for benchmarking. The separability between mineralised and unmineralised samples in laboratory-based spectral datasets was analysed by applying partial least squares discriminant analysis (PLS-DA) to the XRF spectra and to the laboratory-based hyperspectral data separately.
The preliminary results indicate that, even in partially vegetated zones, mineralised regions can be mapped relatively accurately from airborne hyperspectral images using orthogonal total variation component analysis (OTVCA), which extracts features by optimising a cost function that best fits the hyperspectral data in a lower-dimensional feature space while enforcing spatial smoothness of the features through total variation regularisation.


2019 ◽  
Vol 11 (9) ◽  
pp. 1005
Author(s):  
Jiahui Qu ◽  
Yunsong Li ◽  
Qian Du ◽  
Wenqian Dong ◽  
Bobo Xi

Hyperspectral pansharpening is an effective technique to obtain a high spatial resolution hyperspectral (HS) image. In this paper, a new hyperspectral pansharpening algorithm based on homomorphic filtering and a weighted tensor matrix (HFWT) is proposed. In the proposed HFWT method, an open-closing morphological operation is utilized to remove noise from the HS image, and homomorphic filtering is introduced to extract the spatial details of each band of the denoised HS image. More importantly, a weighted root-mean-squared-error-based method is proposed to obtain the total spatial information of the HS image, and an optimized weighted-tensor-matrix-based strategy is presented to integrate the spatial information of the HS image with that of the panchromatic (PAN) image. With appropriate injection of the integrated spatial details, the fused HS image is generated by constructing a suitable gain matrix. Experimental results on both simulated and real datasets demonstrate that the proposed HFWT method effectively generates a fused HS image with high spatial resolution while maintaining the spectral information of the original low spatial resolution HS image.
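The homomorphic filtering step separates slowly varying illumination from fine spatial detail by filtering in the log domain; a generic sketch (the Gaussian high-pass and its cutoff are assumptions for illustration, not the paper's exact filter design):

```python
import numpy as np

def homomorphic_spatial_detail(band, cutoff=0.1):
    """Extract spatial detail from one HS band via homomorphic
    filtering: log -> Fourier high-pass -> exp."""
    log_band = np.log1p(band.astype(float))
    F = np.fft.fftshift(np.fft.fft2(log_band))
    rows, cols = band.shape
    u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    highpass = 1.0 - np.exp(-(u**2 + v**2) / (2 * cutoff**2))
    detail = np.fft.ifft2(np.fft.ifftshift(F * highpass)).real
    return np.expm1(detail)

# Smooth illumination gradient plus a small bright square of "detail".
band = np.outer(np.linspace(0.2, 0.8, 32), np.ones(32))
band[12:20, 12:20] += 0.5
detail = homomorphic_spatial_detail(band)
```

The high-pass response concentrates around the fine structure, while the smooth gradient (the "illumination" term) is largely suppressed.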


2019 ◽  
Vol 11 (6) ◽  
pp. 690 ◽  
Author(s):  
Shengjie Liu ◽  
Zhixin Qi ◽  
Xia Li ◽  
Anthony Yeh

Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping with optical and synthetic aperture radar (SAR) images because it can exploit spatial information, reduce salt-and-pepper noise, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art classification algorithms. However, CNNs cannot be easily integrated with OBIA because the processing unit of a CNN is a rectangular image patch, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) with CNNs for LULC mapping using Sentinel optical and SAR data. After the CNN produces a classification map, each image object is labeled with the most frequent land cover category among its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset with 10 m spatial resolution, the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset with 100 m spatial resolution, and a hyperspectral benchmark, the University of Pavia dataset, with 1.3 m spatial resolution. It outperformed OBIA with support vector machines (SVM) and random forests (RF). SVM and RF benefited more from the combined use of optical and SAR data than the CNN did, whereas the spatial information learned by the CNN was very effective for classification. With the ability to extract spatial features while maintaining object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets, achieving an overall accuracy (OA) of 95.33% on the Sentinel Guangzhou dataset, 77.64% on the Zhuhai-Macau LCZ dataset, and 95.70% on the University of Pavia dataset with only 10 labeled samples per class.
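The OBPR step itself is simple: once the CNN has classified every pixel, each segment is relabeled with the majority class among its pixels. A minimal sketch with a hypothetical segmentation and a few noisy CNN labels:

```python
import numpy as np

def object_based_refinement(cnn_labels, segments):
    """Relabel every image object (segment) with the most frequent
    CNN class among its pixels -- the OBPR majority vote."""
    refined = np.empty_like(cnn_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        classes, counts = np.unique(cnn_labels[mask], return_counts=True)
        refined[mask] = classes[np.argmax(counts)]
    return refined

# Hypothetical 4x4 scene: two segments, with two noisy CNN pixels.
segments   = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 1, 1]])
cnn_labels = np.array([[2, 2, 5, 5],
                       [2, 9, 5, 5],
                       [2, 2, 5, 3],
                       [2, 2, 5, 5]])
refined = object_based_refinement(cnn_labels, segments)
```

The vote removes the isolated misclassifications while preserving the object boundaries from the segmentation.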


2020 ◽  
Vol 12 (23) ◽  
pp. 3958
Author(s):  
Parwati Sofan ◽  
David Bruce ◽  
Eriita Jones ◽  
M. Rokhis Khomarudin ◽  
Orbita Roswintiarti

This study establishes a new technique for peatland fire detection in tropical environments using Landsat-8 and Sentinel-2. The Tropical Peatland Combustion Algorithm (ToPeCAl) without longwave thermal infrared (TIR) bands (henceforth ToPeCAl-2) was tested on Landsat-8 Operational Land Imager (OLI) data and then applied to Sentinel-2 Multi Spectral Instrument (MSI) data. The research aims to provide peatland fire information over Indonesia's peatlands at higher spatial resolution and with more frequent observation than Landsat-8 allows. ToPeCAl-2 applied to Sentinel-2 was assessed by comparison with fires detected by the original ToPeCAl applied to Landsat-8 OLI/Thermal Infrared Sensor (TIRS) data, which were verified against ground truth data. ToPeCAl-2 was adjusted to minimise false positive errors by implementing pre-processing masks for water and permanent bright objects and by filtering its detected fires through contextual testing and cloud masking. Applied to Sentinel-2, both ToPeCAl-2 with the contextual test and ToPeCAl-2 with the cloud mask detected unambiguous fire pixels with high accuracy (>95%) at 20 m spatial resolution. Smouldering pixels were less likely to be detected: the smouldering pixels detected by ToPeCAl-2 on Sentinel-2 with contextual testing and with cloud masking were only 35% and 56% correct, respectively, which needs further investigation and validation. These results demonstrate that, even in the absence of TIR data, the adjusted algorithm (ToPeCAl-2) can detect peatland fires at 20 m resolution with high accuracy, especially for flaming fires. Overall, applying ToPeCAl to cost-free Landsat-8 and Sentinel-2 data enables regular peatland fire monitoring in tropical environments at higher spatial resolution than other satellite-derived fire products.
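Water pre-masking of the kind mentioned above is commonly done with a normalized difference water index; a generic sketch (the NDWI formulation and the zero threshold are assumptions for illustration, not ToPeCAl-2's exact test):

```python
import numpy as np

def water_mask(green, nir, threshold=0.0):
    """Flag water pixels with NDWI = (G - NIR) / (G + NIR); water
    reflects more in the green band than in the near-infrared."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndwi = (green - nir) / (green + nir + 1e-9)
    return ndwi > threshold

# Three hypothetical pixels: open water, bare soil, dense vegetation.
green = np.array([0.30, 0.25, 0.10])
nir   = np.array([0.05, 0.30, 0.40])
mask = water_mask(green, nir)
```

Pixels flagged by such a mask are excluded before the combustion tests, which removes one common source of false positives.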


2020 ◽  
Vol 12 (6) ◽  
pp. 993 ◽  
Author(s):  
Chen Yi ◽  
Yong-qiang Zhao ◽  
Jonathan Cheung-Wai Chan ◽  
Seong G. Kong

This paper presents a joint spatial-spectral resolution enhancement technique that improves the resolution of multispectral images in the spatial and spectral domains simultaneously. Hyperspectral images (HSIs) reconstructed from an input multispectral image represent the same scene at higher spatial resolution, with more spectral bands of narrower wavelength width than the input. Many existing techniques focus on spatial- or spectral-resolution enhancement alone, which may cause spectral distortions and spatial inconsistency. The proposed scheme introduces virtual intermediate variables to formulate a spectral observation model and a spatial observation model, which alternately solve for a spectral dictionary and abundances to reconstruct the desired high-resolution HSIs. An initial spectral dictionary is trained from prior HSIs captured over different landscapes. A spatial dictionary trained from a panchromatic image, together with its sparse coefficients, provides high spatial-resolution information; the sparse coefficients are used as constraints to obtain high spatial-resolution abundances. Experiments on simulated datasets from AVIRIS/Landsat 7 and a real Hyperion/ALI dataset demonstrate that the proposed method outperforms state-of-the-art spatial- and spectral-resolution enhancement methods. The proposed method also works well in combination with existing spatial- and spectral-resolution enhancement methods.


2018 ◽  
Vol 10 (12) ◽  
pp. 1992 ◽  
Author(s):  
Zixi Xie ◽  
Weiguo Song ◽  
Rui Ba ◽  
Xiaolian Li ◽  
Long Xia

Two of the main remote sensing data resources for forest fire detection have significant drawbacks: geostationary Earth observation (EO) satellites have high temporal resolution but low spatial resolution, whereas polar-orbiting systems have high spatial resolution but low temporal resolution. Consequently, existing forest fire detection algorithms based on only one of these two systems exploit either temporal or spatial information, but not both; no approach yet effectively merges spatial and temporal characteristics to detect forest fires. This paper fills that gap with a spatiotemporal contextual model (STCM) that fully exploits the spatial and temporal dimensions of geostationary data from the Himawari-8 satellite. We used an improved robust fitting algorithm to model each pixel's diurnal temperature cycle (DTC) in the middle and long infrared bands. For each pixel, a Kalman filter was used to blend the DTC and estimate the true background brightness temperature. Subsequently, we utilized the Otsu method to identify fires, after applying a threshold on a monthly maximum value composite (MVC) of NDVI to test which areas hold enough fuel to support such events. Finally, we used a continuous time-slot test to correct the fire detection results. The proposed algorithm was applied to four fire cases in East Asia and Australia in 2016. A comparison of detection results with the MODIS Terra and Aqua active fire products (MOD14 and MYD14) demonstrated that the proposed algorithm effectively exploits the spatiotemporal information contained in multi-temporal remotely sensed data, and that this new forest fire detection method achieves higher detection accuracy than traditional contextual and temporal algorithms.
By developing algorithms based on AHI measurements that meet the requirement to detect forest fires promptly and accurately, this paper helps both emergency responders and the general public mitigate the damage of forest fires.
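The per-pixel background estimation can be illustrated with a scalar Kalman filter that blends successive brightness-temperature observations; the process and measurement variances q and r below are illustrative, not the paper's tuned values:

```python
import numpy as np

def kalman_background(obs, q=0.01, r=0.25):
    """Scalar Kalman filter blending successive brightness-temperature
    observations into a smoothed background estimate."""
    x, p = obs[0], 1.0          # initial state and state variance
    background = [x]
    for z in obs[1:]:
        p = p + q               # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update toward the new observation
        p = (1 - k) * p
        background.append(x)
    return np.array(background)

# Hypothetical noisy observations around a 300 K background.
obs = 300 + np.array([0.0, 0.4, -0.3, 0.2, -0.1, 0.3])
bg = kalman_background(obs)
```

A pixel whose observed brightness temperature rises far above this filtered background is then a candidate fire for the subsequent Otsu and contextual tests.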

