High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia

Fire ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 14
Author(s):  
Samuel Hillman ◽  
Bryan Hally ◽  
Luke Wallace ◽  
Darren Turner ◽  
Arko Lucieer ◽  
...  

With an increase in the frequency and severity of wildfires across the globe and resultant changes to long-established fire regimes, the mapping of fire severity is a vital part of monitoring ecosystem resilience and recovery. The emergence of unoccupied aircraft systems (UAS) and compact sensors (RGB and LiDAR) provides new opportunities to map fire severity. This paper compares metrics derived from UAS Light Detection and Ranging (LiDAR) point clouds and UAS image-based products for classifying fire severity. A workflow is developed that derives novel metrics describing vegetation structure and fire severity from UAS remote sensing data, fully utilising the vegetation information available in both data sources. UAS imagery and LiDAR data were captured pre- and post-fire over a 300 m by 300 m study area in Tasmania, Australia. The study area featured a vegetation gradient from sedgeland vegetation (e.g., button grass, ~0.2 m) to forest (e.g., Eucalyptus obliqua and Eucalyptus globulus, ~50 m). To classify vegetation and fire severity, a comprehensive set of variables describing structural, textural and spectral characteristics was gathered from the UAS image and UAS LiDAR datasets. A recursive feature elimination process was used to highlight the subsets of variables to be included in random forest classifiers. The classifiers were then used to map vegetation and severity across the study area. The results indicate that UAS LiDAR provided overall accuracy similar to the UAS image and combined (UAS LiDAR and UAS image predictor values) data streams for classifying vegetation (UAS image: 80.6%; UAS LiDAR: 78.9%; combined: 83.1%) and severity in areas of forest (UAS image: 76.6%; UAS LiDAR: 74.5%; combined: 78.5%) and areas of sedgeland (UAS image: 72.4%; UAS LiDAR: 75.2%; combined: 76.6%). These results indicate that UAS structure-from-motion (SfM) and LiDAR point clouds can be used to assess fire severity at very high spatial resolution.
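The variable-selection step described in this abstract (recursive feature elimination feeding a random forest classifier) can be sketched with scikit-learn. The data, predictor count and class labels below are synthetic placeholders, not the study's actual UAS metrics or severity classes:

```python
# Sketch: recursive feature elimination (RFE) to pick a predictor subset,
# then a random forest classifier trained on that subset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 300, 20
X = rng.normal(size=(n_samples, n_features))  # stand-ins for structural/spectral/textural metrics
y = rng.integers(0, 3, size=n_samples)        # stand-ins for severity classes

# RFE repeatedly drops the least important predictor until 8 remain.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8)
selector.fit(X, y)
X_subset = X[:, selector.support_]            # the retained predictor columns

# Train the final classifier on the selected subset; report cross-validated accuracy.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X_subset, y, cv=5)
print(X_subset.shape, round(scores.mean(), 3))
```

With random labels the accuracy is near chance; the point of the sketch is the shape of the pipeline, not the score.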

2020 ◽  
Vol 12 (7) ◽  
pp. 1145 ◽  
Author(s):  
Dmitry E. Kislov ◽  
Kirill A. Korznikov

Wind disturbances are significant phenomena in forest spatial structure and succession dynamics. They cause changes in biodiversity, impact forest ecosystems at different spatial scales, and strongly affect economies and human livelihoods. The reliable recognition and mapping of windthrow areas are of high importance from the perspective of forest management and nature conservation. Recent research in artificial intelligence and computer vision has demonstrated the considerable potential of neural networks in addressing image classification problems. The most efficient algorithms are based on artificial neural networks of nested and complex architecture (e.g., convolutional neural networks (CNNs)), usually referred to collectively as deep learning. Deep learning provides powerful algorithms for the precise segmentation of remote sensing data. We developed an algorithm based on a U-Net-like CNN, which was trained to recognize windthrow areas on Kunashir Island, Russia. We used satellite imagery of very high spatial resolution (0.5 m/pixel) as source data. We performed a grid search among 216 parameter combinations defining different U-Net-like architectures. The best parameter combination allowed us to achieve an overall accuracy of up to 94% in recognizing windthrow sites in landscapes dominated by coniferous and mixed coniferous forests. We found that the false-positive decisions of our algorithm correspond to either seashore logs, which may look similar to fallen tree trunks, or leafless forest stands. While the former can be rectified by applying a forest mask, the latter requires the use of additional information, which is not always provided by satellite imagery.
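The encoder-decoder-with-skip-connection idea behind a U-Net-like CNN can be illustrated with a minimal single-level model in PyTorch. The depth, filter counts and training setup of the authors' actual architectures are not reproduced here; all sizes below are illustrative:

```python
# Minimal single-level U-Net-style segmenter: encoder, bottleneck, and a
# decoder that concatenates upsampled features with the encoder's skip features.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # The decoder sees upsampled features concatenated with the skip connection,
        # hence 32 input channels.
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        e = self.enc(x)                            # skip-connection features
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

model = TinyUNet()
logits = model(torch.zeros(1, 3, 64, 64))          # one 64x64 RGB tile
print(logits.shape)                                # per-pixel logits, same spatial size
```

A grid search over "parameter combinations defining different U-Net-like architectures", as in the abstract, would vary knobs such as depth and filter counts across many such models.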


2021 ◽  
Vol 227 ◽  
pp. 03002
Author(s):  
Gayrat Yakubov ◽  
Khamid Mubarakov ◽  
Ilkhomjon Abdullaev ◽  
Azizjon Ruziyev

Reliable information on the actual state of agricultural lands is required to develop appropriate measures for their rational use. To obtain such information, permanent and systematic records and inventories of land resources must be kept. Large-scale special plans and maps are required for the accounting, inventory and classification of agricultural land. In Uzbekistan, such cartographic materials are currently created at scales of 1:10,000 and 1:25,000 for administrative and territorial units, farms or individual land plots. The article considers the creation of special 1:10,000-scale maps of agricultural land, using the Sharof Rashidov district of Jizzakh region as an example, from very-high-spatial-resolution KOMPSAT-3 remote sensing data.


2019 ◽  
Vol 28 (11) ◽  
pp. 840
Author(s):  
Jeremy Arkin ◽  
Nicholas C. Coops ◽  
Txomin Hermosilla ◽  
Lori D. Daniels ◽  
Andrew Plowright

Fire severity mapping is conventionally accomplished through the interpretation of aerial photography or the analysis of moderate- to coarse-spatial-resolution pre- and post-fire satellite imagery. Although these methods are well established, there is a demand from both forest managers and fire scientists for higher-spatial-resolution fire severity maps. This study examines the utility of high-spatial-resolution post-fire imagery and digital aerial photogrammetric point clouds acquired from an unmanned aerial vehicle (UAV) to produce integrated fire severity–land cover maps. To accomplish this, a suite of spectral, structural and textural variables was extracted from the UAV-acquired data. Correlation-based feature selection was used to select subsets of variables to be included in random forest classifiers. These classifiers were then used to produce disturbance-based land cover maps at 5- and 1-m spatial resolutions. By analysing maps produced using different variables, the highest-performing spectral, structural and textural variables were identified. The maps were produced with high overall accuracies (5 m: 89.5 ± 1.4%; 1 m: 85.4 ± 1.5%), with the 1-m classification produced at slightly lower accuracies. This reduction was attributed to the inclusion of four additional classes, which increased the thematic detail enough to outweigh the differences in accuracy.
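Correlation-based feature selection, as used in this study, prefers variables that correlate strongly with the class and weakly with each other. A greedy illustrative approximation (not the exact CFS algorithm the authors used) can be sketched with NumPy:

```python
# Greedy correlation-based selection: pick features with high |corr| to the
# target, penalized by mean |corr| to already-selected features.
import numpy as np

def greedy_cfs(X, y, k):
    """Greedily pick k columns: relevance to y minus redundancy with picks."""
    n_features = X.shape[1]
    target_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                            for j in range(n_features)])
    selected = [int(np.argmax(target_corr))]       # start with the most relevant
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = target_corr[j] - redundancy    # relevance minus redundancy
            if score > best_score:
                best_j, best_score = int(j), score
        selected.append(best_j)
    return selected

# Synthetic demo: features 0 and 1 are near-duplicates of y; 2 and 3 are noisier.
rng = np.random.default_rng(1)
y = rng.normal(size=200)
X = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.1, 0.1, 1.0, 5.0)])
selected = greedy_cfs(X, y, 2)
print(selected)
```

Because features 0 and 1 are redundant with each other, the second pick tends to favour a less correlated feature even though its individual relevance is lower.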


Coral Reefs ◽  
2021 ◽  
Author(s):  
E. Casoli ◽  
D. Ventura ◽  
G. Mancini ◽  
D. S. Pace ◽  
A. Belluscio ◽  
...  

Coralligenous reefs are characterized by a wide bathymetric and spatial distribution and high heterogeneity; in shallow environments, they develop mainly on vertical and sub-vertical rocky walls. Detailed information on such habitats is mainly gathered through diver-based techniques. Here, we propose a non-destructive and multi-purpose photo mosaicking method to study and monitor coralligenous reefs developing on vertical walls. High-pixel-resolution images were acquired with three different commercial cameras on a 10 m2 reef, to compare the effectiveness of the photomosaic method with the traditional photoquadrat technique in quantifying the coralligenous assemblage. Results showed very high spatial resolution and accuracy among the photomosaics acquired with the different cameras, and no significant differences from photoquadrats in assessing assemblage composition. Despite the large difference in cost between the recording apparatuses, few differences emerged from the assemblage characterization: across the three photomosaics, twelve taxa/morphological categories covered 97–99% of the sampled surface. Photo mosaicking represents a low-cost method that minimizes the time spent underwater by divers and is capable of providing new opportunities for further studies on shallow coralligenous reefs.
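Percent-cover figures like those quoted above can be computed from a classified photomosaic by counting labelled pixels. A toy sketch with a synthetic label raster (the integer labels are hypothetical stand-ins for taxa/morphological categories, with 0 meaning unclassified):

```python
# Compute per-category percent cover from a classified mosaic raster.
import numpy as np

rng = np.random.default_rng(42)
mosaic_labels = rng.integers(0, 13, size=(500, 500))   # labels 0..12; 0 = unclassified

labels, counts = np.unique(mosaic_labels, return_counts=True)
total = mosaic_labels.size
cover = {int(l): 100 * c / total for l, c in zip(labels, counts)}  # % cover per label

# Fraction of the sampled surface covered by the 12 labelled categories.
classified = 100 * (mosaic_labels != 0).sum() / total
print(f"classified cover: {classified:.1f}%")
```

On a real mosaic the label raster would come from manual annotation or supervised classification of the stitched image rather than random integers.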


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guillaume Lassalle ◽  
Sophie Fabre ◽  
Anthony Credoz ◽  
Rémy Hédacq ◽  
Dominique Dubucq ◽  
...  

Monitoring plant metal uptake is essential for assessing the ecological risks of contaminated sites. While the traditional techniques used to achieve this are destructive, Visible Near-Infrared (VNIR) reflectance spectroscopy represents a good alternative for monitoring pollution remotely. Building on previous work, this study proposes a methodology for mapping the content of several metals in leaves (Cr, Cu, Ni and Zn) under realistic field conditions and from airborne imaging. For this purpose, the reflectance of Rubus fruticosus L., a pioneer species of industrial brownfields, was linked to leaf metal contents using optimized normalized vegetation indices. High correlations were found between the vegetation indices exploiting pigment-related wavelengths and leaf metal contents (r ≤ −0.76 for Cr, Cu and Ni, and r ≥ 0.87 for Zn). This allowed the metal contents to be predicted with good accuracy in the field and on the image, especially for Cu and Zn (r ≥ 0.84 and RPD ≥ 2.06). The same indices were applied over the entire study site to map the metal contents at very high spatial resolution. This study demonstrates the potential of remote sensing for assessing metal uptake by plants, opening perspectives for application in risk assessment and phytoextraction monitoring in the context of trace metal pollution.
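A normalized vegetation index of the kind described here takes two reflectance bands, ND = (R_a − R_b) / (R_a + R_b), and is correlated against measured leaf metal content. The sketch below uses synthetic reflectance at two hypothetical pigment-related wavelengths and synthetic metal contents; the band pair and the linear response are illustrative assumptions, not the study's optimized indices:

```python
# Build a normalized difference index from two bands and compute its
# Pearson correlation with leaf metal content.
import numpy as np

rng = np.random.default_rng(7)
n = 50
metal = rng.uniform(5, 50, size=n)   # synthetic leaf metal content (e.g., mg/kg)

# Synthetic reflectances: assume metal stress lowers the ~550 nm band and
# slightly raises the ~700 nm band (illustrative response only).
r_550 = 0.30 - 0.002 * metal + rng.normal(scale=0.005, size=n)
r_700 = 0.20 + 0.001 * metal + rng.normal(scale=0.005, size=n)

nd_index = (r_550 - r_700) / (r_550 + r_700)   # normalized difference index
r = np.corrcoef(nd_index, metal)[0, 1]         # Pearson r vs. metal content
print(round(r, 2))
```

With this construction the index decreases as metal content rises, giving a strong negative correlation analogous in sign to the r ≤ −0.76 values reported for Cr, Cu and Ni.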

