Integrated fire severity–land cover mapping using very-high-spatial-resolution aerial imagery and point clouds

2019, Vol. 28 (11), pp. 840
Author(s):  
Jeremy Arkin ◽  
Nicholas C. Coops ◽  
Txomin Hermosilla ◽  
Lori D. Daniels ◽  
Andrew Plowright

Fire severity mapping is conventionally accomplished through the interpretation of aerial photography or the analysis of moderate- to coarse-spatial-resolution pre- and post-fire satellite imagery. Although these methods are well established, there is demand from both forest managers and fire scientists for higher-spatial-resolution fire severity maps. This study examines the utility of high-spatial-resolution post-fire imagery and digital aerial photogrammetric point clouds acquired from an unmanned aerial vehicle (UAV) for producing integrated fire severity–land cover maps. To accomplish this, a suite of spectral, structural and textural variables was extracted from the UAV-acquired data. Correlation-based feature selection was used to select subsets of variables for inclusion in random forest classifiers, which were then used to produce disturbance-based land cover maps at 5- and 1-m spatial resolutions. By analysing maps produced with different variables, the highest-performing spectral, structural and textural variables were identified. Both maps achieved high overall accuracies (5 m, 89.5 ± 1.4%; 1 m, 85.4 ± 1.5%). The slightly lower accuracy of the 1-m classification was attributed to its four additional classes, which increased the thematic detail enough to outweigh the difference in accuracy.
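The feature-selection step described above can be sketched in a few lines. The following is a minimal, illustrative numpy implementation of the standard CFS merit (mean feature–class correlation penalised by feature–feature redundancy) with a greedy forward search; the function names and synthetic data are assumptions, not the authors' code, and in practice the selected subset would feed a random forest classifier (e.g. scikit-learn's `RandomForestClassifier`).

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset:
    k * mean|corr(feature, class)| / sqrt(k + k*(k-1)*mean|corr(feature, feature)|)."""
    k = len(subset)
    # mean absolute feature-class correlation
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    # mean absolute feature-feature correlation over all pairs in the subset
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X, y, max_features=3):
    """Greedy forward search: repeatedly add the feature that most
    improves the merit, stopping when no candidate helps."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        best_score, best_j = max((cfs_merit(X, y, selected + [j]), j)
                                 for j in remaining)
        if selected and best_score <= cfs_merit(X, y, selected):
            break  # no candidate improves the current subset
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Redundant variables (e.g. several highly correlated texture measures) are penalised through the `r_ff` term, which is why CFS tends to return a compact subset from a large suite of spectral, structural and textural candidates.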

Author(s):  
Xiaoliang Zou ◽  
Guihua Zhao ◽  
Jonathan Li ◽  
Yuanxi Yang ◽  
Yong Fang

With the rapid development of sensor technology, high-spatial-resolution imagery and airborne Lidar point clouds can now be captured, making the classification, extraction, evaluation and analysis of a broad range of object features feasible. High-resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification, making refined object classification possible for urban land cover. This paper presents an object-based image analysis (OBIA) approach combining high-spatial-resolution imagery and airborne Lidar point clouds. The workflow for urban land cover is designed with four components. Firstly, a colour-infrared TrueOrtho photo and laser point clouds are pre-processed to derive a parcel map of water bodies and a normalised digital surface model (nDSM), respectively. Secondly, image objects are created via multi-resolution image segmentation, which integrates the scale parameter and the colour and shape properties with a compactness criterion and subdivides the image into separate object regions. Thirdly, the image objects are classified on the basis of the segmentation and a rule set expressed as a knowledge decision tree, assigning each object to one of six classes: water bodies, low vegetation/grass, tree, low building, high building and road. Finally, to assess the validity of the results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points from the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area is the Vaihingen/Enz test site, with a patch of test data from the ISPRS WG III/4 benchmark project. The classification results show high overall accuracy for most types of urban land cover: overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high-resolution imagery and Lidar ancillary data for classification of urban land cover.
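The rule set in the third step can be illustrated as a simple decision tree over per-object attributes. The thresholds and attribute names below are hypothetical stand-ins (the paper does not publish its exact rules); the structure, splitting vegetated from non-vegetated objects by a vegetation index and then by nDSM height, follows the six-class scheme described above.

```python
def classify_object(ndvi, ndsm_height, in_water_parcel):
    """Toy knowledge decision tree for the six urban classes.
    ndvi: mean vegetation index of the object (from the CIR TrueOrtho photo)
    ndsm_height: mean normalised DSM height in metres (from the Lidar nDSM)
    in_water_parcel: whether the object falls inside the water-body parcel map
    All thresholds are illustrative assumptions, not the authors' values."""
    if in_water_parcel:
        return "water bodies"
    if ndvi > 0.3:  # vegetated objects, split by height
        return "tree" if ndsm_height > 2.0 else "low vegetation/grass"
    # non-vegetated objects, split by height
    if ndsm_height > 10.0:
        return "high building"
    if ndsm_height > 2.0:
        return "low building"
    return "road"
```

The value of the Lidar-derived nDSM is visible here: spectrally similar pairs (grass vs. tree, road vs. building roof) are separated by height rather than by colour alone.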


Author(s):  
R. Suresh Kumar ◽  
A. R. Mahesh Balaji

Recent developments in satellite sensors provide images with very high spatial resolution that aid detailed mapping of Land Use Land Cover (LULC). However, the heterogeneity of landscapes often results in spectral variation within the same class and spectral confusion among different LULC classes at finer spatial resolutions, leading to poor performance from traditional spectral-based classification. Many studies have attempted to improve such classification by incorporating texture information alongside multispectral images. Although different methods are available to extract textures from satellite images, only a limited number of studies have compared their classification performance. The major problem with existing texture measures is that they are either variant to scale, orientation and illumination (Haralick textures), computationally expensive (Gabor textures) or less informative (local binary patterns). This paper explores the use of texture information captured by Local Multiple Patterns (LMP) for LULC classification in a spectral-spatial classifier framework. LMP preserves more structural information and involves less computational effort, and is thus expected to be more promising for capturing spatial information from very-high-spatial-resolution images. The proposed method is implemented with spectral bands and LMP derived from WorldView-2 multispectral imagery acquired over the Madurai, India, study area. A multi-layer perceptron neural network is used as the classifier. The proposed classification method is compared separately with LBP and with conventional Maximum Likelihood Classification (MLC). The classification results, with an overall accuracy of 89.5%, demonstrate the improvement offered by LMP for LULC classification in comparison with the conventional approaches.
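The LMP descriptor can be sketched as a multi-threshold generalisation of LBP: each of the 8 neighbours is compared with the centre pixel at several thresholds, and the resulting binary maps are packed into one integer code per pixel. This numpy sketch is an assumption about the general form of such operators, not the authors' exact formulation; with a single zero threshold it reduces to the classic 8-bit LBP code.

```python
import numpy as np

def local_multiple_patterns(img, thresholds=(0.0, 5.0)):
    """Multi-threshold local pattern codes (an LBP generalisation).
    Compares each 8-neighbour with the centre pixel at every threshold
    and packs the binary results into one integer code per interior pixel."""
    h, w = img.shape
    # 8-neighbour offsets, clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1].astype(float)
    codes = np.zeros((h - 2, w - 2), dtype=np.int64)
    for t_idx, t in enumerate(thresholds):
        for bit, (dy, dx) in enumerate(offsets):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(float)
            # set bit if the neighbour exceeds the centre by at least t
            codes |= (neigh - centre >= t).astype(np.int64) << (8 * t_idx + bit)
    return codes
```

In a spectral-spatial framework, histograms of these codes over a moving window would be appended to the spectral bands as additional input features for the MLP classifier.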

