Mapping Vegetation at Species Level with High-Resolution Multispectral and Lidar Data Over a Large Spatial Area: A Case Study with Kudzu

2020 ◽  
Vol 12 (4) ◽  
pp. 609
Author(s):  
Wanwan Liang ◽  
Mongi Abidi ◽  
Luis Carrasco ◽  
Jack McNelis ◽  
Liem Tran ◽  
...  

Mapping vegetation species is critical to facilitate related quantitative assessment, and mapping invasive plants is important to enhance monitoring and management activities. Integrating high-resolution multispectral remote-sensing (RS) images and lidar (light detection and ranging) point clouds can provide robust features for vegetation mapping. However, using multiple sources of high-resolution RS data for vegetation mapping on a large spatial scale can be both computationally and sampling intensive. Here, we designed a two-step classification workflow aimed at decreasing computational cost and sampling effort and at increasing classification accuracy by integrating multispectral and lidar data to derive spectral, textural, and structural features for mapping target vegetation species. We used this workflow to classify kudzu, an aggressive invasive vine, in the entire Knox County (1362 km²) of Tennessee (U.S.). Object-based image analysis was conducted in the workflow. The first-step classification used 320 kudzu samples and extensive, coarsely labeled samples (based on national land cover) to generate an overprediction map of kudzu using random forest (RF). For the second step, 350 samples were randomly extracted from the overpredicted kudzu and labeled manually for the final prediction using RF and a support vector machine (SVM). Computationally intensive features were used only for the second-step classification. SVM had consistently better accuracy than RF, and the producer's accuracy, user's accuracy, and Kappa for the SVM model on kudzu were 0.94, 0.96, and 0.90, respectively. SVM predicted 1010 kudzu patches covering 1.29 km² in Knox County. We found that the sample size of kudzu used for algorithm training affected both the accuracy and the number of kudzu patches predicted. The proposed workflow could also improve sampling efficiency and specificity.
Our workflow had much higher accuracy than the traditional method conducted in this research and could easily be implemented to map kudzu in other regions as well as to map other vegetation species.
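As a hedged illustration (not the authors' code or data), the two-step idea can be sketched with scikit-learn on synthetic features: a permissive first-pass random forest flags candidate objects, and the computationally intensive features plus an SVM are applied only to those candidates. All feature names, thresholds, and labels below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Cheap first-step features (standing in for spectral bands).
X_cheap = rng.normal(size=(2000, 4))
y_coarse = (X_cheap[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

# Step 1: random forest with a deliberately low decision threshold,
# so the target class is overpredicted rather than missed.
rf = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
rf.fit(X_cheap, y_coarse)
proba = rf.predict_proba(X_cheap)[:, 1]
candidates = np.where(proba > 0.2)[0]

# Step 2: expensive features (standing in for texture and lidar structure)
# are derived only for the candidates; an SVM makes the final prediction
# from a manually checked subset of labels.
X_rich = np.hstack([X_cheap[candidates],
                    rng.normal(size=(len(candidates), 4))])
y_manual = y_coarse[candidates]          # stand-in for manual labels
svm = SVC(kernel="rbf").fit(X_rich, y_manual)
final = svm.predict(X_rich)
print(candidates.size, final.size)
```

The design point is that the costly second-step features are computed for only the candidate subset, not the full scene, which is what keeps the county-scale run tractable.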



Forests ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1697
Author(s):  
Hui Li ◽  
Baoxin Hu ◽  
Qian Li ◽  
Linhai Jing

Deep learning (DL) has shown promising performance as a powerful tool in various remote sensing applications. To explore the potential of DL for improving the accuracy of individual tree species (ITS) classification, four convolutional neural network models (ResNet-18, ResNet-34, ResNet-50, and DenseNet-40) were employed to classify four tree species using combined high-resolution satellite imagery and airborne LiDAR data. A total of 1503 samples of four tree species, including maple, pine, locust, and spruce, were used in the experiments. When both WorldView-2 and airborne LiDAR data were used, the overall accuracies (OA) obtained by ResNet-18, ResNet-34, ResNet-50, and DenseNet-40 were 90.9%, 89.1%, 89.1%, and 86.9%, respectively. The OA of ResNet-18 was increased by 4.0% and 1.8% compared with random forest (86.7%) and support vector machine (89.1%), respectively. The experimental results demonstrated that the size of the input images affected the classification accuracy of ResNet-18; it is suggested that the input size of ResNet models be determined according to the maximum size of all tree crown sample images. The use of the LiDAR intensity image helped improve the accuracy of ITS classification, and atmospheric correction was unnecessary when both pansharpened WorldView-2 images and airborne LiDAR data were used.
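The suggested input-size rule can be sketched as follows (a hypothetical illustration, not the authors' pipeline): every tree crown chip is zero-padded to the maximum crown extent in the sample set, so a fixed-input CNN such as ResNet-18 sees each crown whole without rescaling distortion. The chip sizes and band count below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic crown chips of varying size, shaped (bands, height, width).
chips = [rng.random((8, h, w)) for h, w in [(11, 9), (17, 15), (23, 21)]]

# Input size = maximum crown extent across all sample chips.
side = max(max(c.shape[1], c.shape[2]) for c in chips)

def pad_to(chip, side):
    """Center-pad a (bands, h, w) chip with zeros to (bands, side, side)."""
    b, h, w = chip.shape
    out = np.zeros((b, side, side), dtype=chip.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    out[:, top:top + h, left:left + w] = chip
    return out

batch = np.stack([pad_to(c, side) for c in chips])
print(batch.shape)  # (3, 8, 23, 23)
```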


2020 ◽  
Vol 12 (7) ◽  
pp. 1218
Author(s):  
Laura Tuşa ◽  
Mahdi Khodadadzadeh ◽  
Cecilia Contreras ◽  
Kasra Rafiezadeh Shahi ◽  
Margret Fuchs ◽  
...  

Due to the extensive drilling performed every year in exploration campaigns for the discovery and evaluation of ore deposits, drill-core mapping is becoming an essential step. While valuable mineralogical information is extracted during core logging by on-site geologists, the process is time consuming and depends on the observer's individual background. Hyperspectral short-wave infrared (SWIR) data are used in the mining industry as a tool to complement traditional logging techniques and to provide a rapid, non-invasive analytical method for mineralogical characterization. Additionally, scanning electron microscopy-based image analyses using a Mineral Liberation Analyser (SEM-MLA) provide exhaustive high-resolution mineralogical maps, but can only be performed on small areas of the drill-cores. We propose to use machine learning algorithms to combine the two data types and upscale the quantitative SEM-MLA mineralogical data to the drill-core scale. In this way, quasi-quantitative maps over entire drill-core samples are obtained. Our upscaling approach increases result transparency and reproducibility by combining physics-based data acquisition (hyperspectral imaging) with mathematical models (machine learning). The procedure is tested on five drill-core samples with varying training data using random forest, support vector machine, and neural network regression models. The obtained mineral abundance maps are further used for the extraction of mineralogical parameters such as mineral association.
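A minimal sketch of the upscaling idea, under invented data (not the paper's spectra or mineralogy): a regressor is trained to map SWIR pixel spectra to SEM-MLA mineral abundances on the small co-registered area, then predicts abundances over every pixel of the core.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
bands = 40
core = rng.random((5000, bands))        # SWIR spectra for the whole drill-core

# Only a small subset of pixels is covered by the SEM-MLA scan.
mla_idx = rng.choice(5000, size=300, replace=False)
# Synthetic "abundance" target driven by two absorption-feature bands.
abundance = 0.6 * core[mla_idx, 10] + 0.4 * core[mla_idx, 25]

# Train on the co-registered subset, predict the quasi-quantitative map
# over the entire core.
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(core[mla_idx], abundance)
core_map = reg.predict(core)
print(core_map.shape)  # (5000,)
```

Support vector or neural network regression would slot into the same train-on-subset, predict-on-core pattern.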


Author(s):  
VLADIMIR NIKULIN ◽  
TIAN-HSIANG HUANG ◽  
GEOFFREY J. MCLACHLAN

The method presented in this paper is novel in its natural combination of two mutually dependent steps. Feature selection is the key first step in our classification system, which was employed during the 2010 International RSCTC data mining (bioinformatics) Challenge. The second step may be implemented using any suitable classifier, such as linear regression, a support vector machine, or a neural network. We conducted leave-one-out (LOO) experiments with several feature selection techniques and classifiers. Based on the LOO evaluations, we decided to use feature selection with a Wilcoxon-based separation criterion for all final submissions. The method was tested successfully during the RSCTC data mining Challenge, where we achieved the top score in the Basic track.
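The two steps can be sketched as follows (a hedged illustration on synthetic data; the exact criterion, subset size, and classifier used in the challenge are not reproduced here): features are ranked by a Wilcoxon rank-sum separation score, and a classifier on the top-ranked subset is evaluated with leave-one-out.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n, p = 60, 200                         # few samples, many features
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)
X[y == 1, :5] += 1.5                   # five genuinely informative features

# Step 1: Wilcoxon rank-sum statistic per feature as a separation score.
scores = np.array([abs(ranksums(X[y == 0, j], X[y == 1, j]).statistic)
                   for j in range(p)])
top = np.argsort(scores)[::-1][:10]    # keep the ten best-separating features

# Step 2: leave-one-out evaluation of a classifier on the selected subset.
acc = cross_val_score(SVC(kernel="linear"), X[:, top], y,
                      cv=LeaveOneOut()).mean()
print(len(top), round(acc, 2))
```

Because the selection score is computed per feature, the ranking itself is cheap even when p is in the tens of thousands, which is what makes LOO sweeps over subset sizes practical.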


2008 ◽  
Vol 136 (3) ◽  
pp. 945-963 ◽  
Author(s):  
Jidong Gao ◽  
Ming Xue

Abstract A new, efficient dual-resolution (DR) data assimilation algorithm is developed based on the ensemble Kalman filter (EnKF) method and tested using simulated radar radial velocity data for a supercell storm. Radar observations are assimilated on both high-resolution and lower-resolution grids using the EnKF algorithm with flow-dependent background error covariances estimated from the lower-resolution ensemble. It is shown that the flow-dependent and dynamically evolved background error covariances thus estimated are effective in producing quality analyses on the high-resolution grid. The DR method has the advantage of significantly reducing the computational cost of the EnKF analysis. In the system, the lower-resolution ensemble provides the flow-dependent background error covariance, while the single high-resolution forecast and analysis provide the benefit of higher resolution, which is important for resolving the internal structures of thunderstorms. The relative smoothness of the covariance obtained from the lower-resolution (4 km) ensemble does not appear to significantly degrade the quality of the analysis, because the cross covariance among different variables is of first-order importance for "retrieving" unobserved variables from the radar radial velocity data. For the DR analysis, an ensemble size of 40 appears to be a reasonable choice with a 4-km horizontal resolution in the ensemble and a 1-km resolution in the high-resolution analysis. Several sensitivity tests show that the DR EnKF system is quite robust to different observation errors. A thinned data resolution of 4 km is an acceptable compromise under the constraint of real-time applications, whereas a data density of 8 km leads to significant degradation in the analysis.
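For readers unfamiliar with the EnKF analysis step that the DR scheme accelerates, here is a minimal single-resolution sketch (not the authors' DR system; the state, observation operator, and error values are invented): the ensemble perturbations supply the flow-dependent background covariance, and each member is updated toward perturbed observations.

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_ens, n_obs = 50, 40, 10

# Linear observation operator: every fifth state component is observed.
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0
R = 0.5 * np.eye(n_obs)                     # observation-error covariance

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
ens = truth[:, None] + rng.normal(0, 1.0, (n_state, n_ens))  # background
y = H @ truth + rng.normal(0, np.sqrt(0.5), n_obs)

# Flow-dependent background covariance from ensemble perturbations.
Xp = ens - ens.mean(axis=1, keepdims=True)
Pb = Xp @ Xp.T / (n_ens - 1)
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)   # Kalman gain

# Stochastic EnKF: update each member with perturbed observations.
y_pert = y[:, None] + rng.normal(0, np.sqrt(0.5), (n_obs, n_ens))
ens_a = ens + K @ (y_pert - H @ ens)
print(ens_a.shape)
```

In the DR variant described above, `Pb` would come from the cheap lower-resolution ensemble while the increment is applied on the high-resolution grid, which is where the cost saving arises.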


2000 ◽  
Vol 12 (11) ◽  
pp. 2655-2684 ◽  
Author(s):  
Manfred Opper ◽  
Ole Winther

We derive a mean-field algorithm for binary classification with Gaussian processes that is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed at no extra computational cost. We show that, from the TAP approach, it is possible to derive both a simpler "naive" mean-field theory and support vector machines (SVMs) as limiting cases. For both mean-field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show that one may obtain state-of-the-art performance by using the leave-one-out estimator for model selection, and that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean-field approach.
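The claim that leave-one-out estimates can come at no extra cost has a well-known exact analogue in GP regression (a related but different setting from the TAP classification estimator above): LOO predictive means follow from the inverse kernel matrix alone, with no refitting. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + 0.1 * rng.normal(size=n)

# Squared-exponential kernel with noise on the diagonal.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 0.01 * np.eye(n)
Kinv = np.linalg.inv(K)
alpha = Kinv @ y

# Closed-form LOO means: mu_i = y_i - alpha_i / (Kinv)_ii.
loo_mean = y - alpha / np.diag(Kinv)

# Brute-force check for the first point: refit without it.
idx = np.arange(1, n)
mu0 = K[0, idx] @ np.linalg.solve(K[np.ix_(idx, idx)], y[idx])
print(np.isclose(loo_mean[0], mu0))  # True
```

The closed form costs one matrix inverse instead of n refits, mirroring the "no extra computational cost" property of the TAP estimator described above.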

