Integrating segmentation and classification accuracy for accuracy assessment in object-based image analysis

Author(s):  
Nan Li ◽  
Hong Huo ◽  
Tao Fang
2021 ◽  
Vol 13 (4) ◽  
pp. 830


Author(s):  
Adam R. Benjamin ◽  
Amr Abd-Elrahman ◽  
Lyn A. Gettys ◽  
Hartwig H. Hochmair ◽  
Kyle Thayer

This study investigates the use of unmanned aerial systems (UAS) mapping for monitoring the efficacy of invasive aquatic vegetation (AV) management on a floating-leaved AV species, Nymphoides cristata (CFH). The study site consists of 48 treatment plots (TPs). Based on six unique flights over two days at three different flight altitudes, using both a multispectral and an RGB sensor, accuracy assessment of the final object-based image analysis (OBIA)-derived classified images yielded overall accuracies ranging from 89.6% to 95.4%. The multispectral sensor was significantly more accurate than the RGB sensor at measuring CFH areal coverage within each TP only at the highest multispectral spatial resolution (2.7 cm/pix at 40 m altitude). When measuring the change in AV community area between the day of treatment and two weeks after treatment, there was no significant difference between the area changes derived from the reference datasets and those derived from either the RGB or the multispectral sensor. Thus, water resource managers need to weigh small gains in accuracy from multispectral sensors against operational considerations such as additional processing time due to larger file sizes, higher financial costs for equipment procurement, and longer flight durations in the field.


2020 ◽  
Vol 12 (11) ◽  
pp. 1772
Author(s):  
Brian Alan Johnson ◽  
Lei Ma

Image segmentation and geographic object-based image analysis (GEOBIA) were proposed around the turn of the century as a means to analyze high-spatial-resolution remote sensing images. Since then, object-based approaches have been used to analyze a wide range of images for numerous applications. In this Editorial, we present some highlights of image segmentation and GEOBIA research from the last two years (2018–2019), including a Special Issue published in the journal Remote Sensing. As a final contribution of this Special Issue, we have shared the views of 45 other researchers (corresponding authors of papers on GEOBIA published in 2018–2019) on the current state and future priorities of this field, gathered through an online survey. Most researchers surveyed acknowledged that image segmentation/GEOBIA approaches have reached a high level of maturity, although they frequently pointed out the need for more free, user-friendly software and tools, further automation, better integration with new machine-learning approaches (including deep learning), and more suitable accuracy assessment methods.


Author(s):  
T. Kavzoglu ◽  
M. Yildiz Erdemir ◽  
H. Tonbul

Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e., groups of pixels) rather than individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels using not only spectral features but also spatial and textural features. Although several parameters (scale, shape, compactness, and band weights) must be set by the analyst, the scale parameter stands out as the most important in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size, and the characteristics of the study area, is crucial to increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for each of eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation on the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA, with the difference in overall accuracy reaching 10%.
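The accuracy metrics named above (overall accuracy and the kappa coefficient) can both be computed directly from a class confusion matrix. A minimal sketch in Python, using hypothetical counts rather than values from the study:

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    Rows are reference classes, columns are predicted classes.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total  # overall accuracy
    # Chance agreement from row/column marginals
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class confusion matrix (not from the paper)
cm = [[50, 2, 3],
      [4, 40, 6],
      [5, 5, 45]]
oa, kappa = accuracy_metrics(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in segmentation experiments like these.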


2018 ◽  
Vol 10 (9) ◽  
pp. 1467 ◽  
Author(s):  
Meghan Halabisky ◽  
Chad Babcock ◽  
L. Moskal

Research related to object-based image analysis has typically relied on data inputs that provide information on the spectral and spatial characteristics of objects, but the temporal domain is far less explored. For some objects that are spectrally similar to other landscape features, their temporal pattern may be their sole defining characteristic. When multiple images are used in object-based image analysis, the analysis is often constrained to a specific number of images selected because they cover the perceived range of temporal variability of the features of interest. Here, we provide a method to identify wetlands from a time series of Landsat imagery by building a Random Forest model that uses each image observation as an explanatory variable. We tested our approach in Douglas County, Washington, USA. Our approach, which exploits the temporal domain, classified wetlands with a high level of accuracy and reduced the number of spectrally similar false positives. We explored how sampling design (i.e., random, stratified, purposive) and temporal resolution (i.e., number of image observations) affected classification accuracy. We found that sampling design introduced bias in different ways but did not have a substantial impact on overall accuracy. We also found that a higher number of image observations improved classification accuracy up to a point, depending on the selection of images used in the model. While time series analysis has been part of pixel-based remote sensing for many decades, with improved computer processing and the increased availability of time series datasets (e.g., the Landsat archive), it is now much easier to incorporate time series into object-based image analysis classification.
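The core idea, treating each image observation in the time series as one explanatory variable for a Random Forest, can be sketched with scikit-learn on synthetic data. The seasonal "wetland" signal and flat "upland" signal below are illustrative stand-ins, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row is one object; each column is one image observation
# (e.g. an index value on one Landsat acquisition date).
n_dates = 24
wetland = 0.4 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, n_dates))  # seasonal
upland = np.full(n_dates, 0.45)  # similar mean, but temporally flat

X = np.vstack([
    wetland + rng.normal(0, 0.05, (100, n_dates)),
    upland + rng.normal(0, 0.05, (100, n_dates)),
])
y = np.repeat([1, 0], 100)  # 1 = wetland, 0 = spectrally similar non-wetland

# One feature per observation date; the forest learns the temporal pattern.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

Because the two classes overlap in any single observation but differ in their temporal trajectory, the classifier separates them only by using the full time series, which is the paper's point.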


2019 ◽  
Vol 11 (5) ◽  
pp. 503 ◽  
Author(s):  
Sachit Rajbhandari ◽  
Jagannath Aryal ◽  
Jon Osborn ◽  
Arko Lucieer ◽  
Robert Musk

Ontology-driven Geographic Object-Based Image Analysis (O-GEOBIA) contributes to the identification of meaningful objects. In fusing data from multiple sensors, the number of feature variables increases and object identification becomes a challenging task. We propose a methodological contribution that extends feature variable characterisation. The method is illustrated with a case study in forest-type mapping in Tasmania, Australia. Satellite images, airborne LiDAR (Light Detection and Ranging), and expert photo-interpretation data are fused for feature extraction and classification. Two machine learning algorithms, Random Forest and Boruta, are used to identify important and relevant feature variables. A variogram is used to describe textural and spatial features, and different variogram features serve as input to rule-based classifications. The rule-based classifications employed (i) spectral features, (ii) vegetation indices, (iii) LiDAR, and (iv) variogram features, and resulted in overall classification accuracies of 77.06%, 78.90%, 73.39%, and 77.06%, respectively. Following data fusion, the use of combined feature variables resulted in a higher classification accuracy (81.65%). Using relevant features extracted with the Boruta algorithm, the classification accuracy was further improved (82.57%). The results demonstrate that the use of relevant variogram features together with spectral and LiDAR features improves classification accuracy.
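An empirical variogram of the kind used here to describe spatial and textural structure is, at its simplest, the mean half-squared difference between values separated by a given lag. A minimal 1-D sketch over a hypothetical pixel transect (not the study's data):

```python
import numpy as np

def empirical_variogram(values, max_lag):
    """Empirical semivariance gamma(h) along a 1-D transect.

    gamma(h) = mean of 0.5 * (z[i+h] - z[i])^2 over all pairs at lag h.
    """
    values = np.asarray(values, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs**2))
    return np.array(gammas)

# A smooth gradient: semivariance grows steadily with lag distance
z = np.linspace(0.0, 1.0, 50)
gamma = empirical_variogram(z, max_lag=10)
```

How quickly gamma rises and where it levels off (the range and sill) summarises texture within an object, which is what makes variogram-derived values usable as classification features.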


2016 ◽  
Vol 42 (3) ◽  
pp. 92-105 ◽  
Author(s):  
Ahmad Hadavand ◽  
Mehdi Mokhtarzadeh ◽  
Mohammad Javad Valadan Zoej ◽  
Saeid Homayouni ◽  
Mohammad Saadatseresht

Object-based image analysis methods have been developed recently and have since become a very active research topic in the remote sensing community, mainly because researchers have begun to study the spatial structures within the data, whereas pixel-based methods use only the spectral content. To evaluate the applicability of object-based image analysis methods for land-cover information extraction from hyperspectral data, a comprehensive comparative analysis was performed. In this study, six supervised classification methods were selected from the pixel-based category: maximum likelihood (ML), Fisher linear likelihood (FLL), support vector machine (SVM), binary encoding (BE), spectral angle mapper (SAM), and spectral information divergence (SID). The classifiers were applied to several features extracted from the original spectral bands in order to avoid the Hughes phenomenon and obtain a sufficient number of training samples. Three supervised and four unsupervised feature extraction methods were used. Pixel-based classification was conducted in the first step of the proposed algorithm, and the effective feature number (EFN) was then obtained. Image objects were thereafter created using the fractal net evolution approach (FNEA), the segmentation method implemented in eCognition software. Several experiments were carried out to find the best segmentation parameters. The classification accuracy of these objects was compared with that of the pixel-based methods. In these experiments, the Pavia University Campus hyperspectral dataset, collected by the ROSIS sensor over an urban area in Italy, was used. The results reveal that, for any combination of feature extraction and classification methods, the object-based methods performed better than the pixel-based ones. Furthermore, statistical analysis of the results shows that, on average, there is almost an 8 percent improvement in classification accuracy when the object-based methods are used.
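Of the pixel-based classifiers listed, the spectral angle mapper (SAM) is simple enough to sketch: it measures the angle between a pixel's spectrum and a reference spectrum, so it responds to spectral shape rather than overall brightness. A minimal version with hypothetical five-band spectra (not from the Pavia dataset):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between two spectra.

    A small angle means similar spectral shape, regardless of
    illumination scaling.
    """
    p = np.asarray(pixel, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos_theta = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards rounding

veg = np.array([0.05, 0.08, 0.04, 0.45, 0.30])  # hypothetical vegetation
veg_bright = 2.0 * veg                          # same shape, brighter pixel
soil = np.array([0.20, 0.25, 0.30, 0.35, 0.38])

angle_same = spectral_angle(veg, veg_bright)  # ~0: scale-invariant
angle_diff = spectral_angle(veg, soil)        # clearly larger angle
```

A pixel is assigned to whichever reference spectrum yields the smallest angle, usually subject to a maximum-angle threshold.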


2018 ◽  
Vol 10 (8) ◽  
pp. 1319 ◽  
Author(s):  
Sarip Hidayat ◽  
Masayuki Matsuoka ◽  
Sumbangan Baja ◽  
Dorothea Agnes Rampisela

Sago palm (Metroxylon sagu) is a palm tree species originating in Indonesia. In the future, this starch-producing tree will play an important role in food security and biodiversity. Local governments have begun to emphasize the sustainable development of sago palm plantations and therefore require near-real-time geospatial information on palm stands. We developed a semi-automated classification scheme for mapping sago palm using machine learning within an object-based image analysis framework with Pleiades-1A imagery. In addition to spectral information, arithmetic, geometric, and textural features were employed to enhance the classification accuracy. Recursive feature elimination was applied to the samples to rank the importance of 26 input features. A support vector machine (SVM) was used to perform the classifications and achieved the highest overall accuracy, 85.00%, after inclusion of the eight most important features: three spectral features, three arithmetic features, and two textural features. The SVM classifier showed normal fitting up to the eighth most important feature. According to the McNemar test results, using the top seven to 14 features provided better classification accuracy. The significance of this research is the revelation of the most important features for distinguishing sago palm from other similar tree species.
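Recursive feature elimination with an SVM, as used here to rank the 26 input features, can be sketched with scikit-learn on synthetic data. The dataset and parameters below are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in: 26 features, only a handful actually informative,
# mirroring the idea of ranking features before classification.
X, y = make_classification(n_samples=300, n_features=26, n_informative=5,
                           n_redundant=3, random_state=0)

# RFE repeatedly fits the estimator and drops the weakest feature;
# a linear-kernel SVM is used because RFE needs feature coefficients.
selector = RFE(SVC(kernel="linear"), n_features_to_select=8).fit(X, y)
top_features = np.flatnonzero(selector.support_)  # indices of the 8 kept
```

`selector.ranking_` gives the full elimination order, which is the kind of importance ranking the abstract describes for its 26 features.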

