LAND-USE CLASSIFICATION OF GRAY-SCALE AERIAL IMAGES USING PROBABILISTIC NEURAL NETWORKS

2004 ◽  
Vol 47 (5) ◽  
pp. 1813-1819 ◽  
Author(s):  
D. Ashish ◽  
G. Hoogenboom ◽  
R. W. McClendon
Author(s):  
A. Gujrathi ◽  
C. Yang ◽  
F. Rottensteiner ◽  
K. M. Buddhiraju ◽  
C. Heipke

Abstract. Land use is an important variable in remote sensing: it describes the functions carried out on a piece of land in order to obtain benefits, and it is especially useful to personnel working in urban management and planning. Land use information is maintained by national mapping agencies in geospatial databases. Commonly, land use data are stored as polygon objects, where the label of each object indicates its land use. The main goal of classifying land use objects is to update an existing database in an automatic process. Recently, Convolutional Neural Networks (CNN) have been widely used to tackle this task using high-resolution aerial images (and derived data such as digital surface models). One major challenge in classifying polygons is dealing with the large variation in their geometrical extent. To address this challenge, we adopt the method of Yang et al. (2019) to decompose polygons into regular patches of fixed size. The decomposition leads to two sets of polygons, small and large, where the former suffers from a lower identification rate. In this paper, we propose CNN methods which incorporate dense connectivity and integrate intermediate information via global average pooling to improve land use classification, focusing mainly on small polygons. We present different network variants that incorporate intermediate information via global average pooling from different stages of the network. We test our methods on two sites; our experiments show that dense connectivity and the integration of intermediate information have a positive effect not only on the overall classification accuracy but also on the identification of small polygons.
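The core pooling operation mentioned here, global average pooling of an intermediate feature map, can be illustrated with a minimal NumPy sketch (the dense-connectivity details and network stages are specific to the paper; the function name and toy feature map below are illustrative assumptions):

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse an (H, W, C) feature map to a C-vector of per-channel spatial means."""
    return feature_map.mean(axis=(0, 1))

# toy intermediate feature map: 4x4 spatial grid, 3 channels
fmap = np.arange(48, dtype=float).reshape(4, 4, 3)
pooled = global_average_pool(fmap)  # shape (3,)

# pooled vectors from several network stages can simply be concatenated
# before the final classification layer
combined = np.concatenate([pooled, global_average_pool(fmap * 2)])
```

Because the pooled vector has a fixed length regardless of the spatial extent of the feature map, descriptors from different stages can be combined without resizing.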


Author(s):  
Chun Yang ◽  
Franz Rottensteiner ◽  
Christian Heipke

Land cover describes the physical material of the earth’s surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. First, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder architecture for obtaining dense class predictions. Second, we propose a new CNN-based methodology for predicting the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that overall accuracies of up to 85.7 % and 77.4 % can be achieved for land cover and land use, respectively. The classification of land cover makes a positive contribution to the classification of land use.
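The strategy of generating image patches of identical size can be sketched as non-overlapping tiling with reflective padding (the patch size, padding mode, and tiling scheme below are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def make_patches(image, patch=8):
    """Pad a 2-D image to a multiple of `patch`, then cut non-overlapping tiles."""
    h, w = image.shape[:2]
    ph = (-h) % patch                     # rows needed to reach next multiple
    pw = (-w) % patch                     # columns needed to reach next multiple
    padded = np.pad(image, ((0, ph), (0, pw)), mode="reflect")
    tiles = []
    for i in range(0, padded.shape[0], patch):
        for j in range(0, padded.shape[1], patch):
            tiles.append(padded[i:i + patch, j:j + patch])
    return np.stack(tiles)

img = np.zeros((10, 13))                  # arbitrary-size input
tiles = make_patches(img, patch=8)        # 4 tiles of 8x8
```

Every tile has the same shape, so a single CNN with a fixed input size can classify objects of arbitrary geometrical extent.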


Author(s):  
L. Albert ◽  
F. Rottensteiner ◽  
C. Heipke

Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
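The idea of two layers mutually refining each other can be illustrated with a toy score-mixing loop (the compatibility matrix, mixing weight, and update rule below are simplified stand-ins for the paper's CRF inference, not its actual formulation):

```python
import numpy as np

def iterate(cover_scores, use_scores, compat, iters=2, w=0.3):
    """cover_scores: (n, C_cover) unary scores per super-pixel
    use_scores:   (C_use,) unary scores of the enclosing database object
    compat:       (C_use, C_cover) compatibility of use and cover classes"""
    for _ in range(iters):
        use_label = int(use_scores.argmax())
        # land-use context sharpens the land-cover scores ...
        cover = cover_scores + w * compat[use_label]
        cover_labels = cover.argmax(axis=1)
        # ... and the land-cover label histogram feeds back into land use
        hist = np.bincount(cover_labels, minlength=compat.shape[1])
        use_scores = use_scores + w * compat @ (hist / len(cover_labels))
    return cover_labels, int(use_scores.argmax())

compat = np.eye(2)  # use class i prefers cover class i (toy assumption)
cover_scores = np.array([[0.6, 0.4], [0.45, 0.55], [0.4, 0.6]])
use_scores = np.array([0.2, 0.1])
cover_labels, use_label = iterate(cover_scores, use_scores, compat)
```

In this toy run, the second and third super-pixels would be labelled class 1 on their unary scores alone; the land-use context reverses these initially wrong decisions, which is the behaviour the iterative procedure aims for.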


Author(s):  
C. Yang ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Land use and land cover are two important variables in remote sensing. Commonly, land use information is stored in geospatial databases. In order to update such databases, we present a new approach to determine land cover and to classify land use objects using convolutional neural networks (CNN). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder based CNN is used for land cover classification. We found a composite including the infrared band and height data to outperform RGB images in land cover classification. We also propose a CNN-based methodology for predicting land use labels of objects from geospatial databases, where we use masks representing object shape, the RGB images, and the pixel-wise class scores of land cover as input. For this task, we developed a two-branch network where the first branch considers the whole area of an image, while the second branch focuses on a smaller relevant area. We evaluated our methods on two sites and achieved overall accuracies of up to 89.6% and 81.7% for land cover and land use, respectively. We also tested our land cover classification method on the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%.
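A two-branch design of this kind can be sketched by extracting features from the whole patch and from a cropped relevant area, then fusing them by concatenation (the crop coordinates, mean-pooling features, and fusion scheme are illustrative assumptions standing in for the paper's convolutional branches):

```python
import numpy as np

def two_branch_features(patch, box):
    """patch: (H, W, C) input; box: (r0, r1, c0, c1) relevant area.
    Each branch reduces its input to per-channel means; a final
    classifier would operate on the concatenated descriptor."""
    r0, r1, c0, c1 = box
    whole = patch.mean(axis=(0, 1))                 # branch 1: whole patch
    focus = patch[r0:r1, c0:c1].mean(axis=(0, 1))   # branch 2: relevant area
    return np.concatenate([whole, focus])

patch = np.ones((16, 16, 4))                        # toy 4-channel input
feats = two_branch_features(patch, (4, 12, 4, 12))  # descriptor of length 8
```

The fused descriptor keeps global context from the first branch while the second branch prevents a small object from being drowned out by its surroundings.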


Author(s):  
A. Movia ◽  
A. Beinat ◽  
T. Sandri

Very high resolution (VHR) aerial images can provide detailed analysis of landscape and environment; nowadays, thanks to rapidly developing airborne data acquisition technology, an increasing number of high-resolution datasets are freely available.

In a VHR image the essential information is contained in the red-green-blue colour components (RGB) and in the texture; therefore a preliminary step in image analysis is classification, in order to detect pixels having similar characteristics and to group them into distinct classes. Common land use classification approaches use colour at a first stage, followed by texture analysis, particularly for the evaluation of landscape patterns. Unfortunately, RGB-based classifications are significantly influenced by image settings, such as contrast, saturation, and brightness, and by the presence of shadows in the scene. The classification methods analysed in this work aim to mitigate these effects. The procedures developed consider the use of invariant colour components, image resampling, and the evaluation of an RGB texture parameter for various increasing sizes of a structuring element.

To identify the most efficient solution, the classification vectors obtained were then processed by a K-means unsupervised classifier using different metrics, and the results were compared with corresponding user-supervised classifications.

The experiments performed and discussed in the paper allow us to evaluate the effective contribution of texture information, and to compare the most suitable vector components and metrics for automatic classification of very high resolution RGB aerial images.
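The comparison of metrics in the unsupervised stage can be sketched with a minimal K-means that accepts a pluggable distance function (the L1/L2 metric choices, toy data, and initialisation are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def kmeans(X, k, metric, iters=10, seed=0):
    """Plain K-means with a user-supplied distance function."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # init from data points
    for _ in range(iters):
        d = np.array([[metric(x, c) for c in centers] for x in X])
        labels = d.argmin(axis=1)                       # assign to nearest center
        for j in range(k):                              # recompute centers
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

l1 = lambda a, b: np.abs(a - b).sum()
l2 = lambda a, b: np.sqrt(((a - b) ** 2).sum())
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
labels_l1, _ = kmeans(X, 2, l1)
labels_l2, _ = kmeans(X, 2, l2)
```

Swapping the `metric` argument is all that is needed to compare different distance measures on the same classification vectors.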



2004 ◽  
Vol 34 (1) ◽  
pp. 37-52
Author(s):  
Wiktor Jassem ◽  
Waldemar Grygiel

The mid-frequencies and bandwidths of formants 1–5 were measured at targets, at plus 0.01 s and at minus 0.01 s off the targets of vowels in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to produce a classification of the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added input information from another independent variable were also used, as well as matrices of the numerical data appropriately normalized. Mean scores for classification with respect to phonemes in a multi-speaker design in the testing sets were around 95%, and mean speaker-dependent scores for the phonemes varied between 86% and 100%, with two speakers scoring 100% correct. The individual voices were identified between 95% and 96% of the time, and classifications of the spectra for speaker gender were practically 100% correct.
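A probabilistic neural network of the kind used here is essentially a Parzen-window classifier: each class score is the average of Gaussian kernels centred on that class's training patterns. A minimal sketch (the smoothing parameter and toy 2-D data are illustrative; the paper's inputs are 10-variable formant spectra):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Return the class whose averaged Gaussian kernel density at x is largest."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)     # squared distances to class patterns
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return int(classes[int(np.argmax(scores))])

# toy two-class "spectra": class 0 near the origin, class 1 near (5, 5)
train_X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
train_y = np.array([0, 0, 1, 1])
label = pnn_classify(np.array([0., 0.5]), train_X, train_y)
```

The smoothing parameter sigma plays the role of the PNN's spread: small values make the classifier behave like nearest-neighbour, larger values smooth the class densities.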


Author(s):  
Tamara Vieira Pascoto ◽  
Simone Andrea Furegatti ◽  
Anna Silvia Palcheco Peixoto

There are several factors that directly or indirectly influence erosion processes. To properly understand the behavior of these processes, some factors need to be analyzed together. Determining them incorrectly can compromise the study and result in misguided actions. For this reason, methodologies are continually sought to measure them quantitatively and qualitatively as accurately as possible. Land use is one of the main factors liable to inaccuracies in its determination. To use this parameter in mapping erosive processes, researchers need to delimit, classify, and measure it. To better understand the complexity of considering this parameter, the present study analyzed an erosive feature that, although stabilized, has a component in constant development. Initially, a visual analysis indicated the same land use classification for both conditions, despite their different behaviors, leading to the need for a detailed analysis. Such analysis comprised a historical survey through aerial photos and interviews with residents and city hall employees about the evolution of the feature from 2008 to 2019. It also included the analysis of other influencing factors that could be responsible for this difference in behavior in the area. Two different delineations of the contribution areas of the gully and its branch were also considered: one based only on aerial images, and the other based on the knowledge acquired during the research about the evolution of the feature. It was concluded that an analysis of the land use and occupation factor based only on aerial images can accentuate the inaccuracy of the measurement of this factor.

