Extraction of Buildings from Multiple-View Aerial Images Using a Feature-Level-Fusion Strategy

2018 ◽  
Vol 10 (12) ◽  
pp. 1947 ◽  
Author(s):  
Youqiang Dong ◽  
Li Zhang ◽  
Ximin Cui ◽  
Haibin Ai ◽  
Biao Xu

Aerial images are widely used for building detection. However, the performance of building detection methods based on aerial images alone is typically poorer than that of methods using both LiDAR and image data. To overcome these limitations, we present a framework for detecting and regularizing the boundaries of individual buildings using a feature-level-fusion strategy based on features from dense image matching (DIM) point clouds, orthophotos and the original aerial images. The proposed framework is divided into three stages. In the first stage, features from the original aerial images and the DIM points are fused to detect buildings and obtain the so-called blob of an individual building. Then, a feature-level-fusion strategy is applied to match straight-line segments from the original aerial images so that the matched straight-line segments can be used in the later stage. Finally, a new footprint-generation algorithm is proposed to generate the building footprint by combining the matched straight-line segments with the boundary of the blob of the individual building. The performance of our framework is evaluated on a vertical aerial image dataset (Vaihingen) and two oblique aerial image datasets (Potsdam and Lunen). The experimental results reveal 89% to 96% per-area completeness with accuracy of almost 93% or above. Relative to six existing methods, our proposed method is not only more robust but also achieves performance similar to that of methods based on LiDAR and images.
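The footprint-generation idea described above, combining matched straight-line segments with the blob boundary, can be illustrated by snapping boundary vertices onto nearby segments. This is a minimal sketch, not the authors' algorithm; the vertex-wise snapping rule and the `snap_dist` threshold are assumptions made for the illustration:

```python
import math

def project_point_to_segment(p, a, b):
    """Project point p onto segment ab; return projected point and distance."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        q = a
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        q = (ax + t * dx, ay + t * dy)
    return q, math.dist(p, q)

def regularize_boundary(boundary, segments, snap_dist=1.0):
    """Snap each boundary vertex to the nearest matched line segment
    when it lies within snap_dist; otherwise keep the original vertex."""
    out = []
    for p in boundary:
        best_q, best_d = p, snap_dist
        for a, b in segments:
            q, d = project_point_to_segment(p, a, b)
            if d < best_d:
                best_q, best_d = q, d
        out.append(best_q)
    return out
```

A vertex near a matched segment is pulled onto it, while vertices far from every segment are left untouched, preserving the blob's overall shape.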

Author(s):  
C. Chen ◽  
W. Gong ◽  
Y. Hu ◽  
Y. Chen ◽  
Y. Ding

Automated building detection in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the region-based CNN model (R-CNN) for object detection has been receiving increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is captured in a vertical view, and buildings possess a significant directional feature; however, in the R-CNN model, the direction of the building is ignored and the detection results are represented by horizontal rectangles. For this reason, detection results with horizontal rectangles cannot describe buildings precisely. To address this problem, in this paper we propose a novel model with a key feature related to orientation, namely, the Oriented R-CNN (OR-CNN). Our contributions are mainly in the following two aspects: 1) introducing a new oriented layer network for detecting the rotation angle of a building on the basis of the successful VGG-net R-CNN model; 2) proposing the oriented rectangle to leverage the powerful R-CNN for remote-sensing building detection. In the experiments, we establish a complete, brand-new dataset for training our Oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection dataset. We demonstrate state-of-the-art results compared with previous baseline methods.
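An oriented rectangle of the kind OR-CNN predicts can be recovered from a center, size, and rotation angle with standard 2D geometry. The sketch below is a generic illustration of that conversion, not the paper's actual network output format:

```python
import math

def oriented_box_corners(cx, cy, w, h, angle_deg):
    """Corner coordinates of a rotated rectangle given its center,
    width/height, and rotation angle (counter-clockwise, degrees)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    # Corners in the box's local frame, then rotated and translated.
    halves = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in halves]
```

Unlike an axis-aligned box, the four corners follow the building's dominant direction, which is why an oriented representation fits rooftops more tightly.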


Vehicle detection using aerial images plays an important role today and remains most challenging. Video understanding and border security are among the applications of aerial images. Different detection methods have been introduced to improve system performance, but these methods take considerable time in the detection process. To overcome this, convolutional neural networks are introduced, which produce a successful system design. The main intent of this paper is to present a recognition system for aerial images using a convolutional neural network. The proposed method improves accuracy and speed in the detection process. Finally, the aerial image is obtained by matching the image with the textual description of the classes.
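The core operation of such a convolutional network can be shown with a minimal valid-mode 2D cross-correlation in plain Python. Real detectors use optimized library kernels; this is only the elementary building block, shown for illustration:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' of CNNs)
    over nested-list grids; output shrinks by kernel size minus one."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Sum of element-wise products over the kernel window.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

Stacking many such filtered maps, with learned kernels and nonlinearities between layers, is what lets the network recognize vehicles at different positions in the image.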


Author(s):  
F. Alidoost ◽  
H. Arefi ◽  
F. Tombari

Abstract. Automatic detection and extraction of buildings from aerial images are considerable challenges in many applications, including disaster management, navigation, urbanization monitoring, emergency response, and 3D city mapping and reconstruction. However, the most important problem is to precisely localize buildings from single aerial images where no additional information, such as LiDAR point cloud data or high-resolution Digital Surface Models (DSMs), is available. In this paper, a Deep Learning (DL)-based approach is proposed to localize buildings, estimate relative height information, and extract building boundaries using a single aerial image. In order to detect buildings and extract their bounding boxes, a Fully Connected Convolutional Neural Network (FC-CNN) is trained to classify building and non-building objects. We also introduce a novel Multi-Scale Convolutional-Deconvolutional Network (MS-CDN) including skip connection layers to predict normalized DSMs (nDSMs) from a single image. The extracted bounding boxes as well as the predicted nDSMs are then employed by an Active Contour Model (ACM) to provide precise building boundaries. The experiments show that, even with noise in the predicted nDSMs, the proposed method performs well on single aerial images with different building shapes. The quality rate for building detection is about 86% and the RMSE for nDSM prediction is about 4 m. The accuracy of boundary extraction is about 68%. Since the proposed framework is based on a single image, it could be employed for real-time applications.
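How a predicted nDSM can seed building localization is easy to illustrate: grid cells above a height threshold mark candidate buildings, and their extent gives a bounding box for the later contour refinement. The 2.5 m threshold and the nested-list grid below are assumptions of this sketch, not values from the paper:

```python
def building_bbox_from_ndsm(ndsm, height_thresh=2.5):
    """Bounding box (rmin, cmin, rmax, cmax) of grid cells whose
    normalized height exceeds the threshold; None if none do."""
    cells = [(r, c) for r, row in enumerate(ndsm)
             for c, h in enumerate(row) if h > height_thresh]
    if not cells:
        return None
    rs = [r for r, _ in cells]
    cs = [c for _, c in cells]
    return (min(rs), min(cs), max(rs), max(cs))
```

Because the predicted nDSM is noisy (RMSE of about 4 m, per the abstract), such a box is only a coarse seed; the Active Contour Model is what tightens it into a precise boundary.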


2020 ◽  
Vol 12 (9) ◽  
pp. 1404
Author(s):  
Saleh Javadi ◽  
Mattias Dahl ◽  
Mats I. Pettersson

Interest in aerial image analysis has increased owing to recent developments in, and the availability of, aerial imaging technologies such as unmanned aerial vehicles (UAVs), as well as a growing need for autonomous surveillance systems. Varying illumination, intensity noise, and differing viewpoints are among the main challenges to overcome in order to determine changes in aerial images. In this paper, we present a robust method for change detection in aerial images. To accomplish this, the method extracts three-dimensional (3D) features for segmentation of objects above a defined reference surface at each instant. The acquired 3D feature maps from the two measurements are then used to determine changes in a scene over time. In addition, the important parameters that affect measurement, such as the camera's sampling rate, image resolution, the height of the drone, and the pixels' height information, are investigated through a mathematical model. To demonstrate its applicability, the proposed method has been evaluated on aerial images of various real-world locations, and the results are promising. The performance indicates the robustness of the method in addressing the problems of conventional change detection methods, such as intensity differences and shadows.
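One standard relation tying together the parameters mentioned (drone height, image resolution) is the ground sampling distance, GSD = H·p/f, where H is the flying height, p the sensor pixel pitch, and f the focal length. The sketch below illustrates that textbook relation; it is not necessarily the exact mathematical model investigated in the paper:

```python
def ground_sampling_distance(height_m, focal_mm, pixel_pitch_um):
    """Ground sampling distance in metres per pixel:
    GSD = H * p / f, with all quantities converted to metres."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
```

For example, a drone at 100 m with a 50 mm lens and 5 µm pixels resolves about 1 cm of ground per pixel; halving the flying height halves the GSD, which directly affects how small a change the method can detect.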


2013 ◽  
Vol 734-737 ◽  
pp. 3079-3084
Author(s):  
Yin Wen Dong ◽  
Luan Wan ◽  
Zhao Ming Shi ◽  
Jing Xin An

Aiming at the automatic identification of anhydrous bridges in aerial images, an anhydrous bridge recognition algorithm based on geometric characteristics is proposed. First, threshold segmentation is applied to the original image to obtain a binary image. Second, morphological processing is applied to the binary image to obtain a bridge-area-enhanced image and a bridge-area-eroded image, and these two bridge areas are subtracted to extract the suspected bridge area based on the bridge's rectangle feature. Finally, the bridge region is located according to the straight-line characteristics of the bridge. Experimental results show that the proposed algorithm can identify anhydrous bridges accurately and effectively. Key words: aerial image; anhydrous bridge identification; edge detection; straight-line extraction; geometric features
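Subtracting the morphologically enhanced (dilated) image from the eroded one is essentially a morphological gradient, which isolates boundary-like pixels. A minimal sketch with a 3×3 structuring element on a binary grid; clipping the neighbourhood at the image border is an assumption of this sketch:

```python
def dilate(img):
    """3x3 binary dilation; out-of-range neighbours are ignored."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if 0 <= r + dr < h and 0 <= c + dc < w) else 0
             for c in range(w)] for r in range(h)]

def erode(img):
    """3x3 binary erosion; out-of-range neighbours are ignored."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if 0 <= r + dr < h and 0 <= c + dc < w) else 0
             for c in range(w)] for r in range(h)]

def morphological_gradient(img):
    """Dilated minus eroded image: candidate boundary pixels."""
    d, e = dilate(img), erode(img)
    return [[d[r][c] - e[r][c] for c in range(len(img[0]))]
            for r in range(len(img))]
```

On a solid rectangular blob, the gradient keeps only a thin band around the blob's outline, which is what the subsequent straight-line test operates on.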


2014 ◽  
Vol 23 (05) ◽  
pp. 1450071
Author(s):  
DONG-CHUL PARK ◽  
DONG-MIN WOO ◽  
CHANG-SUN KIM ◽  
SOO-YOUNG MIN

An efficient method for rectangular boundary extraction from aerial image data is proposed in this paper. A centroid neural network (CNN) with a metric utilizing line segments is adopted to group low-level line segments for detecting rectangular objects. The proposed method extracts rectangular boundaries of building rooftops from 3D edge images containing various types of noise arising from the stereo matching process. In order to overcome the noise in the 3D edge images, including line segments of shadows, the clustering method exploits the constraint that the heights of a building rooftop are similar, so clustering is performed on candidate 3D line segments with similar heights. Experiments are performed on a set of high-resolution satellite image data. The results show that the proposed method can efficiently remove noisy segments, including shadow lines, and thus find more accurate rectangular boundaries and building information.
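The similar-rooftop-height constraint can be illustrated with a simple greedy grouping of segments by mean height. This stands in for the centroid-neural-network clustering only conceptually; the tolerance value and the data layout are assumptions of the sketch:

```python
def group_segments_by_height(segments, tol=0.5):
    """Greedily group 3D line segments whose heights agree within tol,
    mimicking the same-rooftop-height constraint. Each cluster tracks a
    running mean height, loosely analogous to a centroid."""
    clusters = []  # each: {"mean": running mean height, "members": [...]}
    for seg in sorted(segments, key=lambda s: s["height"]):
        for cluster in clusters:
            if abs(cluster["mean"] - seg["height"]) <= tol:
                cluster["members"].append(seg)
                hs = [m["height"] for m in cluster["members"]]
                cluster["mean"] = sum(hs) / len(hs)
                break
        else:
            clusters.append({"mean": seg["height"], "members": [seg]})
    return clusters
```

Shadow lines lie on the ground, far below rooftop height, so they end up in their own cluster and can be discarded before fitting the rectangle.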


Author(s):  
L. Hoegner ◽  
S. Tuttas ◽  
Y. Xu ◽  
K. Eder ◽  
U. Stilla

This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which implies a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from the RGB and TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from the RGB and TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
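Strategies (ii) and (iv) ultimately reduce to estimating a rigid transform between two point sets. For the 2D-projection case, a single least-squares rotation-plus-translation estimate with known correspondences (the closed-form core of one ICP iteration) can be sketched as follows; this is a generic textbook formulation, not the authors' implementation:

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto
    dst points with known correspondences (one Procrustes/ICP step)."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    a = b = 0.0
    for (x1, y1), (x2, y2) in zip(src, dst):
        u, v = x1 - sx, y1 - sy      # src point, centroid-centred
        p, q = x2 - dx, y2 - dy      # dst point, centroid-centred
        a += u * p + v * q           # sum of dot products
        b += u * q - v * p           # sum of cross products
    theta = math.atan2(b, a)         # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * sx - s * sy)      # translation after rotation
    ty = dy - (s * sx + c * sy)
    return theta, (tx, ty)
```

In full ICP the correspondences are unknown, so the algorithm alternates between nearest-neighbour matching and this closed-form alignment until the residuals stop improving.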


Author(s):  
A. Gruen ◽  
S. Schubiger ◽  
R. Qin ◽  
G. Schrotter ◽  
B. Xiong ◽  
...  

Abstract. This paper reports on an effort to generate LoD3 models of buildings semi-automatically, with the highest possible level of automation. It is work in progress. We use multi-sensor data such as aerial images from a 5-head camera with a GSD of 10 cm, UAV images, and aerial and mobile LiDAR point clouds. We distinguish two cases: LoD2 models are available, or they are not. We apply Multi-Photo Geometrically Constrained Least Squares Matching for different kinds of point measurements. The regularity of many building façades in Singapore leads us to the idea of generalizing the measurement procedure towards using measurement macros (geometrical primitives, i.e. windows, doors, etc.) and combining reality-based with procedural modelling. In parallel, we try to model these façade elements from LiDAR point cloud data. In another research line, we perform building detection with a novel approach to land-cover classification, incorporating features of the façades to improve the classification accuracy. To generate the semantic labels of the façades, we developed a spatially unrelated mean-shift clustering method to yield structurally confined segments. It is characteristic of automated and even semi-automated procedures that the results need some amount of editing. We therefore work on interactive post-editing approaches for CityGML building models containing semantic information for each surface. Maintaining the semantic information throughout the editing process is essential but often lacks support from current tools. Accordingly, we implement a method to synchronize CityGML models. Overall, this project consists of a great number of different algorithmic components, which can only be coarsely explained in this paper.
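Mean-shift clustering, the basis of the semantic-labelling step, iteratively moves each point to the mean of its neighbours within a bandwidth; points that converge to the same mode form one segment. A one-dimensional sketch of the standard flat-kernel version (the paper's spatially unrelated variant differs; the bandwidth and data here are assumptions):

```python
def mean_shift_1d(points, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift on scalar values: each point's estimate is
    repeatedly replaced by the mean of all points within the bandwidth.
    After convergence, points sharing a mode belong to one cluster."""
    modes = list(points)
    for _ in range(iters):
        new = []
        for m in modes:
            neigh = [q for q in points if abs(q - m) <= bandwidth]
            new.append(sum(neigh) / len(neigh) if neigh else m)
        modes = new
    return modes
```

Because the number of clusters is never specified in advance, the same routine adapts to façades with any number of distinct element groups, which is what makes mean shift attractive for segment labelling.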

