A NOVEL BUILDING BOUNDARY EXTRACTION METHOD FOR HIGH-RESOLUTION AERIAL IMAGE

2014 ◽  
Vol 1 (2) ◽  
pp. 19-22 ◽
Author(s):  
Ji-hui TU

2010 ◽  
Vol 39 (5) ◽  
pp. 951-955 ◽  
Author(s):  
ZENG Luan ◽  
ZHAO Zhong-wen ◽  
TAN Jiu-bin

2020 ◽  
Vol 9 (2) ◽  
pp. 109 ◽  
Author(s):  
Bo Cheng ◽  
Shiai Cui ◽  
Xiaoxiao Ma ◽  
Chenbin Liang

Feature extraction of urban areas is one of the most important directions in polarimetric synthetic aperture radar (PolSAR) applications. High-resolution PolSAR images are high-dimensional and nonlinear. Therefore, to find intrinsic features for target recognition, a building area extraction method for PolSAR images based on the Adaptive Neighborhood Selection Neighborhood Preserving Embedding (ANSNPE) algorithm is proposed. First, 52 features are extracted using the gray-level co-occurrence matrix (GLCM) and five polarization decomposition methods; the feature set is divided into 20-, 36-, and 52-dimensional subsets. Next, the ANSNPE algorithm is applied to the training samples, and the resulting projection matrix is used to extract new features from the test image. Lastly, a support vector machine (SVM) classifier and post-processing are used to extract the building area, and the accuracy is evaluated. Comparative experiments conducted on Radarsat-2 data show that the ANSNPE algorithm effectively extracts building areas and generalizes well: the projection matrix obtained from the training data can be applied directly to new samples, and the building area extraction accuracy exceeds 80%. The combination of polarization and texture features provides a wealth of information that is conducive to building area extraction.
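The GLCM step of the feature-extraction stage above can be sketched as follows. This is a minimal pure-Python illustration of computing a gray-level co-occurrence matrix and two classic texture statistics (contrast and energy); function names, the offset, and the number of gray levels are illustrative assumptions, not the paper's implementation, and a real pipeline would use a library routine such as `skimage.feature.graycomatrix`.

```python
# Minimal GLCM sketch (illustrative; not the ANSNPE paper's code).
def glcm(image, dx=1, dy=0, levels=4):
    """Co-occurrence probabilities of gray-level pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Two classic GLCM statistics: contrast and energy (Haralick features)."""
    n = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return contrast, energy

# Toy 4-level image: four homogeneous quadrants.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
p = glcm(img)
contrast, energy = glcm_features(p)
```

In a full pipeline, such statistics computed over several offsets and window sizes, together with the polarization-decomposition features, would form the 52-dimensional feature vector fed to ANSNPE and the SVM.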


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2671 ◽  
Author(s):  
Chunsheng Liu ◽  
Yu Guo ◽  
Shuang Li ◽  
Faliang Chang

The You Only Look Once (YOLO) deep network detects objects quickly with high precision and has been applied successfully to many detection problems. Its main shortcoming is that it usually cannot achieve high precision on small objects in high-resolution images. To overcome this problem, we propose an effective region proposal extraction method for the YOLO network, forming a complete detection structure named ACF-PR-YOLO, and use the cyclist detection problem to demonstrate our method. Instead of directly using the generated region proposals for classification or regression, as most region proposal methods do, we generate large potential regions containing objects for the subsequent deep network. The proposed ACF-PR-YOLO structure has three main parts. First, a region proposal extraction method based on aggregated channel features (ACF) is proposed, called the ACF-based region proposal (ACF-PR) method. In ACF-PR, ACF is used to quickly extract candidates, and a bounding box merging and extending method then merges the bounding boxes into suitable region proposals for the following YOLO network. Second, we design a YOLO network suited to fine detection within the region proposals generated by ACF-PR. Lastly, we design a post-processing step that maps the results of the YOLO network back into the original image, outputting the detection and localization results. Experiments on the Tsinghua-Daimler Cyclist Benchmark, which contains high-resolution images and complex scenes, show that the proposed method outperforms the other tested representative detection methods in average precision, exceeding YOLOv3 by 13.69% and SSD by 25.27% average precision.
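The "merging and extending" step of ACF-PR can be sketched as below: overlapping ACF candidate boxes are fused into their union, and each fused box is then enlarged by a margin so the following YOLO network sees a large region rather than a tight crop. This is a hedged pure-Python sketch; the merge criterion, the `margin` value, and the function names are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch of bounding-box merging and extending (not the
# ACF-PR-YOLO authors' code). Boxes are (x1, y1, x2, y2) tuples.
def overlaps(a, b):
    """True if two boxes intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def merge_boxes(boxes):
    """Repeatedly union any pair of intersecting boxes until stable."""
    boxes = [list(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        out = []
        for b in boxes:
            for o in out:
                if overlaps(b, o):
                    o[0], o[1] = min(o[0], b[0]), min(o[1], b[1])
                    o[2], o[3] = max(o[2], b[2]), max(o[3], b[3])
                    changed = True
                    break
            else:
                out.append(b)
        boxes = out
    return [tuple(b) for b in boxes]

def extend(box, img_w, img_h, margin=0.25):
    """Enlarge a box by a relative margin, clipped to the image bounds."""
    x1, y1, x2, y2 = box
    mw, mh = margin * (x2 - x1), margin * (y2 - y1)
    return (max(0, x1 - mw), max(0, y1 - mh),
            min(img_w, x2 + mw), min(img_h, y2 + mh))

# Two overlapping candidates fuse into one proposal; the third stays apart.
proposals = merge_boxes([(10, 10, 30, 30), (25, 15, 50, 40), (100, 100, 120, 130)])
regions = [extend(b, 200, 200) for b in proposals]
```

Each extended region would then be cropped from the original image and passed to the YOLO network, whose detections are finally mapped back to original-image coordinates in the post-processing step.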


2018 ◽  
Vol 10 (9) ◽  
pp. 1459 ◽  
Author(s):  
Ying Sun ◽  
Xinchang Zhang ◽  
Xiaoyang Zhao ◽  
Qinchuan Xin

Identifying and extracting building boundaries from remote sensing data has been a hot topic in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often yields biased boundaries due to mixtures of trees and background. Although classification methods can mitigate this efficiently by separating buildings from other objects, they often suffer from unavoidable salt-and-pepper artifacts. In this paper, we combine robust convolutional neural network (CNN) classification with the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN training process, whereas the second first detects building footprints with a CNN and then applies the ACM for post-processing. Assessments at three levels demonstrate that the proposed methods efficiently extract building boundaries in five test scenes from two datasets. The mean F1 scores for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions are effective at extracting building boundaries from high-resolution optical images and LiDAR data.
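The second experiment's flow (CNN footprint detection, then contour-based boundary extraction) can be sketched in miniature. Here the CNN output is stood in for by a binary building mask, and the ACM evolution is replaced by a trivial boundary trace; a real implementation would evolve an active contour (e.g. a Chan-Vese model, as in `skimage.segmentation.morphological_chan_vese`). All names below are illustrative assumptions.

```python
# Toy boundary extraction from a binary building mask (illustrative; a
# real pipeline would refine the CNN mask with an active contour model).
def boundary_pixels(mask):
    """Foreground pixels that touch a background or image-border 4-neighbor."""
    h, w = len(mask), len(mask[0])
    boundary = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    boundary.add((y, x))
                    break
    return boundary

# 5x5 mask with a 3x3 "building" in the center: the boundary is the
# 8-pixel ring around the single interior pixel (2, 2).
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
contour = boundary_pixels(mask)
```

In the paper's pipeline, this boundary (or the mask itself) would serve as the initialization for the ACM, which then evolves toward the true building outline in the image.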

