Evaluation of Automatic Building Detection Approaches Combining High Resolution Images and LiDAR Data

2011 ◽  
Vol 3 (6) ◽  
pp. 1188-1210 ◽  
Author(s):  
Txomin Hermosilla ◽  
Luis A. Ruiz ◽  
Jorge A. Recio ◽  
Javier Estornell
2018 ◽  
Vol 10 (9) ◽  
pp. 1349 ◽  
Author(s):  
Hui Luo ◽  
Le Wang ◽  
Chen Wu ◽  
Lei Zhang

Impervious surface mapping with high-resolution remote sensing imagery continues to attract interest, as it provides detailed information about urban structure and distribution. Previous studies have suggested that combining LiDAR data with high-resolution imagery yields better impervious surface maps than high-resolution imagery alone. However, because LiDAR acquisition is costly, it is often impossible to obtain LiDAR data acquired at the same time as the high-resolution imagery. Consequently, real landscape changes occurring between the acquisition dates of the multi-sensor data sets introduce misclassification errors into the impervious surface map, an issue generally neglected in previous work. Furthermore, observation differences between the sensors, including misregistration, missing data in the LiDAR coverage, and shadow in the high-resolution images, also hinder the fusion of the two data sources. To resolve these issues, we propose an improved impervious surface mapping method that combines LiDAR data and high-resolution imagery with different acquisition times while accounting for real landscape changes and observation differences. In the proposed method, multi-sensor change detection by supervised multivariate alteration detection (MAD) identifies the changed and misregistered areas. The no-data areas in the LiDAR data and the shadow areas in the high-resolution image are extracted by independent classifications of the corresponding single-sensor data. Finally, an object-based post-classification fusion is proposed that takes advantage of both the independent classification results from the single-sensor data and the joint classification result from the stacked multi-sensor data. The impervious surface map is then obtained by merging the landscape classes of the resulting classification map. Experiments on a study site in Buffalo, NY, USA demonstrate that the method accurately detects landscape changes and clearly improves the performance of impervious surface mapping.
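The MAD step in the abstract above builds change indicators as differences of canonical variates between the two sensors' band stacks. A minimal sketch of the underlying (unsupervised) MAD transform via plain canonical correlation analysis is given below; the function name, array shapes, and the use of whole-sample statistics are illustrative assumptions, not the authors' supervised implementation:

```python
import numpy as np

def mad_variates(X, Y):
    """Multivariate Alteration Detection via canonical correlation analysis.

    X, Y: (n_pixels, n_bands) samples from two acquisitions.
    Returns the MAD variates (differences of paired canonical variates)
    and the canonical correlations; variates with the LOWEST correlation
    carry the most change information.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)   # covariance of X bands
    Syy = Yc.T @ Yc / (n - 1)   # covariance of Y bands
    Sxy = Xc.T @ Yc / (n - 1)   # cross-covariance
    # Canonical directions for X solve: Sxx^-1 Sxy Syy^-1 Syx a = rho^2 a
    A = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    evals, evecs = np.linalg.eig(A)
    evals, evecs = np.real(evals), np.real(evecs)
    order = np.argsort(evals)[::-1]
    a = evecs[:, order]
    rho = np.sqrt(np.clip(evals[order], 1e-12, 1.0))  # canonical correlations
    # Matching directions for Y: b = Syy^-1 Syx a / rho
    b = np.linalg.solve(Syy, Sxy.T) @ a / rho
    return Xc @ a - Yc @ b, rho
```

In the paper's supervised variant the transform statistics are estimated from training pixels known to be unchanged, which makes the change indicators more robust; the whole-sample statistics above are used only for brevity.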


2015 ◽  
Vol 53 (1) ◽  
pp. 45-62 ◽  
Author(s):  
Maryam Teimouri ◽  
Mehdi Mokhtarzade ◽  
Mohammad Javad Valadan Zoej


2021 ◽  
Vol 13 (18) ◽  
pp. 3660
Author(s):  
Sejung Jung ◽  
Won Hee Lee ◽  
Youkyung Han

Building change detection is a critical task in monitoring artificial structures with high-resolution multitemporal images. However, relief displacement, which depends on the azimuth and elevation angles of the sensor, causes numerous false alarms and missed detections of building changes. This study therefore proposes an effective object-based building change detection method that accounts for the sensors' azimuth and elevation angles in high-resolution images. To this end, segmentation images were generated from the high-resolution images using a multiresolution technique, after which object-based building detection was performed. Building candidates were detected from feature information that describes building objects, such as rectangular fit, gray-level co-occurrence matrix (GLCM) homogeneity, and area. Final building detection then considered the positional relationship between building objects and their shadows using the Sun's azimuth angle. Subsequently, change detection on the final building objects was performed with three methods that exploit the relationships between object properties across the images. First, only overlapping objects between the images were considered when detecting changes. Second, the size difference between objects, governed by the sensor's elevation angle, was considered. Third, the direction between objects, governed by the sensor's azimuth angle, was analyzed. To confirm the effectiveness of the proposed approach, two building-dense areas were selected as study sites. Site 1 used bitemporal images from a single sensor (KOMPSAT-3), whereas Site 2 combined multi-sensor images from KOMPSAT-3 and an unmanned aerial vehicle (UAV). The results from both sites showed that adding shadow information yields more accurate building detection than using feature information alone. Furthermore, the results of the three object-based change detection methods were compared and analyzed according to the characteristics of the study area and the sensors. The proposed object-based change detection achieved higher accuracy than the existing building detection methods.
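The third check described above (offset direction between matched objects versus the sensor azimuth) can be sketched as a simple geometric rule: an offset aligned with the sensor's azimuth is likely relief displacement rather than a real change. Everything in this sketch — the function name, tuple layout, thresholds, and decision labels — is a hypothetical illustration, not the paper's implementation:

```python
import math

def classify_offset(c1, c2, sensor_azimuth_deg, max_offset, angle_tol_deg=30.0):
    """Decide whether two building centroids from different acquisitions are
    the same building displaced by relief, or a real change.

    c1, c2: (x, y) centroids with x = easting, y = northing.
    sensor_azimuth_deg: sensor azimuth, clockwise from north; relief
    displacement is assumed to lie along (or opposite to) this direction.
    Thresholds are illustrative, not values from the paper.
    """
    dx = c2[0] - c1[0]
    dy = c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    if dist <= 1e-9:
        return "unchanged"      # same footprint position
    if dist > max_offset:
        return "changed"        # too far apart to be relief displacement
    # Bearing of the offset, clockwise from north, in [0, 360).
    offset_dir = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest signed angle between offset bearing and sensor azimuth.
    diff = abs((offset_dir - sensor_azimuth_deg + 180.0) % 360.0 - 180.0)
    if diff <= angle_tol_deg or abs(diff - 180.0) <= angle_tol_deg:
        return "unchanged"      # offset consistent with relief displacement
    return "changed"
```

The second check (size difference scaled by the sensor's elevation angle) would slot in as an additional area-ratio tolerance before the direction test; it is omitted here to keep the sketch minimal.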

