3D Reconstruction of Building Based on High-Resolution SAR and Optical Images

Author(s):  
Z. Junjie ◽  
D. Chibiao ◽  
Y. Hongjian ◽  
X. Minghong
2021 ◽  
Author(s):  
Karl‐Heinz Herrmann ◽  
Franziska Hoffmann ◽  
Günther Ernst ◽  
David Pertzborn ◽  
Daniela Pelzel ◽  
...  

2021 ◽  
Vol 13 (11) ◽  
pp. 2185
Author(s):  
Yu Tao ◽  
Sylvain Douté ◽  
Jan-Peter Muller ◽  
Susan J. Conway ◽  
Nicolas Thomas ◽  
...  

We introduce a novel ultra-high-resolution Digital Terrain Model (DTM) processing system that combines photogrammetric 3D reconstruction, image co-registration, image super-resolution restoration, shape-from-shading DTM refinement, and 3D co-alignment. Technical details of the method are described, and results are demonstrated using a 4 m/pixel Trace Gas Orbiter Colour and Stereo Surface Imaging System (CaSSIS) panchromatic image and an overlapping 6 m/pixel Mars Reconnaissance Orbiter Context Camera (CTX) stereo pair to produce a 1 m/pixel CaSSIS Super-Resolution Restoration (SRR) DTM for several areas over Oxia Planum on Mars, the landing site of the future ESA ExoMars 2022 Rosalind Franklin rover. Quantitative assessments are made using profile measurements and counts of resolvable craters, in comparison with the publicly available 1 m/pixel High Resolution Imaging Science Experiment (HiRISE) DTM. These assessments demonstrate that the final 1 m/pixel CaSSIS DTM from the proposed processing system achieves 3D reconstruction quality comparable to, and in places more detailed than, the overlapping HiRISE DTM.
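The profile-based assessment mentioned above amounts to comparing elevation samples along a shared transect of two co-aligned DTMs. A minimal sketch, with synthetic stand-in arrays (the actual CaSSIS/HiRISE grids and transects are not reproduced here):

```python
import numpy as np

def profile_rmse(dtm_a, dtm_b, rows, cols):
    """Compare two co-aligned DTMs along the same pixel transect.

    dtm_a, dtm_b : 2D elevation arrays on the same grid (after co-alignment)
    rows, cols   : pixel coordinates of the profile samples
    Returns the RMSE between the two elevation profiles.
    """
    profile_a = dtm_a[rows, cols]
    profile_b = dtm_b[rows, cols]
    return float(np.sqrt(np.mean((profile_a - profile_b) ** 2)))

# Synthetic example: a test DTM that sits 0.5 m above a sloping reference.
reference = np.fromfunction(lambda r, c: 0.1 * r + 0.05 * c, (100, 100))
test_dtm = reference + 0.5
rows = np.arange(100)
cols = np.full(100, 50)
print(profile_rmse(test_dtm, reference, rows, cols))  # constant 0.5 m offset -> RMSE 0.5
```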


2011 ◽  
Vol 17 (S2) ◽  
pp. 966-967 ◽  
Author(s):  
R Schalek ◽  
N Kasthuri ◽  
K Hayworth ◽  
D Berger ◽  
J Tapia ◽  
...  

Extended abstract of a paper presented at Microscopy and Microanalysis 2011 in Nashville, Tennessee, USA, August 7–August 11, 2011.


Author(s):  
J. Fagir ◽  
A. Schubert ◽  
M. Frioud ◽  
D. Henke

The fusion of synthetic aperture radar (SAR) and optical data is an active research area, but image segmentation is rarely addressed. While a few studies use low-resolution nadir-view optical images, we approached the segmentation of SAR and optical images acquired from the same airborne platform, which yields an oblique view with high resolution and thus increased complexity. To overcome the geometric differences, we generated a digital surface model (DSM) from adjacent optical images and used it to project both the DSM and the SAR data into the optical camera frame, followed by segmentation of each channel. The fused segmentation algorithm was found to outperform the single-channel version.
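The projection step described above can be sketched with a simple pinhole camera model; the intrinsics and pose below are hypothetical, and the paper's actual airborne sensor model is certainly more involved:

```python
import numpy as np

def project_points(points_world, R, t, fx, fy, cx, cy):
    """Project 3D DSM points (N x 3, world frame) into the optical
    camera image plane with a pinhole model.

    R, t           : camera rotation (3x3) and translation (3,)
    fx, fy, cx, cy : focal lengths and principal point, in pixels
    Returns N x 2 pixel coordinates.
    """
    cam = points_world @ R.T + t          # world frame -> camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx   # perspective division
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Sanity check: a point 10 m straight ahead of an un-rotated camera
# lands exactly on the principal point.
pts = np.array([[0.0, 0.0, 10.0]])
uv = project_points(pts, np.eye(3), np.zeros(3), 1000.0, 1000.0, 512.0, 384.0)
print(uv)  # [[512. 384.]]
```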


2019 ◽  
Vol 11 (13) ◽  
pp. 1619 ◽  
Author(s):  
Zhou Ya’nan ◽  
Luo Jiancheng ◽  
Feng Li ◽  
Zhou Xiaocheng

Spatial features retrieved from satellite data play an important role in improving crop classification. In this study, we propose a deep-learning-based time-series analysis method that extracts and organizes spatial features to improve parcel-based crop classification using high-resolution optical images and multi-temporal synthetic aperture radar (SAR) data. Central to the method is the use of multiple deep convolutional networks (DCNs) to extract spatial features and a long short-term memory (LSTM) network to organize them. First, a precise farmland parcel map is delineated from the optical images. Second, hundreds of spatial features are retrieved from the preprocessed SAR images using the DCNs and overlaid onto the parcel map to construct a multivariate time series of crop growth for each parcel. Third, LSTM-based network structures organize these time-series features to produce the final parcel-based classification map. The method was applied to a dataset of high-resolution ZY-3 optical images and multi-temporal Sentinel-1A SAR data to classify crop types in Hunan Province, China. The classification results, showing an improvement of more than 5.0% in overall accuracy over methods without spatial features, demonstrate the effectiveness of the proposed method in extracting and organizing spatial features for parcel-based crop classification.
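The overlay step, building a per-parcel time series by aggregating per-acquisition feature maps over each parcel's pixels, can be sketched with a simple zonal mean. The parcel labels and feature maps here are synthetic stand-ins for the actual DCN outputs:

```python
import numpy as np

def parcel_time_series(feature_stack, parcel_map, parcel_ids):
    """Aggregate pixel-level features into per-parcel time series.

    feature_stack : array (T, H, W), one feature map per SAR acquisition
    parcel_map    : array (H, W) of integer parcel labels
    Returns a dict parcel_id -> length-T series of mean feature values.
    """
    series = {}
    for pid in parcel_ids:
        mask = parcel_map == pid
        series[pid] = feature_stack[:, mask].mean(axis=1)
    return series

# Two parcels, three acquisition dates; the feature value equals the date
# index, so each parcel's series should read [0, 1, 2].
parcels = np.array([[1, 1], [2, 2]])
stack = np.stack([np.full((2, 2), t) for t in (0.0, 1.0, 2.0)])
ts = parcel_time_series(stack, parcels, [1, 2])
print(ts[1])  # [0. 1. 2.]
```

Each parcel's series would then be fed to the LSTM as one multivariate sequence (one dimension per DCN feature).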


Author(s):  
Balnarsaiah Battula ◽  
Laxminarayana Parayitam ◽  
T. S. Prasad ◽  
Penta Balakrishna ◽  
Chandrasekhar Patibandla

2000 ◽  
Vol 6 (S2) ◽  
pp. 1148-1149
Author(s):  
U. Ziese ◽  
A.H. Janssen ◽  
T.P. van der Krift ◽  
A.G. van Balen ◽  
W.J. de Ruijter ◽  
...  

Electron tomography is a three-dimensional (3D) imaging method for transmission electron microscopy (TEM) that provides high-resolution 3D images of structural arrangements. Conventional TEM images are, to a first approximation, mere 2D projections of the 3D sample under investigation. With electron tomography, a series of images is acquired while the sample is tilted over a large angular range (±70°) in small angular increments (a so-called tilt series). For the subsequent 3D reconstruction, the images of the tilt series are aligned relative to each other and the 3D reconstruction is computed. Electron tomography is the only technique that can provide true 3D information at nm-scale resolution for individual, unique samples. For (cell) biology and materials science applications, the availability of high-resolution 3D images of structural arrangements within individual samples provides unique architectural information that cannot be obtained otherwise. Routine application of electron tomography would be a major step forward in the characterization of complex materials and cellular arrangements.
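The reconstruction step can be illustrated in 2D with an unfiltered backprojection of a 1D tilt series, a toy stand-in for the weighted backprojection used in practice. Each projection is smeared back across the grid along its viewing direction and the smears are summed over all tilt angles:

```python
import numpy as np

def rotate_nn(img, theta):
    """Nearest-neighbour rotation of a square image about its centre."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # Inverse rotation: sample source coordinates for each output pixel.
    xr = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
    yr = -np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
    yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
    return img[yi, xi]

def backproject(tilt_series, angles, n):
    """Unfiltered backprojection: smear each 1D projection across the
    grid along its viewing direction and average over tilt angles."""
    recon = np.zeros((n, n))
    for proj, theta in zip(tilt_series, angles):
        smear = np.tile(proj, (n, 1))   # constant along the beam direction
        recon += rotate_nn(smear, -theta)
    return recon / len(angles)

# Toy sample: a single point scatterer at the grid centre, imaged over
# a ±70° tilt range in 10° increments.
n = 33
sample = np.zeros((n, n)); sample[n // 2, n // 2] = 1.0
angles = np.deg2rad(np.arange(-70, 71, 10))
series = [rotate_nn(sample, th).sum(axis=0) for th in angles]
recon = backproject(series, angles, n)
print(np.unravel_index(recon.argmax(), recon.shape))  # peak recovered at (16, 16)
```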


2018 ◽  
Vol 10 (9) ◽  
pp. 1459 ◽  
Author(s):  
Ying Sun ◽  
Xinchang Zhang ◽  
Xiaoyang Zhao ◽  
Qinchuan Xin

Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method widely used for building boundary extraction, but it often yields biased boundaries where trees and background mix with buildings. Although classification methods can mitigate this by separating buildings from other objects, they often produce unavoidable salt-and-pepper artifacts. In this paper, we combine robust convolutional neural network (CNN) classification with the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN construction process, whereas the second starts with CNN-based building footprint detection and then applies the ACM for post-processing. Assessments at three levels demonstrate that the proposed methods efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies in terms of F1 score for the first (and second) type of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions are thus effective at extracting building boundaries from high-resolution optical images and LiDAR data.
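The detection-then-post-processing variant can be caricatured with numpy alone: threshold a CNN building-probability map, suppress salt-and-pepper pixels with a majority filter, and trace the remaining boundary pixels. A real ACM would evolve a contour against image energies; this sketch, with a hypothetical probability map, covers only the cleanup-and-boundary part:

```python
import numpy as np

def majority_filter(mask):
    """Suppress isolated salt-and-pepper pixels: a pixel survives only
    if at least 5 of the 9 pixels in its 3x3 neighbourhood (including
    itself) are foreground."""
    m = mask.astype(int)
    p = np.pad(m, 1)
    votes = sum(p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
                for dy in range(3) for dx in range(3))
    return votes >= 5

def boundary(mask):
    """Boundary pixels: foreground with at least one background 4-neighbour."""
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

# Hypothetical probability map: a 4x4 building block plus one isolated
# false-positive pixel far from the building.
prob = np.zeros((10, 10)); prob[2:6, 2:6] = 0.9; prob[8, 8] = 0.95
mask = majority_filter(prob > 0.5)
edge = boundary(mask)
print(mask.sum(), edge.sum())  # noise pixel removed, block corners smoothed: 12 8
```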

