Improving trajectory estimation using 3D city models and kinematic point clouds

2021 ◽  
Author(s):  
Lukas Lucks ◽  
Lasse Klingbeil ◽  
Lutz Plümer ◽  
Youness Dehbi
Author(s):  
O. Wysocki ◽  
B. Schwab ◽  
L. Hoegner ◽  
T. H. Kolbe ◽  
U. Stilla

Abstract. The number of connected devices providing unstructured data is rising rapidly. These devices acquire data at an unprecedented temporal and spatial resolution, creating an influx of geoinformation that, however, lacks semantic information. Structured datasets such as semantic 3D city models, in contrast, are widely available and offer rich semantics and high global accuracy, but are represented by rather coarse geometries. While these respective downsides limit the usability of each data type on its own, fusing them can maximize their potential. Since testing and developing automated driving functions is among the foremost challenges, we propose a pipeline that fuses structured datasets (CityGML and HD Map) with unstructured datasets (MLS point clouds) to exploit their respective advantages for the automatic reconstruction of 3D road space models. The pipeline is a parameterized end-to-end solution that integrates segmentation, reconstruction, and modeling tasks while ensuring the geometric and semantic validity of the models. First, the segmentation of point clouds is supported by transferring semantics from the structured to the unstructured dataset. Distinguishing between horizontal- and vertical-like point cloud subsets triggers either further segmentation or immediate refinement, while only models adequately depicted by the point clouds are retained. The classified and filtered point clouds are then used to refine the input 3D model geometries. Building upon this refinement, the semantic enrichment of the 3D models is presented. Deployment in a simulation engine for automated driving research and in a city model database tool underlines the versatility of the possible application areas.
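The horizontal/vertical distinction mentioned above can be sketched as a surface-normal test: points whose normals point roughly upward belong to horizontal-like surfaces (roads, ground), the rest to vertical-like ones (façades, signs). The angle threshold and function name below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def split_by_orientation(points, normals, angle_thresh_deg=45.0):
    """Split a point cloud into horizontal-like and vertical-like subsets
    by the angle between each point normal and the up axis.
    points: (N, 3) coordinates; normals: (N, 3) unit normals.
    The 45-degree threshold is a hypothetical choice for illustration."""
    up = np.array([0.0, 0.0, 1.0])
    # |dot| makes the test robust to flipped normal orientation.
    cos_angle = np.abs(normals @ up)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Normals close to vertical indicate a horizontal surface.
    horizontal_mask = angle < angle_thresh_deg
    return points[horizontal_mask], points[~horizontal_mask]
```

In a pipeline like the one described, each subset would then be handed to a different refinement branch (e.g. road surfaces vs. pole-like objects).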


Author(s):  
G. Bitelli ◽  
V. A. Girelli ◽  
A. Lambertini

3D city models are becoming increasingly popular and important, because they constitute the basis for visualization, planning, and management operations regarding urban infrastructure. Such data are, however, not available in the majority of cities: in this paper, the possibility of using geospatial data of various kinds to generate 3D models in an urban environment is investigated.

In 3D modelling work, the starting data are frequently 3D point clouds, which can nowadays be collected by different sensors mounted on different platforms: LiDAR, imagery from satellite, airborne or unmanned aerial vehicles, and mobile mapping systems that integrate several sensors. The processing of the acquired data, and consequently the production of models that provide geometric accuracy and good visual impact, is limited by time, cost, and logistic constraints.

Nowadays, increasingly innovative hardware and software solutions offer municipalities and public authorities the possibility of using available geospatial data, acquired for other purposes, to generate 3D models of buildings and cities at different levels of detail.

In the paper, two case studies are presented, both regarding surveys carried out in the Emilia-Romagna region, Italy, where 2D or 2.5D numerical maps are available. The first concerns the use of oblique aerial images acquired by the Municipality for a systematic documentation of the built environment; the second concerns the use of LiDAR data acquired for other purposes. In both tests, these data were used in conjunction with large-scale numerical maps to produce 3D city models.


Author(s):  
J. Yan ◽  
S. Zlatanova ◽  
M. Aleksandrov ◽  
A. A. Diakite ◽  
C. Pettit

Abstract. 3D modelling of precincts and cities has advanced significantly in the last decades as we move towards the concept of the Digital Twin. Many 3D city models have been created, but a large portion of them neglect to represent terrain and buildings accurately. Very often the surface is either considered planar or is not represented at all. On the other hand, many Digital Terrain Models (DTM) have been created as 2.5D triangular irregular networks (TIN) or grids for applications such as water management, line-of-sight or shadow computation, tourism, land planning, telecommunication, and military operations and communications. 3D city models need to represent both the 3D objects and the terrain in one consistent model, but many challenges remain. A critical issue when integrating 3D objects and terrain is the identification of the valid intersection between the 2.5D terrain and the 3D objects. Commonly, 3D objects may partially float over or sink into the terrain; the depth of the underground parts might not be known; or the accuracy of the data sets might differ. This paper discusses some of these issues and presents an approach for a consistent 3D reconstruction of LOD1 models on the basis of 3D point clouds, a DTM, and 2D footprints of buildings. Such models are widely used for urban planning, city analytics, and environmental analysis. The proposed method can easily be extended to higher LODs or BIM models.
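The core LOD1 idea can be sketched as draping a 2D footprint onto the terrain and extruding it by a building height (e.g. derived from the point cloud). The `dtm_height` sampler is a hypothetical stand-in for a real DTM query, and anchoring the base at the lowest terrain height under the footprint is just one simple policy for avoiding walls that float above sloped ground:

```python
def lod1_block(footprint_xy, dtm_height, building_height):
    """Sketch of LOD1 reconstruction from a 2D footprint, a terrain
    sampler, and a building height.
    footprint_xy: list of (x, y) vertices of the building footprint.
    dtm_height:   callable (x, y) -> terrain elevation (hypothetical API).
    Returns the bottom and top polygon rings of the extruded block."""
    # Anchor the base at the lowest terrain point under the footprint,
    # so no wall floats above the surface on sloped terrain.
    base_z = min(dtm_height(x, y) for x, y in footprint_xy)
    bottom = [(x, y, base_z) for x, y in footprint_xy]
    top = [(x, y, base_z + building_height) for x, y in footprint_xy]
    return bottom, top
```

A consistent integration, as the paper stresses, would additionally need to handle the terrain/wall intersection explicitly rather than simply burying the block below the lowest terrain point.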


Author(s):  
C. Beil ◽  
T. Kutzner ◽  
B. Schwab ◽  
B. Willenborg ◽  
A. Gawronski ◽  
...  

Abstract. A range of different and increasingly accessible acquisition methods, the possibility of frequent data updates over large areas, and a simple data structure are some of the reasons for the popularity of three-dimensional (3D) point cloud data. While there are multiple techniques for segmenting and classifying point clouds, the capabilities of common data formats such as LAS for providing semantic information are mostly limited to assigning points to a certain category (classification). However, several fields of application, such as digital urban twins used for simulations and analyses, require more detailed semantic knowledge. This can be provided by semantic 3D city models containing hierarchically structured semantic and spatial information. Although semantic models are often reconstructed from point clouds, they are usually geometrically less accurate due to generalization processes. First, point cloud data structures and formats are discussed with respect to their semantic capabilities. Then, a new approach for integrating point clouds with semantic 3D city models is presented, combining the respective advantages of both data types. In addition to elaborate (and established) semantic concepts for several thematic areas, the new version 3.0 of the international Open Geospatial Consortium (OGC) standard CityGML also provides a PointCloud module. This paper shows how CityGML 3.0 can be used to provide semantic structures for point clouds (directly or stored in a separate LAS file). Methods and metrics to automatically assign points to corresponding Level of Detail 2 (LoD2) or LoD3 models are presented. Dataset examples implementing these concepts are provided for download.
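A toy version of such an assignment metric links each point to the nearest model surface within a distance threshold; representing each semantic surface by a single centroid, as below, is a strong simplification of real LoD2/LoD3 geometry (the paper's actual metrics operate on the surfaces themselves), and the threshold value is an assumption:

```python
import numpy as np

def assign_points_to_surfaces(points, surface_centroids, max_dist=0.5):
    """Assign each point to the index of its nearest surface centroid,
    or -1 if no centroid lies within max_dist metres.
    points: (N, 3); surface_centroids: (M, 3). Both illustrative."""
    # (N, M) matrix of point-to-centroid distances via broadcasting.
    d = np.linalg.norm(points[:, None, :] - surface_centroids[None, :, :],
                       axis=2)
    nearest = d.argmin(axis=1)
    # Points too far from every surface stay unclassified.
    nearest[d.min(axis=1) > max_dist] = -1
    return nearest
```

The resulting indices could then be written back as per-point attributes, which is exactly the kind of link the CityGML 3.0 PointCloud module is meant to make explicit.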


Author(s):  
O. Wysocki ◽  
Y. Xu ◽  
U. Stilla

Abstract. Over the years, semantic 3D city models have been created to depict 3D spatial phenomena. Recently, a growing number of mobile laser scanning (MLS) units have been yielding terrestrial point clouds at an unprecedented level of detail. Both dataset types often depict the same 3D spatial phenomenon differently, so their fusion should increase the quality of the captured representation. Yet each dataset has modality-dependent uncertainties that hinder their immediate fusion. Therefore, we present a method for fusing MLS point clouds with semantic 3D building models while accounting for these uncertainties. Specifically, we show the coregistration of MLS point clouds with semantic 3D building models based on expert confidence in the evaluated metadata, quantified by a confidence interval (CI). This step leads to a dynamic adjustment of the CI, which is used to delineate matching bounds for both datasets. Both the coregistration and matching steps serve as priors for a Bayesian network (BayNet) that performs application-dependent identity estimation. The BayNet propagates uncertainties and beliefs throughout the process to estimate final probabilities for confirmed, unmodeled, and other city objects. We conducted promising preliminary experiments on urban MLS and CityGML datasets. Our strategy sets up a framework for the fusion of MLS point clouds and semantic 3D building models, aiding the challenging parallel usage of such datasets in applications such as façade refinement or change detection. To further support this process, we have open-sourced our implementation.
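The belief-propagation step can be illustrated by a single Bayes update, a toy analogue of one BayNet node: given a prior belief that a modeled object is confirmed by the point cloud, and the likelihood of observing a geometric match inside the CI-derived bounds under each hypothesis, compute the posterior. All numbers and likelihoods here are illustrative assumptions, not values from the paper:

```python
def update_identity_belief(prior_confirmed, p_match_given_confirmed,
                           p_match_given_unmodeled, observed_match):
    """One Bayes update for a binary confirmed-vs-unmodeled hypothesis.
    prior_confirmed:          prior P(confirmed)
    p_match_given_confirmed:  P(match within CI bounds | confirmed)
    p_match_given_unmodeled:  P(match within CI bounds | unmodeled)
    observed_match:           whether a match was actually observed."""
    if observed_match:
        num = p_match_given_confirmed * prior_confirmed
        den = num + p_match_given_unmodeled * (1.0 - prior_confirmed)
    else:
        num = (1.0 - p_match_given_confirmed) * prior_confirmed
        den = num + (1.0 - p_match_given_unmodeled) * (1.0 - prior_confirmed)
    return num / den
```

A full BayNet chains such updates across several evidence variables; an observed match raises the belief that the object is confirmed, while a miss lowers it toward the unmodeled/other outcomes.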


Solar Energy ◽  
2017 ◽  
Vol 146 ◽  
pp. 264-275 ◽  
Author(s):  
Laura Romero Rodríguez ◽  
Eric Duminil ◽  
José Sánchez Ramos ◽  
Ursula Eicker

2021 ◽  
Vol 86 ◽  
pp. 101584
Author(s):  
Ankit Palliwal ◽  
Shuang Song ◽  
Hugh Tiang Wah Tan ◽  
Filip Biljecki
