Dynamic multi-sensor platform for efficient 3D-digitalization of cities: part II

Author(s):  
Angel-Ivan Garcia-Moreno

Abstract The digitization of geographic environments, such as cities and archaeological sites, is of priority interest to the scientific community due to its potential applications, but several issues remain to be addressed. Digitization strategies vary and include terrestrial and airborne platforms composed of various sensors, most commonly cameras and laser scanners. A comprehensive methodology is presented for reconstructing urban environments using a mobile terrestrial platform. All implemented stages are described, including the acquisition, processing, and correlation of the data delivered by a Velodyne HDL-64E scanner, a spherical camera, GPS, and inertial systems. The process of merging several point clouds to build a large-scale map is described, as well as the generation of surfaces, making it possible to render large urban areas with a low point density without losing the details of the structures within the urban scenes. The proposal is evaluated using several metrics, for example coverage and Root-Mean-Square Error (RMSE), and the results are compared against three methodologies reported in the literature, yielding better results in the 2D/3D data fusion process and in the generation of surfaces. The described method has a low RMSE (0.79) compared to the other methods and a runtime of approximately 40 seconds per data set (point cloud, panoramic image, and inertial data). In general, the proposed methodology shows a more homogeneous density distribution without losing detail; that is, it preserves the spatial distribution of the points, but with fewer data.
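The RMSE metric used in the evaluation above can be sketched as follows for two clouds with known point correspondences (a generic illustration, not the authors' pipeline; the row-wise correspondence assumption is ours):

```python
import numpy as np

def rmse(reference, reconstructed):
    """Root-Mean-Square Error between corresponding 3D points.

    Both inputs are (N, 3) arrays with row-wise correspondence
    (an assumption for this sketch; the paper's evaluation
    establishes correspondences in its own way)."""
    diff = reference - reconstructed
    # per-point squared Euclidean distance, averaged, then rooted
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```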

Author(s):  
J. Schachtschneider ◽  
C. Brenner

Abstract. The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects which change over time. Since a permanent observation of the environment is impossible and there will always be a first-time visit of an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany, with a Mobile Mapping System over a period of one year in bi-weekly measurements. The data set covers a variety of urban objects and areas, weather conditions, and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results.


Author(s):  
J. Gehrung ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Mobile laser scanning has the potential not only to create detailed representations of urban environments, but also to determine changes at a very detailed level. An environment representation for change detection in large-scale urban environments based on point clouds has drawbacks in terms of memory scalability. Volumes, however, are a promising building block for memory-efficient change detection methods. The challenge of working with 3D occupancy grids is that the usual raycasting-based methods applied for their generation lead to artifacts caused by the traversal of unfavorably discretized space. These artifacts have the potential to distort the state of voxels in close proximity to planar structures. In this work we propose a raycasting approach that utilizes knowledge about planar surfaces to completely prevent these artifacts. To demonstrate the capabilities of our approach, a method for the iterative volumetric approximation of point clouds that speeds up the raycasting by 36 percent is proposed.
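The conventional raycasting whose discretization artifacts the paper addresses can be sketched as follows: voxels along the ray from sensor to return are marked free, and the endpoint voxel occupied. This is a simplified sampling-based traversal for illustration, not the authors' plane-aware method:

```python
import numpy as np

def raycast_update(grid, origin, endpoint, voxel_size=0.1):
    """Mark voxels between a sensor origin and a LiDAR return.

    `grid` is a dict keyed by integer voxel indices. Sampling at
    half-voxel steps approximates the ray traversal; near planar
    surfaces this discretization is exactly what causes the
    artifacts the paper's approach prevents."""
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    n_steps = max(1, int(length / (voxel_size * 0.5)))
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        voxel = tuple(((origin + t * direction) // voxel_size).astype(int))
        grid[voxel] = 'free'
    grid[tuple((endpoint // voxel_size).astype(int))] = 'occupied'
    return grid
```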


Author(s):  
X.-F. Xing ◽  
M. A. Mostafavi ◽  
G. Edwards ◽  
N. Sabo

<p><strong>Abstract.</strong> Automatic semantic segmentation of point clouds observed in a complex 3D urban scene is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithms requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals together with the proposed “directional height above” features, which compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results obtained from our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building, and ground classes in airborne LiDAR point clouds of urban areas.</p>
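The “directional height above” idea can be illustrated roughly as follows: partition a point's horizontal neighborhood into eight angular sectors and record the height difference to neighbors in each. This is a sketch of the concept as described in the abstract, not the authors' exact feature definition; the radius and the choice of the maximum per sector are our assumptions:

```python
import numpy as np

def directional_height_above(points, idx, radius=1.0):
    """Per-sector height-difference feature for point `idx`.

    For each of 8 horizontal sectors around the point, returns the
    largest height offset (dz) among neighbors within `radius`;
    sectors without neighbors stay at 0."""
    offsets = points - points[idx]
    dist = np.linalg.norm(offsets[:, :2], axis=1)
    mask = (dist > 0) & (dist <= radius)          # exclude the point itself
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    sectors = ((angles + np.pi) / (np.pi / 4)).astype(int) % 8
    feature = np.zeros(8)
    for s in range(8):
        sel = mask & (sectors == s)
        if sel.any():
            feature[s] = offsets[sel, 2].max()
    return feature
```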


Author(s):  
G. G. Pessoa ◽  
R. C. Santos ◽  
A. C. Carrilho ◽  
M. Galo ◽  
A. Amorim

<p><strong>Abstract.</strong> Images and LiDAR point clouds are the two major data sources used by the photogrammetry and remote sensing community. Although different, the synergy between these two data sources has motivated exploration of the potential for combining data in various applications, especially for classification and extraction of information in urban environments. Despite the efforts of the scientific community, integrating LiDAR data and images remains a challenging task. For this reason, the development of Unmanned Aerial Vehicles (UAVs), along with the integration and synchronization of positioning receivers, inertial systems and off-the-shelf imaging sensors, has enabled the exploitation of the high-density photogrammetric point cloud (PPC) as an alternative, obviating the need to integrate LiDAR and optical images. This study therefore aims to compare the results of PPC classification in urban scenes considering radiometric-only, geometric-only, and combined radiometric and geometric data applied to the Random Forest algorithm. The following classes were considered: buildings, asphalt, trees, grass, bare soil, sidewalks and power lines, which encompass the most common objects in urban scenes. The classification procedure was performed considering radiometric features (Green band, Red band, NIR band, NDVI and Saturation) and geometric features (Height – nDSM, Linearity, Planarity, Scatter, Anisotropy, Omnivariance and Eigenentropy). The quantitative analyses were performed by means of the classification error matrix using the following metrics: overall accuracy, recall and precision. They show overall accuracies of 0.80, 0.74 and 0.98 for classification considering radiometric, geometric and combined data, respectively.</p>
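Several of the geometric features named above (Linearity, Planarity, Scatter) are standard functions of the eigenvalues of a local covariance matrix. A minimal sketch of these definitions, with the caveat that normalization conventions vary in the literature and the paper's exact formulas are not given in the abstract:

```python
import numpy as np

def eigen_features(neighborhood):
    """Covariance-eigenvalue shape features of one local
    neighborhood of 3D points (an (N, 3) array).

    With eigenvalues l1 >= l2 >= l3 of the 3x3 covariance:
      linearity = (l1 - l2) / l1   (high for linear structures)
      planarity = (l2 - l3) / l1   (high for planar structures)
      scatter   = l3 / l1          (high for volumetric scatter)"""
    cov = np.cov(neighborhood.T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return {
        'linearity': (l1 - l2) / l1,
        'planarity': (l2 - l3) / l1,
        'scatter': l3 / l1,
    }
```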


Author(s):  
Han Hu ◽  
Chongtai Chen ◽  
Bo Wu ◽  
Xiaoxia Yang ◽  
Qing Zhu ◽  
...  

Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost measures, but in textureless areas, where intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes, and must compromise between smoothness and discontinuities. The aim of this study is to provide a method that overcomes these issues in dense image matching by extending the industry-proven Semi-Global Matching through 1) a ternary census transform, which produces three possible outcomes per order comparison and encodes the result in two bits rather than one, and 2) the use of texture information to self-tune the parameters, which together preserve sharp edges and enforce smoothness where necessary. Experimental results using various datasets from different platforms have shown that the visual quality of the triangulated point clouds in urban areas can be largely improved by the proposed methods.
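The two-bit ternary encoding described above can be sketched as follows. The noise threshold `eps` and the inclusion of the center pixel are illustrative choices, not the authors' exact parameters:

```python
import numpy as np

def ternary_census(patch, eps=4):
    """Ternary census transform of a square image patch.

    Each pixel is compared with the patch center; the three
    outcomes (greater / smaller / similar within eps) are encoded
    in two bits, versus one bit for the binary census. The center
    pixel is included for simplicity."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    bits = []
    for v in patch.flatten():
        if v > center + eps:
            bits += [1, 0]   # significantly greater
        elif v < center - eps:
            bits += [0, 1]   # significantly smaller
        else:
            bits += [0, 0]   # similar within noise threshold
    return bits
```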


2019 ◽  
pp. 1049-1070
Author(s):  
Fabian Neuhaus

User data created in digital contexts has increasingly become of interest for analysis, and for spatial analysis in particular. Large-scale computer user management systems, such as digital ticketing and social networking, create vast amounts of data. Such data systems can contain information generated by potentially millions of individuals. This kind of data has been termed big data. The analysis of big data, in its spatial but also its temporal and social dimensions, can be of great interest for analysis in the context of cities and urban areas. This chapter discusses this potential along with a selection of sample work and an in-depth case study. The focus is mainly on the use of insight gained from social media data, especially the Twitter platform, in regard to cities and urban environments. The first part of the chapter discusses a range of examples that make use of big data and the mapping of digital social network data. The second part discusses the way the data is collected and processed. An important section is dedicated to ethical considerations. A summary and an outlook are presented at the end.


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 43 ◽  
Author(s):  
Rendong Wang ◽  
Youchun Xu ◽  
Miguel Angel Sotelo ◽  
Yulin Ma ◽  
Thompson Sarkodie-Gyan ◽  
...  

The registration of point clouds in urban environments faces problems such as dynamic vehicles and pedestrians, changeable road environments, and GPS inaccuracies. State-of-the-art methodologies have usually addressed these problems by combining dynamic object tracking and/or static feature extraction with the point cloud; however, these methodologies still suffer from minor initial position errors. In this paper, the authors propose a fast and robust registration method that does not require the detection of any dynamic and/or static objects and can adapt to larger initial errors. The initial steps of this methodology involve the optimization of the object segmentation under a series of constraints. Based on this, a novel multi-layer nested RANSAC algorithmic framework is proposed to iteratively update the registration results. The robustness and efficiency of this algorithm are demonstrated on several highly dynamic scenes of both short and long time intervals with varying initial offsets. A LiDAR odometry experiment was performed on the KITTI data set and on our extracted urban data set with a highly dynamic urban road; the average horizontal position error relative to the distance traveled was 0.45% and 0.55%, respectively.
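The RANSAC building block that the framework above nests in multiple layers works by repeatedly hypothesizing a transform from a minimal sample and keeping the hypothesis with the most inliers. A minimal single-layer illustration for a pure-translation model (the authors' nested framework and motion model are more elaborate; this sketch only shows the hypothesize-and-score loop):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, tol=0.05, rng=None):
    """Estimate a translation between corresponded point sets
    containing outliers.

    Each iteration hypothesizes a translation from one sampled
    correspondence and counts how many points it explains within
    `tol`; the best-scoring hypothesis wins."""
    rng = np.random.default_rng(rng)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                      # minimal-sample hypothesis
        inliers = int(np.sum(np.linalg.norm(src + t - dst, axis=1) < tol))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```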


2020 ◽  
Vol 12 (11) ◽  
pp. 1875 ◽  
Author(s):  
Jingwei Zhu ◽  
Joachim Gehrung ◽  
Rong Huang ◽  
Björn Borgmann ◽  
Zhenghao Sun ◽  
...  

In the past decade, a vast number of strategies, methods, and algorithms have been developed to explore the semantic interpretation of 3D point clouds for extracting desirable information. To assess the performance of the developed algorithms or methods, public standard benchmark datasets must be introduced and used, serving as indicators and rulers in evaluation and comparison. In this work, we introduce and present large-scale Mobile LiDAR point clouds acquired at the city campus of the Technical University of Munich, which have been manually annotated and can be used for the evaluation of related algorithms and methods for semantic point cloud interpretation. We created three datasets from a measurement campaign conducted in April 2016, including a benchmark dataset for semantic labeling, test data for instance segmentation, and test data for annotated single 360° laser scans. These datasets cover an urban area with approximately 1 km of roadways and include more than 40 million annotated points labeled with eight object classes. Moreover, experiments were carried out and results from several baseline methods were compared and analyzed, revealing the quality of this dataset and its effectiveness for performance evaluation.


2007 ◽  
Vol 7 (6) ◽  
pp. 1657-1670 ◽  
Author(s):  
B. Guinot ◽  
H. Cachier ◽  
K. Oikonomou

Abstract. The aerosol chemical mass closure is revisited and a simple, inexpensive methodology is proposed. This methodology relies on data obtained for aerosol mass, the concentrations of the major ions, and the two main carbon components, organic carbon (OC) and black carbon (BC). Atmospheric particles are separated into coarse (AD>2 μm) and fine (AD<2 μm) fractions, which are treated separately. For the coarse fraction, the carbonaceous component is minor, and the OC-to-POM (Particulate Organic Matter) conversion factor k is fixed at 1.8 to account for secondary species. The coarse soluble calcium is shown to correlate (regression coefficient f, y-axis intercept b) with the missing mass. Conversely, the fine fraction is dominated by organic species, and dust is assumed to have the same f factor as in the coarse mode. The fine-mode mass obtained from chemical analyses is then adjusted to the actual weighed mass by tuning the k conversion factor. The k coefficient is kept different in the two modes because of the expected different origins of the organic particles. Using the f and k coefficients obtained from the data set, mass closure is reached for each individual sample with an undetermined fraction of less than 10%. The procedure has been applied to different urban and peri-urban environments in Europe and in Beijing, and its efficiency and the uncertainties on the f and k values are discussed. The f and k coefficients are shown to provide consistent geochemical indications of aerosol origin and transformations. f allows the dust mass to be retrieved and, since its value reflects the Ca abundance in dust at the site of investigation, it may serve as an indicator of dust origin and of aerosol interactions with anthropogenic acids. f values were found to vary in the 0.08–0.12 range in European urban areas, and over a broader range in Beijing (0.01–0.16).
As expected, k appears to be a relevant proxy for particle origin and ageing, varying in the 1.4–1.8 range. For Beijing, k exhibits high values of about 1.7 in both winter and summer. The winter values suggest that fresh coal aerosol, which was not taken into account in previous works, might be responsible for such a high k value.
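The fine-mode tuning step described above amounts to solving for the k that makes the reconstructed chemical mass match the weighed mass. A simplified sketch, in which the component list, the closure equation, and the bounds are illustrative assumptions rather than the paper's full procedure:

```python
def fine_mode_k(weighed_mass, ions, bc, oc, dust, k_range=(1.2, 2.2)):
    """Tune the OC-to-POM conversion factor k so that
    ions + BC + dust + k * OC equals the weighed fine-mode mass.

    All inputs are mass concentrations in the same units; the
    result is clipped to a plausible range (hypothetical bounds)."""
    k = (weighed_mass - ions - bc - dust) / oc
    return min(max(k, k_range[0]), k_range[1])
```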


Author(s):  
A. Georgopoulos ◽  
C. Oikonomou ◽  
E. Adamopoulos ◽  
E. K. Stathopoulou

When it comes to large-scale mapping of limited areas, especially of cultural heritage sites, requirements become critical. Optical and non-optical sensors, such as LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time, there is an increasing emphasis on solutions that enable users to access 3D information faster and more cheaply. Considering the multitude of platforms and cameras and the advancement of algorithms, in conjunction with the increase in available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of today's UAS technologies is attempted, followed by a discussion of their applicability and advantages, which depend on their widely varying specifications. The available on-board cameras are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As a test data set, we use a rich optical and thermal data set, captured with different cameras from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations. Dense 3D point clouds, digital terrain models, and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

