Transparent Collision Visualization of Point Clouds Acquired by Laser Scanning

2019 · Vol 8 (9) · pp. 425
Author(s): Weite Li, Kenya Shigeta, Kyoko Hasegawa, Liang Li, Keiji Yano, ...

In this paper, we propose a method to visualize large-scale colliding point clouds by highlighting their collision areas, and apply it to the visualization of collision simulations. Our method builds on our recent work on precise three-dimensional see-through imaging, i.e., transparent visualization, of large-scale point clouds acquired via laser scanning of three-dimensional objects. We apply the proposed collision visualization method to two applications: (1) the revival of the festival float procession of the Gion Festival in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether the festival floats would collide with houses, billboards, electric wires, or other objects along the original route. (2) Plant simulations based on laser-scanned datasets of existing and new facilities. The advantageous features of our method are the following: (1) transparent visualization with a correct depth feel, which helps to robustly determine the collision areas; (2) the ability to visualize both high-collision-risk areas and real collision areas; and (3) the ability to highlight target visualized areas by increasing the corresponding point densities.
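The abstract does not detail how collision and high-risk areas are determined; a minimal sketch, assuming a nearest-neighbor distance test between the two clouds, might label each point of the moving object by its distance to the environment cloud (the function name and the `risk_dist`/`hit_dist` thresholds below are hypothetical, not the authors' values):

```python
import numpy as np
from scipy.spatial import cKDTree

def collision_areas(cloud_a, cloud_b, risk_dist=0.5, hit_dist=0.05):
    """Label each point of cloud_a by its proximity to cloud_b.

    Points closer than hit_dist are treated as real collisions,
    points within risk_dist as high collision risk.
    Returns labels: 0 = clear, 1 = high risk, 2 = collision.
    """
    tree = cKDTree(cloud_b)
    dist, _ = tree.query(cloud_a, k=1)  # nearest-neighbor distance
    labels = np.zeros(len(cloud_a), dtype=int)
    labels[dist < risk_dist] = 1
    labels[dist < hit_dist] = 2
    return labels
```

The labels could then drive the highlighting stage, e.g., by recoloring or densifying the labeled points.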

2019 · Vol 8 (8) · pp. 343
Author(s): Li, Hasegawa, Nii, Tanaka

Digital archiving of three-dimensional cultural heritage assets has increased the demand for visualization of large-scale point clouds of such assets acquired by laser scanning. We propose a fused transparent visualization method that visualizes a point cloud of a cultural heritage asset in its environment, using a photographic image as the background. We also propose lightness adjustment and color enhancement methods to deal with the reduced visibility caused by the fused visualization. We applied the proposed method to a laser-scanned point cloud of a festival float of high cultural value with complex inner and outer structures. Experimental results demonstrate that the proposed method enables high-quality transparent visualization of the cultural asset in its surrounding environment.
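The abstract does not specify the lightness adjustment or color enhancement formulas; a crude sketch of the idea, assuming a simple linear lightness scale and a push-away-from-gray saturation boost (all function names and parameter values below are illustrative), could look like:

```python
import numpy as np

def adjust_lightness(rgb, factor):
    # Linearly scale the lightness of the background photograph;
    # factor < 1 darkens it so the fused point cloud stays legible.
    return np.clip(rgb * factor, 0.0, 1.0)

def enhance_color(rgb, gain):
    # Push each pixel away from its gray value to boost saturation,
    # compensating for the washed-out look of fused visualization.
    gray = rgb.mean(axis=-1, keepdims=True)
    return np.clip(gray + gain * (rgb - gray), 0.0, 1.0)

def fuse(points_rgba, background_rgb, lightness=0.6, gain=1.4):
    # Alpha-blend the transparently rendered point-cloud image
    # over the adjusted background photograph.
    bg = enhance_color(adjust_lightness(background_rgb, lightness), gain)
    alpha = points_rgba[..., 3:4]
    return points_rgba[..., :3] * alpha + bg * (1.0 - alpha)
```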


Author(s): W. Li, K. Shigeta, K. Hasegawa, L. Li, K. Yano, ...

Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed from measured point clouds. In this paper, we consider the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float would collide with houses, billboards, electric wires, or other objects along the original route. Therefore, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that helps to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as real collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.
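Feature (3), highlighting by increased point density, is not spelled out in the abstract; one plausible sketch duplicates the points of a target region with a small jitter so that the region renders brighter and more opaque under density-dependent transparent rendering (`factor` and `jitter` are illustrative values, not the authors'):

```python
import numpy as np

def highlight_by_density(points, mask, factor=4, jitter=0.01, seed=0):
    """Raise the point density of a highlighted region.

    Points selected by `mask` are duplicated `factor` times with
    small Gaussian jitter; under opacity-by-density rendering the
    region then stands out against the rest of the cloud.
    """
    rng = np.random.default_rng(seed)
    copies = np.repeat(points[mask], factor, axis=0)
    copies = copies + rng.normal(scale=jitter, size=copies.shape)
    return np.vstack([points, copies])
```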


2018 · Vol 8 (2) · pp. 20170048
Author(s): M. I. Disney, M. Boni Vicari, A. Burt, K. Calders, S. L. Lewis, ...

Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods.


2020 · Vol 161 · pp. 124-134
Author(s): Tomomasa Uchida, Kyoko Hasegawa, Liang Li, Motoaki Adachi, Hiroshi Yamaguchi, ...

2020 · Vol 2020 · pp. 1-18
Author(s): Tran Thanh Ha, Taweep Chaisomphob

Mobile LiDAR is an emerging advanced technology for capturing three-dimensional road information at a large scale, effectively and precisely. Pole-like road facilities are crucial street infrastructure, as they provide valuable information for road mapping and road inventory; their automated localization and classification are therefore necessary. This paper proposes a voxel-based method to detect and classify pole-like objects in an expressway environment based on spatial independence and vertical height continuity analysis. First, the ground points are eliminated, and the nonground points are merged into clusters. Second, the pole-like objects are extracted using horizontal cross-section analysis and a minimum vertical height criterion. Finally, a set of knowledge-based rules comprising height features and geometric shape is constructed to classify the detected road poles into different types of road facilities. Two point cloud test sites in an expressway environment, located in Bangkok, Thailand, are used to assess the proposed method. The proposed method extracts the pole-like road facilities from the two datasets with detection rates of 95.1% and 93.5% and overall qualities of 89.7% and 98.0% in the classification stage, respectively. This shows that the algorithm could be a promising alternative for the localization and classification of pole-like road facilities with acceptable accuracy.
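The vertical height continuity step can be sketched as follows: bin nonground points into vertical voxel columns and keep columns whose occupied voxels form a continuous vertical run of at least the minimum pole height (the voxel size and height threshold below are illustrative, not the paper's values):

```python
import numpy as np

def pole_candidates(points, voxel=0.2, min_height=2.0):
    """Voxel-based vertical-continuity test for pole detection.

    points: (N, 3) nonground points. Each point is assigned to a
    voxel column keyed by its (x, y) voxel index; a column whose
    occupied z-voxels form a continuous run spanning at least
    min_height is returned as a pole candidate.
    """
    idx = np.floor(points / voxel).astype(int)
    need = int(min_height / voxel)
    columns = {}
    for x, y, z in idx:
        columns.setdefault((x, y), set()).add(z)
    candidates = []
    for key, zs in columns.items():
        zs = sorted(zs)
        run = best = 1
        for a, b in zip(zs, zs[1:]):   # longest continuous z run
            run = run + 1 if b == a + 1 else 1
            best = max(best, run)
        if best >= need:
            candidates.append(key)
    return candidates
```

The paper's full pipeline adds ground removal, clustering, and horizontal cross-section analysis before this test, and rule-based classification after it.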


Author(s): Lei Wang, Jiaji Wu, Xunyu Liu, Xiaoliang Ma, Jun Cheng

Three-dimensional (3D) semantic segmentation of point clouds is important in many scenarios, such as autonomous driving and robotic navigation, where edge computing makes on-device efficiency indispensable. Deep learning methods based on point sampling prove to be computation- and memory-efficient for tackling large-scale point clouds (e.g., millions of points). However, some local features may be discarded during sampling. In this paper, we present an end-to-end 3D semantic segmentation framework based on dilated nearest neighbor encoding. Instead of down-sampling the point cloud directly, we propose a dilated nearest neighbor encoding module that broadens the network's receptive field to learn more 3D geometric information. Without increasing the number of network parameters, our method is computation- and memory-efficient for large-scale point clouds. We have evaluated the dilated nearest neighbor encoding in two different networks: the first uses random sampling with local feature aggregation, and the second is the Point Transformer. We evaluated the quality of the semantic segmentation on the benchmark 3D dataset S3DIS and demonstrate that the proposed dilated nearest neighbor encoding exhibits stable advantages over baseline and competing methods.
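The core idea of dilated neighbor selection, by analogy with dilated convolutions, can be sketched as: gather the k·d nearest neighbors of each query point but keep only every d-th one, so the same k features cover a wider spatial extent. This brute-force version is only an illustration of the selection rule, not the paper's network module:

```python
import numpy as np

def dilated_knn(points, queries, k=4, d=2):
    """Dilated nearest-neighbor index selection.

    For each query, sort all points by distance, take the k * d
    nearest, and keep every d-th index. With d = 1 this reduces to
    ordinary kNN; larger d widens the receptive field without
    changing k (and hence without adding parameters downstream).
    Returns indices of shape (num_queries, k).
    """
    diff = queries[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    order = np.argsort(dist, axis=-1)[:, : k * d]
    return order[:, ::d]
```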


Author(s): S. Tanaka, K. Hasegawa, N. Okamoto, R. Umegaki, S. Wang, ...

We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10<sup>7</sup> or 10<sup>8</sup> 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
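The stochastic principle behind sorting-free transparency can be illustrated with a toy single-pixel simulation: render many statistically independent subsets of the points with ordinary opaque z-testing, then average the resulting images. Opacity emerges from the probability of a point being kept, with no depth sorting of primitives. This is a sketch of the principle only, not the authors' algorithm; the keep probability `p` and ensemble count are illustrative:

```python
import numpy as np

def stochastic_pixel(points, p=0.5, ensembles=20000, seed=0):
    """Average many opaque stochastic renderings of one pixel.

    points: list of (depth, rgb) covering the same pixel.
    In each ensemble, every point is kept with probability p and
    the z-test picks the nearest kept point (emulated here by
    scanning front to back); averaging the ensembles yields an
    order-independent transparent blend over a white background.
    """
    rng = np.random.default_rng(seed)
    pts = sorted(points)          # front first, to emulate the z-buffer
    background = np.ones(3)
    acc = np.zeros(3)
    for _ in range(ensembles):
        color = background
        for depth, rgb in pts:
            if rng.random() < p:  # point kept in this ensemble
                color = np.asarray(rgb, dtype=float)
                break             # nearest kept point wins the z-test
        acc += color
    return acc / ensembles
```

For a front red point and a back blue point with p = 0.5, the average converges to red weight 0.5, blue weight 0.25, and background weight 0.25, i.e., exactly the blend depth-sorted alpha compositing would give.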



2017 · Vol 55 (9) · pp. 4839-4854
Author(s): Yangbin Lin, Cheng Wang, Bili Chen, Dawei Zai, Jonathan Li
