Integration of Image and Laser Scanning Data Based on Selected Example

2014 ◽  
Vol 19 (2-3) ◽  
pp. 37-44 ◽  
Author(s):  
Sławomir Mikrut ◽  
Agnieszka Moskal ◽  
Urszula Marmol

Abstract The paper presents the results of research on the integration of image and laser data, based on a selected example. For several years the authors have been conducting research on processing image data and data obtained from laser scanning in the form of the so-called point cloud. In the experiments, data from terrestrial and mobile laser scanning acquired for two different objects were compared: a parish house from Goźlice, located in the open-air ethnographic museum in the village of Tokarnia, Poland, and part of the Cracow-Warsaw railway line. The results of these experiments proved that data in the form of a point cloud are not always sufficient for precise 3D model reconstruction. Supplementing point clouds with photogrammetric images appears to be the best solution.
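Integration of this kind is typically assessed by measuring how far each photogrammetric point lies from the laser-scan reference cloud. A minimal sketch on small synthetic numpy clouds (hypothetical data; the brute-force nearest-neighbour search stands in for the k-d tree a real tool would use):

```python
import numpy as np

def cloud_to_cloud_distances(reference, compared):
    # Brute-force nearest-neighbour distance from each compared point to the
    # reference cloud (fine for small clouds; production tools use k-d trees).
    diffs = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Hypothetical data: a terrestrial laser scan and a photogrammetric cloud of
# the same surface, the latter perturbed by small simulated noise.
rng = np.random.default_rng(0)
laser = rng.uniform(0.0, 10.0, size=(300, 3))
photo = laser + rng.normal(0.0, 0.02, size=laser.shape)

d = cloud_to_cloud_distances(laser, photo)
print(round(float(d.mean()), 3))
```

Summary statistics of `d` (mean, RMS, maximum) are the usual basis for deciding whether the photogrammetric cloud is accurate enough to fill gaps in the scan.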

Author(s):  
L. Jurjević ◽  
M. Gašparović

Advances in cameras, computers, and algorithms for 3D reconstruction of objects from images have increased the popularity of photogrammetry. Algorithms for 3D model reconstruction are now so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between the reference and produced point clouds. During algorithm testing, the robustness and speed of obtaining 3D data were noted, and the use of this and similar algorithms has considerable potential in real-time applications. For this reason, the research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine, and other fields.
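Camera calibration of the kind described rests on the standard pinhole model with radial distortion. The abstract does not detail the two-step procedure, so the following is only an illustrative sketch of projecting 3D points with hypothetical intrinsics (`f`, `cx`, `cy`) and a hypothetical first radial-distortion coefficient `k1`:

```python
import numpy as np

def project(points, f, cx, cy, k1=0.0, k2=0.0):
    """Pinhole projection with simple radial distortion (k1/k2 terms only)."""
    x = points[:, 0] / points[:, 2]
    y = points[:, 1] / points[:, 2]
    r2 = x ** 2 + y ** 2
    d = 1.0 + k1 * r2 + k2 * r2 ** 2      # radial distortion factor
    u = f * x * d + cx
    v = f * y * d + cy
    return np.stack([u, v], axis=1)

# Hypothetical points (metres, camera frame) and intrinsics (pixels).
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.3, 2.5]])
uv = project(pts, f=1200.0, cx=960.0, cy=540.0, k1=-0.05)
print(uv)
```

Calibration estimates `f`, `cx`, `cy`, `k1`, `k2` by minimising the difference between such projected positions and measured image observations of known targets.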


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation evaluates the influence of the image data format on point clouds generated by a Dense Image Matching process. Furthermore, the effects of different data filters, which are part of the evaluation programs, were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of the generated TIF and JPG data sets. The point clouds generated form the basis of the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height deviations of the dense point clouds were determined using check points. Differences between the results generated by the two software packages could be detected. The reason for these differences is the filtering settings used for the generation of dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.
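The check-point evaluation described above amounts to an RMSE over height differences. A minimal illustration with hypothetical check-point and dense-cloud heights (the real values are not given in the abstract):

```python
import numpy as np

def height_rmse(check_z, cloud_z):
    """RMSE of height deviations between check points and matched cloud points."""
    diff = np.asarray(cloud_z, dtype=float) - np.asarray(check_z, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical check-point heights (m) and interpolated dense-cloud heights (m).
ref = [101.20, 98.75, 100.05]
dim = [101.23, 98.71, 100.08]
rmse = height_rmse(ref, dim)
print(round(rmse, 4))
```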


2012 ◽  
Vol 490-495 ◽  
pp. 143-146
Author(s):  
Miao Gong ◽  
Hao Wang ◽  
Li Wen Wang

This paper presents the 3D model reconstruction of the J34 turbine blade. First, rough point cloud data were collected using visual measuring equipment. Then, the point cloud data were smoothed, filtered, and rationally simplified to finish pre-processing. Finally, Laplacian of Gaussian detection was used to fit the edge of the turbine blade, and the 3D digital model was reconstructed. The results proved that this method improved the smoothness of the model and reduced the time and cost of modeling and machining.
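The Laplacian of Gaussian detector used for edge fitting can be sketched in a few lines. Assuming a synthetic 1-D step profile in place of the real blade data, the zero crossing of the LoG response locates the edge:

```python
import numpy as np

def log_kernel(sigma, radius):
    # 1-D Laplacian-of-Gaussian kernel: second derivative of a Gaussian.
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g

# Synthetic profile with a sharp step standing in for the blade edge.
profile = np.zeros(32)
profile[16:] = 1.0

response = np.convolve(profile, log_kernel(sigma=2.0, radius=8), mode="same")

# The LoG response flips from + to - across the step; the sign change
# between samples 15 and 16 localises the edge.
crossing = int(np.where((response[:-1] > 0) & (response[1:] < 0))[0][0])
print(crossing)
```

On 2-D range images the same idea applies with a 2-D kernel; fitting a curve through the zero crossings yields the blade edge used for reconstruction.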


2014 ◽  
Vol 971-973 ◽  
pp. 1357-1360
Author(s):  
Hong Mei Yu ◽  
Zi Qi Wang

This paper studied the modeling strategy and application features of a rapid surface reconstruction system. Taking the 3D digital model reconstruction of a real mobile phone as an example, non-contact optical 3D scanning was used to acquire point cloud data of the phone, and the phone's CAD model was obtained through data processing in the point stage, polygon stage, and shape stage; the procedures and targets of each stage are discussed in this paper.
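The point stage mentioned above typically includes simplification of the raw scan. One common technique, shown here as a sketch on synthetic numpy data (the paper does not specify its exact simplification method), is voxel-grid downsampling:

```python
import numpy as np

def voxel_downsample(points, voxel, grid=1024):
    """Keep one averaged point per occupied voxel (assumes non-negative coords)."""
    idx = np.floor(points / voxel).astype(np.int64)
    # Collapse the 3-D voxel index to one scalar key per point.
    lin = (idx[:, 0] * grid + idx[:, 1]) * grid + idx[:, 2]
    _, inverse = np.unique(lin, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):  # average the points falling into each voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))   # hypothetical raw scan
small = voxel_downsample(cloud, voxel=0.2)
print(cloud.shape[0], "->", small.shape[0])
```

With a 0.2 voxel over the unit cube, at most 5 x 5 x 5 = 125 averaged points remain, which is the kind of rational simplification that makes the later polygon and shape stages tractable.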


2021 ◽  
Vol 7 ◽  
pp. e529
Author(s):  
Ghada M. Fathy ◽  
Hanan A. Hassan ◽  
Walaa Sheta ◽  
Fatma A. Omara ◽  
Emad Nabil

Occlusion awareness is one of the most challenging problems in several fields such as multimedia, remote sensing, computer vision, and computer graphics. Realistic interaction applications suffer from occlusion and collision problems in dynamic environments. Dense 3D reconstruction methods are the best solution to this issue. However, these methods perform poorly in practical applications due to the absence of accurate depth, camera pose, and object motion. This paper proposes a new framework that builds a full 3D model reconstruction, overcoming the occlusion problem in a complex dynamic scene without using sensor data. Popular devices such as a monocular camera are used to generate a model suitable for video streaming applications. The main objective is to create a smooth and accurate 3D point cloud for a dynamic environment using the cumulative information of a sequence of RGB video frames. The framework is composed of two main phases. The first uses an unsupervised learning technique to predict scene depth, camera pose, and object motion from RGB monocular videos. The second generates a frame-wise point cloud fusion to reconstruct a 3D model from the video frame sequence. Several evaluation metrics are measured: localization error, RMSE, and fitness between the ground truth (KITTI's sparse LiDAR points) and the predicted point cloud. Moreover, the framework was compared with widely used state-of-the-art evaluation methods such as MRE and Chamfer distance. Experimental results showed that the proposed framework surpassed the other methods and proved to be a powerful candidate for 3D model reconstruction.
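The frame-wise fusion in the second phase starts by lifting each predicted depth map into a camera-frame point cloud. A minimal sketch with hypothetical pinhole intrinsics; in the full pipeline each such cloud would then be transformed by the predicted camera pose before accumulation:

```python
import numpy as np

def backproject(depth, f, cx, cy):
    """Lift a depth map to a camera-frame point cloud via the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Hypothetical predicted depth: a flat wall 2 m from the camera.
depth = np.full((4, 4), 2.0)
pts = backproject(depth, f=100.0, cx=2.0, cy=2.0)
print(pts.shape)
```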


Author(s):  
P. Delis ◽  
M. Zacharek ◽  
D. Wierzbicki ◽  
A. Grochala

The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired with a high-resolution camera, the NIKON D800. The research has shown that the highest accuracies are obtained for point clouds generated from video frames on which high-pass filtration and histogram equalization had been performed. Studies have shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
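The histogram equalization applied to each video frame can be sketched in plain numpy, here on a hypothetical low-contrast 8-bit frame (the paper's exact preprocessing parameters are not given):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first occupied intensity level
    # Map the cumulative distribution onto the full 0..255 range.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Hypothetical low-contrast frame: values squeezed into [100, 150].
rng = np.random.default_rng(2)
frame = rng.integers(100, 151, size=(64, 64), dtype=np.uint8)
eq = equalize_histogram(frame)
print(eq.min(), eq.max())
```

Stretching the intensity range in this way gives the dense matcher more texture to work with, which is consistent with the accuracy gains the abstract reports.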


2011 ◽  
Vol 48 (8) ◽  
pp. 081201
Author(s):  
Nguyen Tien Thanh ◽  
Liu Xiuguo (刘修国) ◽  
Wang Hongping (王红平) ◽  
Yu Mingxu (于明旭) ◽  
Zhou Wenhao (周文浩)

Author(s):  
C. Kim ◽  
H. Moon ◽  
W. Lee

To rescue people at a disaster site in time, acquiring information about the current state of collapsed buildings and terrain is very important for disaster-site rescue managers. Based on this information, they can accurately plan the rescue process and remove collapsed buildings or other facilities. However, due to the harsh conditions of disaster areas, rapid and accurate acquisition of disaster-site information is not an easy task. There is a possibility of further damage from collapse, and it is also difficult to acquire information about the current situation because of the large extent of disaster sites and limited rescue resources. To overcome these circumstances, an unmanned aerial vehicle, commonly known as a drone, is used to rapidly and effectively acquire current image data of large disaster areas. Then, the drone-based 3D model reconstruction and visualization functions of the developed system are presented.

