Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

Author(s):  
Luís F. E. S. C. Marques ◽  
Josep Roca ◽  
José A. Tenedório
Author(s):  
I.-C. Lee ◽  
F. Tsai

A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour-guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. <br><br> In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing the guide map or floor plan commonly used in an online tour-guiding system. 
<br><br> The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, this is currently a manual and labor-intensive process. Research is being carried out to increase the degree of automation of these procedures.
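The step that makes panoramas usable for close-range photogrammetry is mapping each panorama pixel to a viewing ray. For an equirectangular 720° panorama this mapping is purely geometric; a minimal NumPy sketch (a hypothetical helper, not the authors' code, which also models radial distortion):

```python
import numpy as np

def panorama_pixel_to_ray(u, v, width, height):
    """Convert a pixel (u, v) on an equirectangular panorama into a unit
    direction vector in the panorama's local frame. Longitude spans
    [-pi, pi) across the width, latitude [pi/2, -pi/2] down the height."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# The image centre maps to the forward-looking direction (0, 0, 1).
ray = panorama_pixel_to_ray(2048, 1024, 4096, 2048)
```

Rays from two or more panorama stations can then be intersected to triangulate 3D points, which is what the structure-from-motion step does at scale.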


Author(s):  
A.-M. Boutsi ◽  
S. Verykokou ◽  
S. Soile ◽  
C. Ioannidis

Abstract. Augmented Reality (AR) is more than an added value for Cultural Heritage (CH); it is vital for its sustainability, promotion and dissemination, increasing accessibility to CH even during difficult periods of time, like the Covid-19 pandemic. In order to be meaningful and engaging, an AR application should have the following characteristics: ease of use, high-quality representations and compatibility. This paper presents a marker-less mobile AR application for the display and inspection of high-resolution 3D cultural assets, overlaid on a particular location in the real-world scene. Instead of predefined markers, an image captured by the user is exploited as a pattern for real-time feature matching, pose estimation and scene augmentation. Our approach is based on pure computer vision and photogrammetric techniques, implemented using native C++ and Java code for Android mobile platforms. It is built with the use of the OpenCV library and the OpenGL ES graphics API without any dependencies on AR Software Development Kits (SDKs). Therefore, it supports cross-vendor portability across mobile device models and hardware specifications. The evaluation of the developed application examines the performance of various matching techniques and the overall responsiveness of processing and 3D rendering on mid-range and low-end smartphones. The results showcase the reliability and responsiveness of the pattern recognition as well as the potential of the 3D graphics engine to render and overlay complex 3D models, balancing visual quality and time. The proposed methodology is applied to the Ciborium of the church of St. Charalabos, located at St. Stephen’s Monastery in Meteora, Greece.
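The real-time feature matching such a marker-less pipeline relies on is typically a nearest-neighbour descriptor search filtered with Lowe's ratio test. A NumPy sketch with synthetic descriptors, standing in for the OpenCV matchers the application actually uses:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.
    desc_a: (N, D) query descriptors; desc_b: (M, D) pattern descriptors.
    Returns (i, j) index pairs whose best match is clearly better
    than the runner-up."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only unambiguous matches: best distance well below second-best.
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Two synthetic descriptors each find a close, unambiguous partner.
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])
matches = ratio_test_match(desc_a, desc_b)
```

The surviving correspondences then feed pose estimation (e.g. PnP with RANSAC) to anchor the 3D model in the scene.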


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Łukasz Halik ◽  
Maciej Smaczyński ◽  
Beata Medyńska-Gulij

<p><strong>Abstract.</strong> The attempt to work out a geomatic workflow for transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAV) into a digital terrain model (DTM) and implementing the 3D model in the augmented reality (AR) system constitutes the main problem discussed in this article. The authors suggest the workflow demonstrated in Fig. 1.</p><p>The series of pictures obtained by means of a UAV equipped with an HD camera was the source of data worked up into the final geovisualization. The series was then processed, and a few point clouds were isolated from it, later used for generating test 3D models.</p><p>The practical aim of the research was to work out, on the basis of the UAV pictures, a 3D geovisualization in the AR system depicting a heap of natural aggregate of irregular shape. The subsequent aim was to verify the accuracy of the produced 3D model. The object of the study was a natural aggregate heap of irregular shape with height differences of up to 11 meters.</p><p>Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20&thinsp;000 m² area. The several-centimeter differences observed between the control points in the field and those from the model corroborate the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. The method of transforming pictures into a point cloud, subsequently transformed into 3D models, was employed in the research, resulting in a scheme depicting the technological sequence of creating a 3D geovisualization in the AR system. The geovisualization can be viewed through a specially developed mobile application for smartphones.</p>
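The point-cloud-to-DTM step of such a workflow can be illustrated with a minimal gridding pass: bin the points into cells and keep the lowest elevation per cell. This is a stand-in sketch, not the authors' actual processing chain:

```python
import numpy as np

def grid_dtm(points, cell=1.0):
    """Rasterize an (N, 3) point cloud into a simple DTM grid by taking
    the lowest Z in each cell. Cells with no points remain NaN."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)   # per-point cell index
    shape = idx.max(axis=0) + 1
    dtm = np.full(shape, np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(dtm[i, j]) or z < dtm[i, j]:
            dtm[i, j] = z
    return dtm

# Three points, 1 m cells: two fall in the same cell, the lower Z wins.
pts = np.array([[0.0, 0.0, 5.0], [0.5, 0.5, 3.0], [1.5, 0.0, 7.0]])
dtm = grid_dtm(pts, cell=1.0)
```

Production DTM generation additionally interpolates gaps and filters off-terrain points, but the binning above is the core rasterization idea.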


Author(s):  
B. Alizadehashrafi ◽  
A. Abdul-Rahman

In this research project, many videos of UTM Kolej 9, Skudai, Johor Bahru (See Figure 1) were captured with an AR.Drone 2.0. Since the AR.Drone 2.0 has a liquid lens, there were significant distortions and deformations in the frames extracted from the videos while flying. Passive remote sensing (RS) applications based on image matching and epipolar lines, such as Agisoft PhotoScan, were tested to create the point clouds and mesh along with 3D models and textures. As the result was not acceptable (See Figure 2), the previous Dynamic Pulse Function, based on the Ruby programming language, was enhanced and utilized to create the 3D models automatically in LoD3. The accuracy of the final 3D model is approximately 10 to 20 cm. After rectification and parallel projection of the photos based on some tie points and targets, all the parameters were measured and used as input to the system to create the 3D model automatically in LoD3 with very high accuracy.


Author(s):  
E. Maltezos ◽  
C. Ioannidis

This study aims to detect building points automatically: (a) from a LIDAR point cloud, using simple filtering techniques that enhance the geometric properties of each point, and (b) from a point cloud extracted by applying dense image matching to high-resolution colour-infrared (CIR) digital aerial imagery using the semi-global matching (SGM) stereo method. In the first step, the vegetation is removed. For the LIDAR point cloud, two different methods are implemented and evaluated, using first the normals and then the roughness values: (1) the proposed scan-line smooth filtering with a thresholding process, and (2) bilateral filtering with a thresholding process. For the CIR point cloud, a variation of the normalized difference vegetation index (NDVI) is computed for the same purpose. Afterwards, the bare earth is extracted using a morphological operator and removed from the rest of the scene so as to retain the building points. The results of the buildings extracted by each approach in an urban area in northern Greece are evaluated using an existing orthoimage as reference; the results are also compared with the corresponding classified buildings extracted by two commercial software packages. Finally, in order to verify the utility and functionality of the extracted building points that achieved the best accuracy, 3D models in terms of Level of Detail 1 (LoD 1) and a 3D building change detection process are indicatively performed on a sub-region of the overall scene.
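The vegetation-removal step for the CIR point cloud can be sketched with the plain NDVI (the paper uses a variation of it): compute the index from the near-infrared and red channels attached to each point, then threshold it to flag vegetation for removal.

```python
import numpy as np

def cir_vegetation_mask(nir, red, threshold=0.3):
    """NDVI-style vegetation flag from per-point NIR and red reflectance.
    Healthy vegetation reflects strongly in NIR, so NDVI rises well
    above typical values for buildings and bare ground."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero
    return ndvi > threshold

# One vegetated point, one bare-ground point, one roof point.
nir = np.array([0.8, 0.4, 0.2])
red = np.array([0.1, 0.35, 0.25])
mask = cir_vegetation_mask(nir, red)
```

The threshold value is scene-dependent; 0.3 here is only a common illustrative choice, not a figure from the paper.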


Author(s):  
G. Mandlburger

In recent years, the tremendous progress in image processing and camera technology has reactivated interest in photogrammetry-based surface mapping. With the advent of Dense Image Matching (DIM), the derivation of height values on a per-pixel basis became feasible, allowing the derivation of Digital Elevation Models (DEM) with a spatial resolution in the range of the ground sampling distance of the aerial images, which today is often below 10&thinsp;cm. While mapping topography and vegetation constitutes the primary field of application for image-based surface reconstruction, multi-spectral images also allow seeing through the water surface to the bottom underneath, provided sufficient water clarity. In this contribution, the feasibility of through-water dense image matching for mapping shallow-water bathymetry using off-the-shelf software is evaluated. In a case study, the SURE software is applied to three different coastal and inland water bodies. After refraction correction, the DIM point clouds and the DEMs derived from them are compared to concurrently acquired laser bathymetry data. The results confirm the general suitability of through-water dense image matching, but sufficient bottom texture and favorable environmental conditions (clear water, calm water surface) are preconditions for achieving accurate results. Water depths of up to 5&thinsp;m could be mapped with a mean deviation between laser and through-water DIM in the dm range. Image-based water depth estimates, however, become unreliable in the case of turbid or wavy water and poor bottom texture.
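The refraction correction mentioned above follows from Snell's law at the air-water interface: the bottom appears shallower than it is, and for a near-nadir ray the true depth is roughly the apparent depth times the refractive index of water. A simplified sketch (the full workflow traces each image ray individually; this single-ray approximation is illustrative only):

```python
import math

N_WATER = 1.34  # approximate refractive index of water

def refraction_corrected_depth(apparent_depth, incidence_deg=0.0, n=N_WATER):
    """Correct an image-derived apparent water depth for refraction.
    For a nadir ray the correction reduces to multiplying by n; for an
    oblique ray, matching the horizontal displacement at the bottom gives
    the tan-ratio form."""
    theta_air = math.radians(incidence_deg)
    if theta_air == 0.0:
        return apparent_depth * n
    theta_water = math.asin(math.sin(theta_air) / n)  # Snell's law
    return apparent_depth * math.tan(theta_air) / math.tan(theta_water)

# A 2 m apparent depth at nadir corrects to 2.68 m.
d_nadir = refraction_corrected_depth(2.0)
```

For oblique viewing the correction factor grows beyond n, which is one reason near-nadir imagery is preferred for through-water matching.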


Author(s):  
J. Zhu ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

<p><strong>Abstract.</strong> In this work, we discuss how to directly combine thermal infrared (TIR) images and point clouds without additional assistance from GCPs or 3D models. Specifically, we propose a point-based co-registration process for combining the TIR images and the point cloud of buildings. Keypoints are extracted from the images and point clouds via primitive segmentation and corner detection; pairs of corresponding points are then identified manually. After that, the camera pose can be estimated with the EPnP algorithm. Finally, a point cloud with thermal information provided by the IR images can be generated as a result, which is helpful in tasks such as energy inspection, leakage detection, and abnormal condition monitoring. This paper provides more insight into the feasibility of, and ideas for, combining TIR images and point clouds.</p>
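Once the camera pose is known from EPnP, attaching thermal values to the point cloud amounts to projecting each 3D point into the TIR image with the pinhole model x = K[R|t]X and sampling the temperature there. A minimal sketch with assumed intrinsics (the matrix values are illustrative, not from the paper):

```python
import numpy as np

def project_points(points_xyz, R, t, K):
    """Project (N, 3) world points into image pixels with a pinhole model.
    R, t: estimated camera rotation and translation; K: intrinsics.
    Returns (N, 2) pixel coordinates."""
    cam = points_xyz @ R.T + t    # world frame -> camera frame
    uvw = cam @ K.T               # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

# Identity pose with example intrinsics: a point on the optical axis
# projects onto the principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_points(np.array([[0.0, 0.0, 2.0]]), R, t, K)
```

A real pipeline would also check visibility (occlusion) and lens distortion before sampling the TIR intensity at each projected pixel.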


2020 ◽  
Vol 4 (1) ◽  
pp. 11-22
Author(s):  
Deli Deli

Implementation of Augmented Reality for the Earth Layer Structure on Android as a Learning Medium is a research project that aims to help present material to elementary school children. The research method chosen for this study is the 4D method (Define, Design, Develop and Disseminate), with data collected using the Technology Acceptance Model (TAM), built on one construct with three dimensions of user assessment of technology acceptance to support the questionnaire design. The AR design is supported by 3D models in order to convey the details of each explanation of the material, thus helping users understand the material and easing interaction with the media. The final result of this research is that the application is stated to be able to help the school: it is used as a display medium in the classroom, so students do not need to rely on imagination alone but can be presented the material simply by using the learning media. Keywords: Learning Media, 4D Method, User Acceptance Test, Augmented Reality, Android.


Author(s):  
Y. Q. Dong ◽  
L. Zhang ◽  
X. M. Cui ◽  
H. B. Ai

Although many filter algorithms have been presented over the past decades, they are usually designed for LiDAR point clouds and cannot completely separate the ground points from DIM (dense image matching) point clouds derived from oblique aerial images, owing to the high density and variation of DIM point clouds. To solve this problem, a new automatic filter algorithm is developed on the basis of adaptive TIN models. First, the differences between LiDAR and DIM point clouds that influence the filtering results are analysed in this paper. To avoid the influence of plants, which DIM point clouds cannot penetrate, during the seed-point search, the algorithm makes use of building facades to obtain ground points located on roads as seed points and constructs the initial TIN. Then a new densification strategy is applied to deal with the problem that, in other methods, the densification thresholds do not change in each iterative process. Finally, we use DIM point clouds of Potsdam produced by PhotoScan to evaluate the method proposed in this paper. The experimental results show that the proposed method can not only completely separate the ground points from the DIM point clouds but also obtain better filtering results than TerraSolid.
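The acceptance test at the heart of TIN-densification filtering is geometric: a candidate point joins the ground set only if it lies close enough to the plane of the TIN facet it falls in (with the thresholds adapted per iteration in the method above). A sketch of that distance test alone, not the full adaptive algorithm:

```python
import numpy as np

def point_to_facet_distance(p, a, b, c):
    """Perpendicular distance from candidate point p to the plane of a
    TIN facet (a, b, c). In progressive TIN densification, p is accepted
    as ground when this distance (and the associated angles) fall below
    the current thresholds."""
    n = np.cross(b - a, c - a)       # facet normal
    n = n / np.linalg.norm(n)
    return abs(np.dot(p - a, n))

# A horizontal facet at z = 0; a point 0.5 m above it.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
d = point_to_facet_distance(np.array([0.2, 0.2, 0.5]), a, b, c)
```

Accepted points are inserted into the TIN, the triangulation is rebuilt, and the test repeats until no more points qualify.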


Author(s):  
L. Zhang ◽  
P. van Oosterom ◽  
H. Liu

Abstract. Point clouds have become one of the most popular sources of data in geospatial fields due to their availability and flexibility. However, because of the large amount of data and the limited resources of mobile devices, the use of point clouds in mobile Augmented Reality (AR) applications is still quite limited. Many current mobile AR applications of point clouds lack fluent interaction with users. In this paper, a cLoD (continuous level-of-detail) method is introduced to considerably reduce the number of points to be rendered, together with an adaptive point-size rendering strategy, thus improving rendering performance and removing visual artifacts in mobile AR point cloud applications. Our method uses a cLoD model with an ideal distribution over LoDs, which can remove unnecessary points without the sudden changes in density present in the commonly used discrete level-of-detail approaches. Besides, the camera position, orientation and distance from the camera to the point cloud model are taken into consideration as well. With our method, good interactive visualization of point clouds can be realized in the mobile AR environment, with both good visual quality and proper resource consumption.
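The cLoD idea can be sketched in two steps: assign every point a continuous level drawn from a distribution whose density doubles per level in each planar dimension (so each extra level quadruples the point count), then at render time keep only points whose level is below a cutoff that decreases with distance from the camera. This is an illustrative reconstruction of the concept under those assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_clod_levels(n_points, max_level=10.0):
    """Draw a continuous LoD value per point with density f(l) ~ 4**l
    on [0, max_level], via inverse-transform sampling of the CDF."""
    u = rng.random(n_points)
    return np.log(1.0 + u * (4.0 ** max_level - 1.0)) / np.log(4.0)

def visible_mask(levels, dist_to_camera, near=1.0, max_level=10.0):
    """Keep points whose level is below a distance-dependent cutoff:
    every doubling of distance drops one level, so nearby regions
    render denser than far ones, with no discrete LoD 'popping'."""
    cutoff = max_level - np.log2(np.maximum(dist_to_camera / near, 1.0))
    return levels <= cutoff

levels = assign_clod_levels(1000)
# Two points at level 9.5: visible at 1 m (cutoff 10), culled at 8 m (cutoff 7).
mask = visible_mask(np.array([9.5, 9.5]), np.array([1.0, 8.0]))
```

Because the level is continuous, moving the camera shifts the cutoff smoothly, which is what avoids the abrupt density jumps of discrete LoD schemes.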

