A ROBUST ESTIMATION TECHNIQUE FOR 3D POINT CLOUD REGISTRATION

2016 ◽  
Vol 35 (1) ◽  
pp. 15 ◽  
Author(s):  
Dhanya S Pankaj ◽  
Rama Rao Nidamanuri

The 3D modeling pipeline involves registration of partially overlapping 3D scans of an object. The automatic pairwise coarse alignment of partially overlapping 3D images is generally performed using 3D feature matching. Transformation estimation from the matched features generally requires robust estimation due to the presence of outliers. RANSAC is the method of choice when a model must be estimated from data samples containing outliers. The number of RANSAC iterations depends on the number of data points and on the fraction of inliers to the model, so convergence can be very slow when outliers are numerous. This paper presents a novel algorithm for the 3D registration task which provides more accurate results in less computational time than RANSAC. The proposed algorithm is also compared against existing modifications of RANSAC for 3D pairwise registration. The results indicate that the proposed algorithm tends to obtain the best 3D transformation matrix in less time than the other algorithms.
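
The abstract does not spell out the proposed estimator, but the RANSAC baseline it is compared against can be sketched. Below is a minimal, hypothetical Python sketch of RANSAC rigid-transform estimation from putative 3D feature correspondences; the function names, the minimal sample of three correspondences, and the thresholds are assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst (both Nx3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a reflection if one slipped in
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_registration(src, dst, n_iter=1000, inlier_thresh=0.01, seed=0):
    """Plain RANSAC over putative correspondences src[i] <-> dst[i]."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() > best.sum():
            best = inliers
    # refit on the full consensus set for the final estimate
    R, t = rigid_transform(src[best], dst[best])
    return R, t, best
```

The number of iterations needed for a given confidence grows quickly as the inlier ratio drops, which is exactly the slow-convergence regime the paper targets.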

Author(s):  
A. Christodoulou ◽  
P. van Oosterom

Abstract. In this paper, a method is proposed for solving relative translations of 3D point clouds collected by Mobile Laser Scanning (MLS) techniques. The proposed approach uses the attributes of the 3D points to generate and match 2D projections, employing a simple correlation technique instead of matching in 3D. As a result, the developed method depends more on the number of pixels in the 2D projections and less on the number of points in the point clouds, which makes it more cost-efficient than 3D registration techniques. The method uses this benefit to provide redundant translation parameters for each point cloud pair. Image-based evaluation criteria are used to detect the reliable translation parameters, and only those are used to compute the final solution. Consequently, the confidence level of each final estimate can be computed, together with an indication of robustness showing how many estimates were included in the computation of the final solution. The method performs fast due to its simplicity, especially when medium image resolutions such as 0.15 m are used. Reliable matches can be produced even when the overlap of the point cloud sets is small or the initial offset is large, as long as the offsets are distinguishable in the projections. Furthermore, a technique is proposed to obtain sub-pixel accuracy, since the accuracy of the estimates is otherwise restricted to the grid cell size. The technique seems promising, but further improvement is necessary.
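
As an illustration of the projection-and-correlation idea, the following sketch rasterizes two clouds onto a shared XY grid and recovers the planar offset from the peak of an FFT-based circular cross-correlation. The 0.15 m cell size echoes the resolution mentioned above; everything else (function names, clamping of out-of-grid points, sign convention) is an assumption rather than the authors' code.

```python
import numpy as np

def rasterize(points, cell=0.15, origin=None, shape=None):
    """Project a point cloud onto the XY plane as a binary occupancy image."""
    if origin is None:
        origin = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - origin) / cell).astype(int)
    if shape is None:
        shape = tuple(ij.max(axis=0) + 1)
    ij = np.clip(ij, 0, np.array(shape) - 1)   # clamp points falling off the grid
    img = np.zeros(shape)
    img[ij[:, 0], ij[:, 1]] = 1.0
    return img, origin, shape

def translation_xy(cloud_a, cloud_b, cell=0.15):
    """Planar offset of cloud_b w.r.t. cloud_a from an FFT cross-correlation peak."""
    img_a, origin, shape = rasterize(cloud_a, cell)
    img_b, _, _ = rasterize(cloud_b, cell, origin=origin, shape=shape)  # shared grid
    corr = np.fft.ifft2(np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the half-size correspond to negative (wrapped) shifts
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return np.array(shift) * cell              # offset in metres, to cell accuracy
```

The cost of the correlation is governed by the image size rather than the point count, which is the efficiency argument made above; the returned offset is quantized to the cell size, hence the need for a sub-pixel refinement step.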


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

While repairing industrial machines or vehicles, recognition of components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with a Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. If there is a dominant eigenvalue, the point cloud is decomposed into two clusters, to which the PCA is applied recursively. If the matching is not unique, we continue to distinguish among several candidates: we decompose the point cloud into planar and cylindrical primitives and assign mutual features, such as distance or angle, to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is more computationally demanding but very robust. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts in the database, while in the remaining two cases there is only one additional candidate besides the correct model. The overall computational time is about 10 min in MATLAB.
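
The first step described above is standard enough to sketch. The following Python fragment computes the PCA ellipsoid from the 3-by-3 covariance matrix and applies a hypothetical dominant-eigenvalue splitting rule; the dominance ratio and the median split are assumptions, since the abstract does not give the authors' exact criterion.

```python
import numpy as np

def pca_ellipsoid(points):
    """Eigen-decomposition of the 3x3 covariance matrix of an Nx3 point cloud.

    Returns the eigenvalues in descending order and the matching eigenvectors
    (columns), i.e. the semi-axes of the fitted ellipsoid."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals, evecs = np.linalg.eigh(cov)            # ascending order
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

def split_if_elongated(points, ratio=3.0):
    """Split the cloud into two clusters along the first principal axis
    when the largest eigenvalue dominates (hypothetical splitting rule)."""
    evals, evecs = pca_ellipsoid(points)
    if evals[0] < ratio * evals[1]:
        return [points]                            # no dominant direction, keep whole
    proj = (points - points.mean(axis=0)) @ evecs[:, 0]
    return [points[proj <= np.median(proj)], points[proj > np.median(proj)]]
```

Applying `split_if_elongated` recursively yields the coarse decomposition on which the primitive-based refinement then operates.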


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using fewer control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economic in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds are approximated using least-squares fitting, such as those from a terrestrial laser scanner (TLS). Unfortunately, when the goodness of fit of the surface approximation is assessed on a real dataset, only a noisy point cloud is available to approximate: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface must be known entirely: this can be achieved by printing a mathematically defined T-splines reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-splines local refinement open the door to further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
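
T-splines themselves are not available in common open-source libraries, but the overfitting-versus-smoothing dilemma described above can be illustrated with an ordinary smoothing B-spline surface as a stand-in. In this hypothetical sketch, a known analytic surface plays the role of the printed reference: the RMSE against the noisy data alone cannot distinguish overfitting from a good fit, whereas the RMSE against the reference can. All surfaces, noise levels and smoothing factors are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical reference surface standing in for the printed T-splines artefact.
def reference(x, y):
    return np.sin(2.0 * x) * np.cos(y)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 1500)
y = rng.uniform(0.0, 3.0, 1500)
z_true = reference(x, y)
z_noisy = z_true + rng.normal(scale=0.05, size=x.size)   # simulated TLS noise

for s in (1.0, 200.0):        # small s tends to fit the noise, large s over-smooths
    spline = SmoothBivariateSpline(x, y, z_noisy, s=s)
    rmse_noisy = np.sqrt(np.mean((spline.ev(x, y) - z_noisy) ** 2))
    rmse_ref = np.sqrt(np.mean((spline.ev(x, y) - z_true) ** 2))
    print(f"s={s}: RMSE vs noisy cloud {rmse_noisy:.4f}, vs known reference {rmse_ref:.4f}")
```

Only the second RMSE column, which requires the reference surface to be known, reveals whether a low RMSE against the noisy cloud reflects genuine detail or a fitting of the noise.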


METRON ◽  
2021 ◽  
Author(s):  
Giovanni Saraceno ◽  
Claudio Agostinelli ◽  
Luca Greco

Abstract. A weighted likelihood technique for robust estimation of multivariate wrapped distributions of data points scattered on a $p$-dimensional torus is proposed. The occurrence of outliers in the sample at hand can badly compromise inference for standard techniques such as the maximum likelihood method. Therefore, such model inadequacies need to be handled in the fitting process by a robust technique and an effective downweighting of observations not following the assumed model. Furthermore, the use of a robust method can help in situations of hidden and unexpected substructures in the data. Here, it is suggested to build a set of data-dependent weights based on the Pearson residuals and to solve the corresponding weighted likelihood estimating equations. In particular, robust estimation is carried out using a Classification EM algorithm whose M-step is enhanced by the computation of weights based on the current parameter values. The finite-sample behavior of the proposed method has been investigated by a Monte Carlo numerical study and real data examples.
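
In the weighted likelihood literature, the data-dependent weights mentioned above are typically built from Pearson residuals roughly as follows; this is a schematic form, and the specific density smoothing and residual adjustment function used by the authors are not stated in the abstract.

```latex
\delta(x;\theta) = \frac{\hat f(x)}{\hat m_\theta(x)} - 1 ,
\qquad
w(x;\theta) = \min\left\{ 1,\ \frac{\big[A\!\left(\delta(x;\theta)\right) + 1\big]^{+}}{\delta(x;\theta) + 1} \right\},
\qquad
\sum_{i=1}^{n} w(x_i;\theta)\, u(x_i;\theta) = 0 ,
```

where $\hat f$ is a nonparametric (kernel) density estimate, $\hat m_\theta$ the corresponding smoothed model density, $A$ a residual adjustment function and $u$ the score function; the weighted likelihood estimate solves the last set of equations. Observations with large Pearson residuals, i.e., incompatible with the assumed model, receive weights close to zero and are effectively discarded in the M-step.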


2021 ◽  
Vol 13 (19) ◽  
pp. 3796
Author(s):  
Lei Fan ◽  
Yuanzhi Cai

Laser scanning is a popular means of acquiring indoor scene data of buildings for a wide range of applications concerning the indoor environment. During data acquisition, unwanted data points beyond the indoor space of interest can also be recorded due to the presence of openings, such as windows and doors on walls. For better visualization and further modeling, it is beneficial to filter out those data, which in practice is often done manually. To automate this process, an efficient image-based filtering approach was explored in this research. In this approach, a binary mask image is created and updated through mathematical morphology operations, hole filling and connectivity analysis. The final mask obtained is used to remove the data points located outside the indoor space of interest. The application of the approach to several point cloud datasets confirms its ability to effectively keep the data points in the indoor space of interest, with an average precision of 99.50%. The application cases also demonstrate the computational efficiency (0.53 s at most) of the proposed approach.
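
A minimal sketch of such a mask pipeline, assuming the point cloud has already been projected to a 2D occupancy image, might look as follows; the structuring-element size and the keep-largest-component rule are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy import ndimage

def indoor_mask(occupancy, closing_size=5):
    """Binary mask of the indoor space from a 2D occupancy image.

    The steps loosely mirror the described pipeline: morphological closing,
    hole filling, and connectivity analysis keeping the largest region."""
    mask = ndimage.binary_closing(occupancy, structure=np.ones((closing_size,) * 2))
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)                  # connected-component analysis
    if n > 1:                                        # keep only the largest region
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    return mask

# A point is kept if the mask is True at the image cell it projects into:
# keep = mask[row_index_of(point), col_index_of(point)]
```

Because the morphology operates on a small image rather than on the full point cloud, the filtering cost stays in the sub-second range reported above.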


Author(s):  
S. Horache ◽  
F. Goulette ◽  
J.-E. Deschaud ◽  
T. Lejars ◽  
K. Gruel

Abstract. The recognition and clustering of coins which have been struck by the same die is of interest for archeological studies. Nowadays, this work can only be performed by experts and is very tedious. In this paper, we propose a method to automatically cluster dies based on 3D scans of coins. It consists of three steps: registration, comparison and graph-based clustering. Experimental results on 90 coins coming from a Celtic treasure from the 2nd to 1st century BC show a clustering quality equivalent to that of an expert.
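
The final, graph-based clustering step can be sketched as follows, under the assumption that the registration and comparison steps yield a pairwise similarity matrix; the threshold and the use of plain connected components are illustrative choices, not necessarily the authors' method.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_dies(similarity, threshold):
    """Group coins by die: connect two coins whenever their pairwise similarity
    (obtained after registration and comparison) exceeds a threshold, then take
    connected components of the resulting graph as die clusters."""
    adjacency = csr_matrix(similarity >= threshold)
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels

# Hypothetical similarity scores for three coins: coins 0 and 1 share a die.
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
print(cluster_dies(sim, threshold=0.8))   # -> (2, array([0, 0, 1]))
```

Graph-based grouping has the advantage that a coin pair with a weak direct match can still end up in the same cluster through a chain of strong matches.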


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction using remote-sensing technology for precise and detailed measurements yield new information that may lead to revising drawings of ancient buildings which had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately enable new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore “skipped” much of the post-processing and focused on the images created from the meta-data, simply aligned using a tool that extends an automatic feature-matching algorithm, and a popular renderer that can provide graphic results.


2015 ◽  
Vol 52 (12) ◽  
pp. 122801
Author(s):  
刘志青 Liu Zhiqing ◽  
李鹏程 Li Pengcheng ◽  
张保明 Zhang Baoming ◽  
郭海涛 Guo Haitao ◽  
丁磊 Ding Lei

Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 896 ◽  
Author(s):  
Abhinuv Uppal ◽  
Vanessa Ferdinand ◽  
Sarah Marzen

Cognitive systems exhibit astounding prediction capabilities that allow them to reap rewards from regularities in their environment. How do organisms predict environmental input, and how well do they do it? As a prerequisite to answering that question, we first address the limits on prediction strategy inference, given a series of inputs and predictions from an observer. We study the special case of Bayesian observers, allowing for a probability that the observer randomly ignores data when building her model. We demonstrate that an observer’s prediction model can be correctly inferred for binary stimuli generated from a finite-order Markov model. However, we cannot necessarily infer the model’s parameter values unless we have access to several “clones” of the observer. As stimuli become increasingly complicated, correct inference requires exponentially more data points, computational power, and computational time. These factors place a practical limit on how well we are able to infer an observer’s prediction strategy in an experimental or observational setting.
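
As a purely illustrative setup, the following sketch generates binary stimuli from a finite-order Markov model and simulates a Bayesian (Beta-Bernoulli) observer who ignores each datum with some probability, as described above. The order, priors and ignore probability are arbitrary assumptions, and the inference procedure studied in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2                                             # Markov order of the stimulus
p_next = rng.uniform(0.1, 0.9, size=2 ** k)       # P(x_t = 1 | last k symbols)

def generate(n):
    """Binary stimulus of length k + n from the order-k Markov model above."""
    x = list(rng.integers(0, 2, size=k))
    for _ in range(n):
        ctx = int("".join(map(str, x[-k:])), 2)   # encode the context as an index
        x.append(int(rng.random() < p_next[ctx]))
    return np.array(x)

def bayesian_observer(x, p_ignore=0.1, prior=(1.0, 1.0)):
    """Beta-Bernoulli observer per context, ignoring each datum with prob. p_ignore.

    Returns her sequence of predictions P(x_t = 1 | past)."""
    counts = np.tile(np.asarray(prior, dtype=float), (2 ** k, 1))
    preds = []
    for t in range(k, len(x)):
        ctx = int("".join(map(str, x[t - k:t])), 2)
        preds.append(counts[ctx, 1] / counts[ctx].sum())   # posterior-mean prediction
        if rng.random() >= p_ignore:                       # datum used for learning
            counts[ctx, x[t]] += 1
    return np.array(preds)

stimulus = generate(2000)
predictions = bayesian_observer(stimulus)
```

The inference problem posed in the paper is the reverse direction: given `stimulus` and `predictions`, recover the observer's model and, ideally, her parameters.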


2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper presents a simple approach that utilizes a Kinect-based scanner to create models suitable for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually needs complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction. Each process may require several functions and algorithms to accomplish its specific task. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used for filtering noisy points. This paper attempts to develop a simple Kinect-based scanner and its specific modeling approach without involving the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. A set of organized point clouds can be generated by the scanner, and these organized point clouds can be aligned precisely by a simple transformation matrix instead of the ICP. The surface quality of raw point clouds captured by Kinect is usually rough. To address this drawback, this paper introduces a solution to obtain a smooth surface model. In addition, those processes have been efficiently developed with free open-source libraries: VTK, Point Cloud Library and OpenNI.
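
The alignment by a known transformation instead of ICP can be sketched as follows: for a calibrated turntable axis, each scan is mapped back to the first scan's frame by a single homogeneous matrix. The axis parameters, the step angle and the helper names below are placeholders, not values from the paper.

```python
import numpy as np

def table_rotation(angle_deg, axis_point, axis_dir=(0.0, 0.0, 1.0)):
    """4x4 homogeneous transform for a known turntable rotation.

    Rotates by angle_deg about the calibrated turntable axis passing through
    axis_point with direction axis_dir (axis-angle / Rodrigues form)."""
    a = np.radians(angle_deg)
    ux, uy, uz = np.asarray(axis_dir, dtype=float) / np.linalg.norm(axis_dir)
    c, s = np.cos(a), np.sin(a)
    R = np.array([
        [c + ux*ux*(1-c),    ux*uy*(1-c) - uz*s, ux*uz*(1-c) + uy*s],
        [uy*ux*(1-c) + uz*s, c + uy*uy*(1-c),    uy*uz*(1-c) - ux*s],
        [uz*ux*(1-c) - uy*s, uz*uy*(1-c) + ux*s, c + uz*uz*(1-c)],
    ])
    p = np.asarray(axis_point, dtype=float)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p - R @ p          # rotate about the axis point, not the origin
    return T

# A scan taken after k steps of 30 degrees maps back to the reference frame with:
# T = table_rotation(-30.0 * k, axis_point=(0.4, 0.0, 0.0))
# aligned = (np.c_[scan_k, np.ones(len(scan_k))] @ T.T)[:, :3]
```

Because the rotation angle is known from the turntable, no iterative correspondence search is needed, which is the stated motivation for skipping ICP.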

