Iterative multilinear optimization for planar model fitting under geometric constraints

2021 ◽  
Vol 7 ◽  
pp. e691
Author(s):  
Jorge Azorin-Lopez ◽  
Marc Sebban ◽  
Andres Fuster-Guillo ◽  
Marcelo Saval-Calvo ◽  
Amaury Habrard

Planes are core geometric models, present everywhere in the three-dimensional real world. There are many examples of man-made constructions based on planar patches: facades, corridors, packages, boxes, etc. In these constructions, planar patches must satisfy orthogonality constraints by design (e.g. walls with a ceiling and floor). Our hypothesis is that by exploiting orthogonality constraints when possible in the scene, we can perform a reconstruction from a set of points captured by 3D cameras with high accuracy and a low response time. We introduce a method that iteratively fits a planar model in the presence of noise according to three main steps: a clustering-based unsupervised step that builds pre-clusters from the set of (noisy) points; a linear-regression-based supervised step that optimizes a set of planes from the clusters; and a reassignment step that challenges the members of the current clusters in a way that minimizes the residuals of the linear predictors. The main contribution is that the method can simultaneously fit different planes in a point cloud, providing a good accuracy/speed trade-off even in the presence of noise and outliers, with a smaller processing time than previous methods. An extensive experimental study on synthetic data compares our method with the most current and representative methods. The quantitative results provide indisputable evidence that our method can generate very accurate models faster than baseline methods. Moreover, two case studies on reconstructing planar-based objects using a Kinect sensor provide qualitative evidence of the efficiency of our method in real applications.
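The three-step loop described in the abstract (pre-cluster, fit planes, reassign by residual) can be sketched in a few lines. This is a minimal illustration using a crude nearest-center pre-clustering and SVD-based least-squares plane fits, not the authors' implementation; all function names are my own:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def iterative_multiplane_fit(points, k, n_iter=30, seed=0):
    """Sketch of the three-step loop: pre-cluster, fit k planes, reassign."""
    rng = np.random.default_rng(seed)
    # Step 1: crude unsupervised pre-clustering (random centers, nearest-center labels).
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
    planes = []
    for _ in range(n_iter):
        # Step 2: fit one plane per cluster by least squares.
        planes = []
        for j in range(k):
            members = points[labels == j]
            if len(members) < 3:  # guard: reseed a degenerate/empty cluster
                members = points[rng.choice(len(points), size=3, replace=False)]
            planes.append(fit_plane(members))
        # Step 3: reassign each point to the plane with the smallest residual.
        resid = np.stack([np.abs((points - c) @ n) for c, n in planes], axis=1)
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, planes
```

On two noisy orthogonal planes, the reassignment step typically recovers the correct membership within a few iterations, which is the behavior the accuracy/speed claim rests on.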

Author(s):  
Mehmet Niyazi Çankaya

Systematic sampling is used as a method to obtain quantitative results from tissues and radiological images. Systematic sampling on the real line (R) is a very attractive method frequently consulted by practitioners in biomedical imaging. For systematic sampling on R, the measurement function (MF) is obtained by slicing the three-dimensional object systematically at equidistant positions. If the parameter q of the MF is estimated with a small enough mean square error, important conclusions can be drawn for design-based stereology. This study is an extension of [17]: an exact calculation method is proposed to calculate the constant λ(q,N) of the confidence interval in systematic sampling. In the results, synthetic data support the results obtained on real data. The covariogram model currently used in variance approximation, proposed by [28,29], is tested on different measurement functions to assess its performance in estimating the variance of systematic sampling on R. The exact value of the constant λ(q,N) is examined for the different measurement functions as well.
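For context, the systematic-sampling estimator discussed here is of Cavalieri type: the integral of the measurement function is estimated by summing equidistant samples multiplied by the sampling period, and its variance can be probed empirically over uniformly random start offsets. A minimal sketch, in which the example measurement function and all names are hypothetical:

```python
import numpy as np

def cavalieri_estimate(f, period, offset, upper):
    """Systematic-sampling (Cavalieri) estimate of the integral of f over
    [0, upper]: sum the measurement function at equidistant positions,
    times the sampling period."""
    xs = np.arange(offset, upper, period)
    return period * np.sum(f(xs))

def empirical_variance(f, period, upper, n_offsets=1000, seed=0):
    """Empirical mean and variance of the estimator over random start offsets
    drawn uniformly from [0, period)."""
    rng = np.random.default_rng(seed)
    ests = [cavalieri_estimate(f, period, rng.uniform(0, period), upper)
            for _ in range(n_offsets)]
    return float(np.mean(ests)), float(np.var(ests))
```

Covariogram-based formulas, like those tested in the study, aim to approximate this offset-driven variance from a single systematic sample rather than by repetition.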


Author(s):  
Mehmet Niyazi Çankaya

Systematic sampling is used as a method to obtain quantitative results from tissues and radiological images. Systematic sampling on the real line (R) is a very attractive method frequently consulted by practitioners in biomedical imaging. For systematic sampling on R, the measurement function (MF) is obtained by slicing the three-dimensional object systematically at equidistant positions. The covariogram model currently used in variance approximation, proposed by [28,29], is tested on different measurement functions in a class to assess its performance in estimating the variance of systematic sampling on R. This study is an extension of [17]: an exact calculation method is proposed to calculate the constant λ(q,N) of the confidence interval in systematic sampling. The exact value of the constant λ(q,N) is examined for the different measurement functions as well. As a result, it is observed from the simulation that the proposed MF should be used to check the performance of the variance approximation and of the constant λ(q,N). Synthetic data support the results obtained on real data.


Author(s):  
Mehmet Niyazi Çankaya

Systematic sampling on the real line (R) using different probes is a very attractive method frequently consulted in biomedical imaging, e.g. in surgery. This study is an extension of [16], and an exact calculation method is proposed for calculating the constant λq of the confidence interval for systematic sampling. If the smoothness constant q of the measurement function, obtained by slicing the three-dimensional object, is estimated with a small enough mean square error, important conclusions can be drawn for design-based stereology, which is used to obtain quantitative results from tissues and radiological images. Synthetic data support the results obtained on real data. The currently used covariogram model proposed by [28] is tested on different measurement functions to assess its performance in variance estimation. The exact value of the constant λq is examined for the different measurement functions as well.


Author(s):  
Christopher J. Arthurs ◽  
Nan Xiao ◽  
Philippe Moireau ◽  
Tobias Schaeffter ◽  
C. Alberto Figueroa

Abstract A major challenge in constructing three-dimensional patient-specific hemodynamic models is the calibration of model parameters to match patient data on flow, pressure, wall motion, etc., acquired in the clinic. Current workflows are manual and time-consuming. This work presents a flexible computational framework for model parameter estimation in cardiovascular flows that relies on the following fundamental contributions. (i) A Reduced-Order Unscented Kalman Filter (ROUKF) model for data assimilation of wall material parameters and simple lumped parameter network (LPN) boundary condition model parameters. (ii) A constrained least squares augmentation (ROUKF-CLS) for more complex LPNs. (iii) A “Netlist” implementation, supporting easy filtering of parameters in such complex LPNs. The ROUKF algorithm is demonstrated using non-invasive patient-specific data on anatomy, flow and pressure from a healthy volunteer. The ROUKF-CLS algorithm is demonstrated using synthetic data on a coronary LPN. The methods described in this paper have been implemented as part of the CRIMSON hemodynamics software package.
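As a rough illustration of the sigma-point filtering idea (not the paper's ROUKF, which is reduced-order and multidimensional), a scalar unscented-style parameter filter estimating a single lumped resistance R from pressure-flow observations y ≈ R·Q might look like this; all names and the observation model are assumptions:

```python
import numpy as np

def ukf_parameter_estimate(theta0, P0, obs, h, r_var, q_var=1e-6, kappa=1.0):
    """Scalar sigma-point parameter filter (sketch).
    theta0/P0: prior mean and variance of the parameter;
    obs: iterable of (input, measurement) pairs;
    h(theta, u): observation model; r_var: measurement noise variance."""
    theta, P = float(theta0), float(P0)
    for u, y in obs:
        P += q_var                        # random-walk model for the parameter
        s = np.sqrt((1 + kappa) * P)      # sigma-point spread around the mean
        sig = np.array([theta, theta + s, theta - s])
        w = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])
        z = np.array([h(t, u) for t in sig])   # propagate sigma points
        z_mean = w @ z
        P_zz = w @ (z - z_mean) ** 2 + r_var   # predicted measurement variance
        P_tz = w @ ((sig - theta) * (z - z_mean))  # cross-covariance
        K = P_tz / P_zz                   # Kalman gain
        theta += K * (y - z_mean)
        P -= K * P_tz
    return theta, P
```

The appeal mirrored here is that each measurement tightens the parameter estimate without any adjoint or gradient computation, which is what makes the approach attractive for calibrating boundary-condition parameters.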


2017 ◽  
Vol 22 (4) ◽  
pp. 901-919 ◽  
Author(s):  
M. Graba

Abstract This paper provides a comparative analysis of selected geometric constraint parameters for cracked plates subjected to tension. The results of three-dimensional numerical calculations were used to assess the distribution of these parameters around the crack front and their changes along it. The study also considered the influence of the external load on the averaged values of the geometric constraint parameters, as well as the relationship between the material constants and the level of geometric constraint contributing to the actual fracture toughness for certain geometries.


2017 ◽  
Vol 12 (1) ◽  
pp. 38-47 ◽  
Author(s):  
Marcin Staniek

The paper presents a stereo vision method for mapping road pavement. The mapped road is a set of points in three-dimensional space. The proposed measurement method and its implementation make it possible to generate a precise mapping of a road surface with a resolution of 1 mm in the transverse, longitudinal and vertical dimensions. Such accurate mapping of the road results from applying image-processing technologies to stereo images. Using the CoVar matching measure at the image-matching stage helps eliminate corner detection and stereo-image filtering while maintaining the effectiveness of the mapping algorithm. Proper analysis of the image-based data and application of mathematical transformations make it possible to detect many types of distress, such as potholes, patches, bleeding, cracks, ruts and roughness. The paper also compares the results of the proposed solution with a reference test bench. A statistical analysis of the differences permits judgment of the error types.
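A covariance-style matching measure of the kind the paper calls CoVar can be illustrated with a zero-mean normalized patch correlation used to pick a stereo disparity. This is a generic sketch under that assumption, not the paper's exact definition, and the function names are my own:

```python
import numpy as np

def covar_measure(a, b):
    """Covariance-based similarity between two equally sized patches:
    zero-mean normalized correlation (stand-in for the paper's CoVar)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_disparity(left, right, row, col, half, max_d):
    """Pick the disparity whose right-image patch best matches the left patch
    centered at (row, col), searching along the same scanline."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_d + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(covar_measure(ref, cand))
    return int(np.argmax(scores))
```

Because the measure is normalized and zero-mean, it tolerates brightness and contrast differences between the two cameras, which is what lets a dense matcher skip separate corner detection and filtering stages.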


Geophysics ◽  
1990 ◽  
Vol 55 (9) ◽  
pp. 1166-1182 ◽  
Author(s):  
Irshad R. Mufti

Finite‐difference seismic models are commonly set up in 2-D space. Such models must be excited by a line source which leads to different amplitudes than those in the real data commonly generated from a point source. Moreover, there is no provision for any out‐of‐plane events. These problems can be eliminated by using 3-D finite‐difference models. The fundamental strategy in designing efficient 3-D models is to minimize computational work without sacrificing accuracy. This was accomplished by using a (4,2) differencing operator which ensures the accuracy of much larger operators but requires many fewer numerical operations as well as significantly reduced manipulation of data in the computer memory. Such a choice also simplifies the problem of evaluating the wave field near the subsurface boundaries of the model where large operators cannot be used. We also exploited the fact that, unlike the real data, the synthetic data are free from ambient noise; consequently, one can retain sufficient resolution in the results by optimizing the frequency content of the source signal. Further computational efficiency was achieved by using the concept of the exploding reflector which yields zero‐offset seismic sections without the need to evaluate the wave field for individual shot locations. These considerations opened up the possibility of carrying out a complete synthetic 3-D survey on a supercomputer to investigate the seismic response of a large‐scale structure located in Oklahoma. The analysis of results done on a geophysical workstation provides new insight regarding the role of interference and diffraction in the interpretation of seismic data.
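The (4,2) differencing idea (fourth-order accuracy in space, second-order in time) can be shown on the 1-D acoustic wave equation. This is a minimal sketch with a crude rigid-edge treatment, not the authors' 3-D code:

```python
import numpy as np

def step_wave_1d(u_prev, u, c, dt, dx):
    """One (4,2) update: fourth-order Laplacian in space, second-order in time.
    The two cells at each edge are left frozen (a crude rigid boundary)."""
    lap = np.zeros_like(u)
    lap[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                 + 16 * u[3:-1] - u[4:]) / (12 * dx ** 2)
    return 2 * u - u_prev + (c * dt) ** 2 * lap

def run(nx=400, nt=300, c=1.0, dx=1.0, dt=0.5):
    """Propagate a Gaussian pulse; with zero initial velocity it splits into
    two half-amplitude pulses travelling at speeds +c and -c."""
    x = np.arange(nx) * dx
    u = np.exp(-((x - 100.0) / 5.0) ** 2)
    u_prev = u.copy()  # zero initial velocity
    for _ in range(nt):
        u_prev, u = u, step_wave_1d(u_prev, u, c, dt, dx)
    return x, u
```

The five-point fourth-order stencil illustrates the trade-off the paper describes: it delivers the accuracy of a wider operator at lower cost, while needing only two extra cells of special treatment near boundaries where large operators cannot be applied.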

