Displacement and Velocity Kinematic Synthesis

1968 ◽  
Vol 90 (3) ◽  
pp. 527-529 ◽  
Author(s):  
D. W. Lewis ◽  
G. L. Falkenhagen

One may wish to define the velocity transformation in addition to the displacement transformation when synthesizing a mechanism. The approach presented here is essentially the one known as “damped least squares,” which allows for successive adjustment of the parameters that define a particular type of mechanism. Repeated application of this process converges toward an optimum approximation of the displacement and velocity transformation curves, which are described by a series of data points. The method has been applied to a four-bar linkage as an example of application; the technique itself is general and is not limited to any specific mechanism.
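
The abstract does not give the update equations, but damped least squares in this sense is the familiar Levenberg–Marquardt-style iteration. Below is a minimal sketch, assuming user-supplied residual and Jacobian functions that stack the displacement and velocity errors at the data points; all names and the fixed damping value are illustrative, not from the paper:

```python
import numpy as np

def damped_least_squares(residual, jacobian, params, damping=1e-2, iters=50):
    """Iteratively adjust mechanism parameters so that the residuals
    (errors at the displacement/velocity data points) shrink.

    residual(params) -> r, shape (m,); jacobian(params) -> J, shape (m, n).
    """
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        # Damped normal equations: (J^T J + lambda*I) dp = -J^T r
        A = J.T @ J + damping * np.eye(p.size)
        dp = np.linalg.solve(A, -J.T @ r)
        p += dp
        if np.linalg.norm(dp) < 1e-10:  # converged
            break
    return p
```

The damping term keeps the normal equations well conditioned when the Jacobian is nearly singular, which is what makes the repeated adjustment converge rather than diverge on poor initial guesses.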

1967 ◽  
Vol 89 (1) ◽  
pp. 173-175 ◽  
Author(s):  
D. W. Lewis ◽  
C. K. Gyory

The coupler point curve of a plane mechanism is a curve that may be described by a series of paired coordinates. An extension of the method of “damped least squares” provides a means for successive adjustment of the parameters that define a particular type of mechanism. Repeated application of this process converges toward an optimum approximation to the desired curve as described by the series of paired coordinates. The method has been applied to a four-bar linkage as an example of application.
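
As a concrete point of reference for the four-bar example, here is a hypothetical sketch of evaluating the coupler point for one crank angle; evaluating it over a sweep of angles traces the coupler curve whose mismatch with the target coordinates would feed the damped least squares adjustment. The parameter names and the open-configuration branch choice are assumptions of this sketch, not from the paper:

```python
import numpy as np

def coupler_point(theta, a, b, c, d, u, v):
    """Coupler point of a planar four-bar linkage.

    a: crank, b: coupler, c: rocker, d: ground link length;
    (u, v): coupler point location in the coupler (A->B) frame;
    theta: crank angle. Returns None if the linkage cannot close.
    """
    A = np.array([a * np.cos(theta), a * np.sin(theta)])  # crank pin
    O4 = np.array([d, 0.0])                               # rocker pivot
    g = O4 - A
    r = np.linalg.norm(g)
    if r > b + c or r < abs(b - c):   # coupler/rocker circles do not meet
        return None
    # circle-circle intersection for the coupler/rocker joint B
    h = (b**2 - c**2 + r**2) / (2 * r)
    k = np.sqrt(max(b**2 - h**2, 0.0))
    e = g / r                         # unit vector A -> O4
    n = np.array([-e[1], e[0]])       # left normal: open configuration
    B = A + h * e + k * n
    # coupler frame: x along A->B, y normal to it
    ex = (B - A) / b
    ey = np.array([-ex[1], ex[0]])
    return A + u * ex + v * ey
```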


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so some good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum, by sampling, into a dataset with a small number of data points and then converts the peak removal process into a search problem in artificial intelligence (AI): minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimum points of each section are then collected to form a dataset for peak removal by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
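
The authors' search algorithm is not reproduced here; the following is a loose sketch of the pipeline the abstract outlines, under stated assumptions: moving-average smoothing, sectioning at Chebyshev nodes, section minima as baseline candidates, and a simple greedy deletion of points whenever removal lowers the MAE of a polynomial fit. The greedy step is a stand-in for the paper's search, and all window sizes and degrees are illustrative:

```python
import numpy as np

def baseline_candidates(y, window=9, n_sections=40):
    """Smooth the spectrum, split it at Chebyshev nodes, and keep the
    minimum of each section as a candidate baseline point."""
    s = np.convolve(y, np.ones(window) / window, mode="same")  # moving average
    n = len(s)
    # Chebyshev nodes mapped onto [0, n-1] give unequally spaced edges
    k = np.arange(n_sections + 1)
    edges = (n - 1) * (1 - np.cos(np.pi * k / n_sections)) / 2
    edges = np.unique(edges.astype(int))
    idx = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi > lo:
            idx.append(lo + np.argmin(s[lo:hi]))
    return np.array(idx)

def remove_peaks_greedy(x, y, idx, degree=4):
    """Greedily delete candidate points while doing so lowers the MAE of a
    polynomial fit -- a simple stand-in for the paper's search step."""
    idx = list(idx)

    def mae(ids):
        coef = np.polyfit(x[ids], y[ids], degree)
        return np.mean(np.abs(np.polyval(coef, x[ids]) - y[ids]))

    current = mae(idx)
    improved = True
    while improved and len(idx) > degree + 2:
        improved = False
        for i in range(len(idx)):
            trial = idx[:i] + idx[i + 1:]
            m = mae(trial)
            if m < current:             # deleting this point helps: keep deletion
                current, idx, improved = m, trial, True
                break
    return np.array(idx)
```

Because peak points sit far above the polynomial fit, deleting them reduces the MAE sharply, so the greedy search preferentially strips peaks while retaining baseline points.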


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data are highly irregularly sampled along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
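
Ferguson's wavefield extrapolation operators are not reconstructed here; the sketch below only illustrates the generic weighted, damped least-squares solve and the idea of keeping a limited part of the Hessian. Retaining just the main diagonal, as done here, is a cruder cut than the paper's limited-diagonals scheme, but it shows where the two-orders-of-magnitude saving comes from:

```python
import numpy as np

def wdls_solve(L, d, w, damping=1e-3, diag_only=False):
    """Weighted, damped least squares:
    minimize ||W^(1/2) (L m - d)||^2 + damping * ||m||^2.

    L: forward (extrapolation) operator, d: recorded data,
    w: per-sample data weights (vector form of diagonal W).
    With diag_only=True only the diagonal of the Hessian L^H W L is kept,
    replacing a full solve by an elementwise division.
    """
    W = np.diag(w)
    H = L.conj().T @ W @ L                    # Hessian of the weighted misfit
    rhs = L.conj().T @ (w * d)                # gradient-side term L^H W d
    if diag_only:
        m = rhs / (np.diag(H).real + damping) # cheap diagonal inverse
    else:
        m = np.linalg.solve(H + damping * np.eye(L.shape[1]), rhs)
    return m
```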


Author(s):  
Vassilios E. Theodoracatos ◽  
Vasudeva Bobba

Abstract In this paper an approach is presented for generating a NURBS (Non-Uniform Rational B-splines) surface from a large set of 3D data points. The main advantage of the NURBS representation is its ability to describe analytically both precise quadratic primitives and free-form curves and surfaces. An existing three-dimensional laser-based vision system is used to obtain the spatial point coordinates of an object surface with respect to a global coordinate system. The least-squares approximation technique is applied in both the image and world space of the digitized physical object to calculate the homogeneous vector and the control net of the NURBS surface. A new non-uniform knot vectorization process is developed based on five data parametrization techniques: four existing ones, viz., uniform, chord length, centripetal, and affine invariant angle, and a new technique based on surface area developed in this study. Least-squares error distribution and surface interrogation are used to evaluate the quality of surface fairness for a minimum number of NURBS control points.
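
Three of the five parametrizations compared in the paper have standard closed forms; the sketch below states them concretely (the affine invariant angle method and the paper's new surface-area technique are omitted). Parameter values like these seed the knot vector and the least-squares fit of the control net:

```python
import numpy as np

def parametrize(points, method="chord"):
    """Assign a parameter in [0, 1] to each 3D data point.

    'uniform'    : equal spacing regardless of geometry
    'chord'      : spacing proportional to chord length between points
    'centripetal': proportional to the square root of chord length
    """
    pts = np.asarray(points, dtype=float)
    if method == "uniform":
        return np.linspace(0.0, 1.0, len(pts))
    dists = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    if method == "centripetal":
        dists = np.sqrt(dists)
    t = np.concatenate(([0.0], np.cumsum(dists)))
    return t / t[-1]
```

Chord length tracks arc length closely on smooth data, while the centripetal variant damps the influence of long chords and tends to behave better near sharp turns, which is why fits are usually compared across several parametrizations.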


Author(s):  
Bo Wang ◽  
Chen Sun ◽  
Keming Zhang ◽  
Jubing Chen

Abstract As a representative type of outlier, abnormal data in displacement measurement inevitably occur in full-field optical metrology and significantly affect further evaluation, especially when the strain field is calculated by differencing the displacement. In this study, an outlier removal method is proposed that recognizes and removes abnormal data in an optically measured displacement field. An iterative critical factor least squares (CFLS) algorithm is developed that uses the distance between the data points and the least-squares plane to identify outliers. A successive boundary point algorithm is proposed to divide the measurement domain so as to improve the applicability and effectiveness of the CFLS algorithm. The feasibility and precision of the proposed method are discussed in detail through simulations and experiments. Results show that the outliers are reliably recognized and that the precision of the strain estimation is greatly improved by these methods.
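
The CFLS iteration itself is not specified in the abstract; the sketch below shows only the generic ingredient it builds on: distance from a least-squares plane compared against a critical-factor threshold. The robust scale estimate and the default factor are assumptions of this sketch, not the authors' values:

```python
import numpy as np

def flag_outliers(x, y, u, critical_factor=3.0):
    """Fit a least-squares plane u = a*x + b*y + c to a displacement patch
    and flag points whose distance from the plane exceeds
    critical_factor times a robust estimate of the residual scale."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    resid = u - A @ coef
    # median absolute deviation, scaled to estimate sigma robustly
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > critical_factor * scale
```

In practice such a test is applied patch by patch (here, the role of the paper's successive boundary point subdivision), since a single plane cannot approximate a full displacement field.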


1977 ◽  
Vol 14 (02) ◽  
pp. 411-415 ◽  
Author(s):  
E. J. Hannan ◽  
Marek Kanter

The least squares estimators β_j(N), j = 1, …, p, from N data points, of the autoregressive constants of a stationary autoregressive model are considered when the disturbances have a distribution attracted to a stable law of index α < 2. It is shown that N^{1/δ}(β_j(N) − β_j) converges almost surely to zero for any δ > α. Some comments are made on alternative definitions of the β_j(N).
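
For reference, the estimators in question are the ordinary least squares regression coefficients of x_t on its p lags; the paper's contribution is the almost-sure convergence rate under heavy-tailed (stable, α < 2) disturbances, not the estimator itself. A minimal sketch:

```python
import numpy as np

def ar_least_squares(x, p):
    """Least squares estimates of the autoregressive constants beta_1..beta_p
    in the model x_t = beta_1 x_{t-1} + ... + beta_p x_{t-p} + e_t."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # row t of X holds the p lagged values (x_{t-1}, ..., x_{t-p})
    X = np.column_stack([x[p - j:N - j] for j in range(1, p + 1)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```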

