Comparison of Relaxation Modulus Converted from Frequency- and Time-Dependent Viscoelastic Functions through Numerical Methods

2018, Vol 8 (12), pp. 2447
Author(s): Weiguang Zhang, Bingyan Cui, Xingyu Gu, Qiao Dong

Due to the difficulty of obtaining the relaxation modulus directly from experiments, many methods for interconverting other viscoelastic functions to the relaxation modulus have been developed. The objectives of this paper were to analyze the differences between relaxation moduli converted from the dynamic modulus and from the creep compliance, and to explore their potential causes. The selected methods were numerical interconversions based on the Prony series representation. For the dynamic-to-relaxation conversion, the time spectrum was determined by the collocation method; for the creep-to-relaxation conversion, both the collocation method and the least squares method were adopted to perform the Laplace transform. The results show that the two methods do not differ significantly in estimating the relaxation modulus; their differences lie mostly in the transient reduced-time region. Averaging the two methods is suggested to avoid large deviations arising from a single experiment. For predicting viscoelastic responses from creep compliance, the collocation method yields results comparable to the least squares method, so the computationally simpler collocation method may be preferable in practice. Further, the master curve shape is sensitive to the Prony series coefficients, and the difference in the transient reduced-time region may be attributed to indeterminate Prony series coefficients.
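As a minimal illustration of the Prony series machinery underlying such interconversions, the sketch below fits a Prony series E(t) = E_inf + sum_i E_i exp(-t/rho_i) to relaxation data by linear least squares, with the relaxation times rho_i fixed a priori (one per decade, a common choice). The data and parameter values are invented for the example, not taken from the paper.

```python
import numpy as np

# Synthetic "measured" relaxation modulus over a reduced-time grid.
t = np.logspace(-2, 4, 50)                    # reduced time (s)
E_true = 100 + 900 * np.exp(-t / 10.0)        # invented single-term data

# Preselect relaxation times one decade apart; the coefficients are then
# linear unknowns, so ordinary least squares suffices.
rho = np.logspace(-2, 4, 7)
A = np.hstack([np.ones((t.size, 1)),          # column for E_inf
               np.exp(-t[:, None] / rho[None, :])])
coef, *_ = np.linalg.lstsq(A, E_true, rcond=None)

E_fit = A @ coef
print("max relative fit error:", np.max(np.abs(E_fit - E_true) / E_true))
```

Because the coefficients enter linearly once the relaxation times are fixed, this step is shared by both the collocation and least squares variants discussed above; they differ in how the fitting equations are assembled.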

Author(s): Jannike Solsvik, Hugo Jakobsen

Two numerical methods in the family of weighted residual methods, the orthogonal collocation and least squares methods, are used within the spectral framework to solve a linear reaction-diffusion pellet problem with slab and spherical geometries. In this work, the node points are taken as the roots of orthogonal polynomials in the Jacobi family. Two Jacobi polynomial parameters, alpha and beta, can be used to tune the distribution of the roots within the domain. Further, the internal and boundary points of the boundary-value problem can be chosen as: i) Gauss-Lobatto-Jacobi points, or ii) Gauss-Jacobi points plus the boundary points. The objective of this paper is thus to investigate the influence of the node point distribution within the domain when adopting the orthogonal collocation and least squares methods. Moreover, the results of the two numerical methods are compared to examine whether they show the same sensitivity and accuracy with respect to the node point distribution. The notable findings are as follows: i) The Legendre polynomial, i.e., alpha=beta=0, is a very robust Jacobi polynomial, giving the best condition number of the coefficient matrix and good behavior of the error as a function of polynomial order. This polynomial gives good results for small and large gradients in both slab and spherical pellet geometries, for both of the weighted residual methods applied. ii) With the least squares method, the error decreases faster with increasing polynomial order than with the orthogonal collocation method. However, the orthogonal collocation method is less sensitive to the choice of Jacobi polynomial and also attains lower error values than the least squares method, owing to the favorable lower condition numbers of its coefficient matrices. Thus, for this particular problem, the orthogonal collocation method is recommended over the least squares method. iii) The orthogonal collocation method shows only minor differences between Gauss-Lobatto-Jacobi points and Gauss-Jacobi points plus the boundary points.
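A small sketch of the node-point construction discussed above: Legendre points correspond to alpha = beta = 0 in the Jacobi family and are available directly in NumPy; roots for general (alpha, beta) would come from scipy.special.roots_jacobi. Option (ii) in the text, Gauss points plus boundary points, simply appends the domain endpoints.

```python
import numpy as np

# Gauss-Legendre interior points on (-1, 1): the alpha = beta = 0 case
# reported as the most robust choice in the text.
n = 5
interior, _ = np.polynomial.legendre.leggauss(n)

# Option (ii): Gauss-Jacobi points plus the boundary points.
nodes = np.concatenate(([-1.0], interior, [1.0]))
print(np.round(nodes, 4))
```

Tuning alpha and beta redistributes the interior roots (e.g. alpha = beta = -0.5 clusters them toward the boundaries, Chebyshev-style), which is exactly the sensitivity the paper investigates.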


1982, Vol 58 (5), pp. 213-219
Author(s): Jean Beaulieu, Yvan J. Hardy

This paper presents a method of analysis that differentiates between spruce budworm-caused mortality and regular mortality of balsam fir in the Gatineau region of Quebec. A first attempt was made using multiple linear regression and a uniform random number generator. To overcome the bias inherent in the least squares method when dealing with a binary (0,1) dependent variable, a probit analysis was also conducted; in this case, the parameters and their variances were estimated by the maximum likelihood method. The two approaches proved to be equivalent when percent budworm-caused mortality was compared within the 1958 to 1979 period covered by the data at hand, while the outbreak lasted from 1968 to 1975. In 1979, approximately 55% of the stems had been killed by the budworm, accounting for 53% of the volume. Maple-yellow birch associations were more affected than fir associations, although no significant difference was found. Fir mortality was delayed by aerial spraying of insecticides, but this advantage disappeared as soon as the spray operations came to an end.
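A toy illustration (with invented data, not the paper's survey) of why ordinary least squares is problematic for a binary (0,1) dependent variable, the issue that motivates the probit analysis above: the fitted line is unbounded, so predicted "probabilities" can fall outside [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)                  # e.g. a defoliation index (invented)
p = 1.0 / (1.0 + np.exp(-(x - 5.0)))         # assumed true mortality probability
y = (rng.random(x.size) < p).astype(float)   # observed 0/1 mortality

# Least-squares line through binary outcomes (the "first attempt").
slope, intercept = np.polyfit(x, y, 1)
pred = intercept + slope * x
print("predictions outside [0,1]:", int(np.sum((pred < 0) | (pred > 1))))
```

A probit or logit model constrains predictions to [0, 1] by construction, which is why its maximum-likelihood fit is the standard remedy.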


Mathematics, 2020, Vol 8 (11), pp. 1873
Author(s): Konrad Kułakowski

One of the most popular methods of calculating priorities based on pairwise comparison matrices (PCM) is the geometric mean method (GMM). It is equivalent to the logarithmic least squares method (LLSM), so some use both names interchangeably, treating them as the same approach; the main difference is in the way the calculations are done. It turns out that a similar relationship holds for incomplete matrices. Based on Harker's method for the incomplete PCM, and using the same substitution for the missing entries, it is possible to construct a geometric mean solution for the incomplete PCM that is fully compatible with the existing LLSM for the incomplete PCM. Again, both approaches lead to the same results, differing only in how the final solution is computed. The aim of this work is to present, in a concise form, the computational method behind the GMM for an incomplete PCM. The computational method is presented so as to emphasize the relationship between the original GMM and the proposed solution; hence, anyone who knows the GMM for a complete PCM should easily understand its proposed extension. Theoretical considerations are accompanied by a numerical example, allowing the reader to follow the calculations step by step.
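For readers unfamiliar with the baseline, here is a minimal sketch of the GMM for a *complete* PCM (the case the proposed extension generalizes): the priority of alternative i is the geometric mean of row i, normalized to sum to one. The 3x3 matrix is an invented, perfectly consistent example.

```python
import numpy as np

# A consistent pairwise comparison matrix: c[i][j] = w_i / w_j.
C = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])

row_gm = np.prod(C, axis=1) ** (1.0 / C.shape[1])  # geometric mean of each row
w = row_gm / row_gm.sum()                           # normalized priorities
print(np.round(w, 4))
```

For a consistent PCM this recovers the underlying weights exactly (here 4/7, 2/7, 1/7); the incomplete-matrix extension substitutes Harker-style values for missing entries before the same computation.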


Author(s): М.Е. Eskaliyev, А.А. Masimgazieva, N.A. Nurgali, ...

The article provides a general algorithm of the ordinary least squares (OLS) method suitable for implementation as a computer program, taking into account the properties of the Gram matrix. The difference between, and the advantage of, OLS over the long-known Lagrange and Newton interpolation polynomials is emphasized. Using the general OLS algorithm, the quadratic variant of OLS is considered, and its full algorithm is characterized using selected mathematical formulas. The scope of OLS in everyday and practical computational problems is indicated: the least squares method has a wide range of applications, especially in geographical forecasting, hydrometeorological control, and the assessment of geological resources. For applied calculations, OLS is therefore used to compute more accurate approximate values of functions that are given in tabular form. Its main idea is to construct a function that smooths out the deviations caused by errors made during measurement.
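A hedged sketch of the quadratic variant described above (invented table values, not the article's examples): fit f(x) ~ a0 + a1*x + a2*x^2 to tabulated data by solving the normal equations G a = b, where G is the Gram matrix of the basis functions evaluated at the data points.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1])     # tabulated (noisy) values

Phi = np.vstack([np.ones_like(x), x, x**2]).T  # basis 1, x, x^2
G = Phi.T @ Phi                                # Gram matrix
b = Phi.T @ y
a = np.linalg.solve(G, b)                      # coefficients a0, a1, a2
print(np.round(a, 3))
```

Unlike Lagrange or Newton interpolation, the fitted parabola does not pass through every point; it smooths the measurement errors, which is exactly the advantage the article emphasizes. In practice the conditioning of G matters, which is why the article pays attention to the Gram matrix's properties.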


2021
Author(s): Yuhao Deng, Julan Xie, Zishu He, Jun Li

Abstract In this paper, a novel monopulse estimator is proposed to surmount the obstacles caused by an unknown polarized pattern and by differences among the dipole orientations. The polarized pattern often alters the phased array response, and it can hardly be recovered if nothing is known about the polarization parameters. The sum and difference beamforming of a conformal phased array is also affected by the differences among the dipole orientations, so the conventional monopulse estimator fails in this circumstance. The method proposed in this paper is an estimator based on the maximum likelihood methodology. In this method, the polarization parameters are treated as part of the desired signal, and the least-squares solution of the desired signal is obtained. With this solution, the likelihood function with respect to direction is derived first; the Jacobian and Hessian matrices of the likelihood function are then deduced. The boresight is taken as the initial direction value, and the estimate of the desired signal direction is obtained by Newton's method. Finally, the polarization parameters are estimated by the least-squares method using the direction estimate. The root-mean-square error (RMSE) of the angle estimation is acceptable even when prior polarization information is completely unknown, and the polarization parameters are estimated by a similar technique once azimuth and elevation are found. Our research fills a gap in monopulse estimation with arbitrary polarization and diverse dipole arrangements.
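A schematic of the Newton iteration described: starting from the boresight, update the (azimuth, elevation) estimate using the gradient and Hessian of a likelihood surface. The quadratic L below is a stand-in for the paper's actual likelihood function, and the "true" direction is invented.

```python
import numpy as np

true_dir = np.array([0.10, -0.05])             # hypothetical az/el (rad)

# L(theta) = -||theta - true_dir||^2, a stand-in concave likelihood.
def grad(theta):                               # gradient of L
    return -2.0 * (theta - true_dir)

def hess(theta):                               # Hessian of L
    return -2.0 * np.eye(2)

theta = np.zeros(2)                            # boresight as initial value
for _ in range(5):
    theta = theta - np.linalg.solve(hess(theta), grad(theta))
print(np.round(theta, 4))
```

For a quadratic surface Newton's method converges in one step; for the paper's likelihood the same update is iterated, with the least-squares polarization solution refreshed at each direction estimate.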


2010, Vol 37 (8), pp. 1071-1081
Author(s): Sheng Hu, Fujie Zhou

The relaxation modulus E(t), creep compliance D(t), and complex modulus E*(ω) are functions often used to characterize the linear viscoelastic (LVE) behavior of hot mix asphalt (HMA). Interconversions among these LVE functions are often required. To perform an interconversion, one of the key steps is to express both the source and target LVE functions in Prony series representations. To obtain the corresponding Prony series coefficients, the collocation method and the linear least squares method have often been used in the past. The problem encountered with these two methods, however, is that part of the Prony series coefficients must be assigned manually, resulting in unrealistic or negative Prony coefficients and large discrepancies between the fitted data and the original data. To address this problem, this paper developed a new algorithm incorporating the Levenberg–Marquardt method. The new algorithm has four unique features: it (1) allows all the Prony series coefficients to be freely adjustable, (2) guarantees all-positive Prony series coefficients, (3) determines all Prony series coefficients automatically and simultaneously, and (4) ensures very accurate interconversion, as the fitted curve almost completely coincides with the original curve. Furthermore, to facilitate practical applications, the new algorithm was incorporated into a stand-alone, Windows-based software package named "LVEmaster". The simplicity and accuracy of this new interconversion software were demonstrated through a series of interconversions among HMA LVE functions.
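The sketch below is not the authors' LVEmaster implementation, but it illustrates the same idea: fit a Prony series with a Levenberg-Marquardt solver while guaranteeing positive coefficients, here by optimizing their logarithms. The data, relaxation times, and starting guess are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic relaxation data: E(t) = E_inf + E1*exp(-t/1) + E2*exp(-t/20).
t = np.logspace(-2, 2, 40)
E_data = 50 + 500 * np.exp(-t / 1.0) + 200 * np.exp(-t / 20.0)

rho = np.array([1.0, 20.0])                    # fixed relaxation times

def residual(log_c):
    E_inf, E1, E2 = np.exp(log_c)              # exp() keeps all coefficients positive
    return E_inf + E1 * np.exp(-t / rho[0]) + E2 * np.exp(-t / rho[1]) - E_data

sol = least_squares(residual, x0=np.log([10.0, 100.0, 100.0]), method="lm")
print(np.round(np.exp(sol.x), 2))              # fitted E_inf, E1, E2
```

No coefficient is assigned manually and none can go negative, which are two of the four features claimed for the paper's algorithm; the log-reparameterization used here is one simple way to get that behavior from a stock LM solver.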


Geophysics, 1981, Vol 46 (11), pp. 1568-1571
Author(s): B. A. Sissons

A least‐squares method for the direct inversion of surface and subsurface gravity measurements to obtain in situ density estimates is presented. The method is applied to a set of measurements made in a tunnel through the flank of an andesitic volcano. Densities obtained are [Formula: see text] for material in the top 100 m increasing to [Formula: see text] at about 200 m depth. The average density for rocks penetrated by the tunnel is, from laboratory measurements, [Formula: see text] i.e., about 4 percent higher. The difference is ascribed to joints and voids present in situ and not sampled in the laboratory specimens.


2010, Vol 93 (6), pp. 1844-1855
Author(s): Hayam Mahmoud Lotfy, Amal Mahmoud Aboul Alamein, Maha Abdel Monem Hegazy

Abstract Simple, accurate, sensitive, and precise UV spectrophotometric, chemometric, and HPLC methods were developed for the simultaneous determination of a two-component drug mixture of ezetimibe (EZ) and simvastatin (SM) in laboratory-prepared mixtures and a combined tablet dosage form. Four spectrophotometric methods were developed, namely, ratio spectra derivative, ratio subtraction, isosbestic point, and mean centering of ratio spectra. The chemometric-assisted spectrophotometric method developed was the concentration residual augmented classical least-squares method; its prediction ability was assessed and compared to the conventional partial least-squares method. The HPLC method used an RP ZORBAX C18 column (5 μm particle size, 250 × 4.6 mm id) with isocratic elution. The mobile phase was acetonitrile-pH 3.5 phosphate buffer (40:60, v/v) at a flow rate of 1.0 mL/min, with UV detection at 230 nm. The accuracy, precision, and linearity ranges of the developed methods were determined. The developed methods were successfully applied for the determination of EZ and SM in bulk powder, laboratory-prepared mixtures, and a combined dosage form. The results obtained were compared statistically with each other and with those of a reported HPLC method; there was no significant difference between the proposed methods and the reported method regarding either accuracy or precision.
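A toy classical least-squares (CLS) illustration of the chemometric idea above: resolve a two-component mixture from overlapping spectra by regressing the mixture absorbance on the pure-component spectra. The spectra and concentrations are invented numbers, not the EZ/SM calibration data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = 50

# K: pure-component spectra (absorptivity at each wavelength, 2 drugs).
K = np.abs(rng.normal(size=(wavelengths, 2)))
c_true = np.array([0.8, 1.5])                  # true concentrations
A = K @ c_true                                 # mixture absorbance (Beer's law)

# Recover concentrations by least squares.
c_est, *_ = np.linalg.lstsq(K, A, rcond=None)
print(np.round(c_est, 3))
```

The paper's concentration-residual-augmented CLS and partial least squares both refine this basic scheme to cope with interferents and noise that plain CLS handles poorly.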


1961, Vol 26 (3, Part 1), pp. 416-420
Author(s): Roberta S. Greenwood

Abstract This shell study was designed to test and perfect methods of quantitative analysis in addition to providing the usual identification of species. Experiments were performed to determine the optimum sample size, screen dimension, and number of samples, and the practicability of rapid analysis in the field to guide the progress of excavation. Analysis of variance was used to measure up to five variables at once; differences were shown graphically, with the direction of change indicated by a least-squares regression line. The analysis of 69 excavation units revealed an overwhelming preference for mud-dwelling species, which showed no change in horizontal distribution; rapid field sorting could thus give guidance to the excavation team. Used with dry-weight analysis, field sorting would also indicate areas of richer occupation debris in time for this information to be useful. The use of applied mathematics in midden analysis adds a precise tool to the archaeological inventory. The data compiled on screen size, sample size, horizontal and vertical distributions, and the like were subjected to the analysis of variance, and where a significant difference was indicated, the statistics were fitted to a regression line by the least-squares method. By using standardized systems and quantitative analysis, the archaeologist may obtain convincing evidence to support his conclusions. These procedures would be equally applicable to the study of skeletal, artifactual, or other ecological remains, and would add authority to the theories derived from such analysis.
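An illustrative least-squares regression line of the kind the study uses to show direction of change, fitted here to invented shell-count-by-depth data rather than the actual excavation records.

```python
import numpy as np

depth = np.array([10, 20, 30, 40, 50, 60])     # excavation depth (cm), invented
count = np.array([120, 104, 95, 81, 70, 55])   # shells per sample, invented

# Least-squares line: count ~ intercept + slope * depth.
slope, intercept = np.polyfit(depth, count, 1)
print(f"count ~ {intercept:.1f} + {slope:.2f} * depth")
```

A negative slope, as here, would indicate declining shell density with depth, the kind of directional trend the study reports graphically.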

