Fast Conic Spline Data Fitting of Noise-Free Data Points

Author(s):  
Hadi Mansourifar ◽  
Azam Bastanfard


Geophysics ◽  
1973 ◽  
Vol 38 (6) ◽  
pp. 1088-1108 ◽  
Author(s):  
Joseph R. Inman ◽  
Jisoo Ryu ◽  
Stanley H. Ward

Direct interpretation of apparent resistivity curves from horizontally layered models is accomplished using generalized linear inverse theory. The method permits the resolution of the model parameters to be determined, and it indicates which data points contain relatively important information necessary to resolve the model parameters. Two models were chosen to test the inversion scheme: one has increasing resistivity with depth, and the other possesses an intermediate resistive layer. Both models were resolved with a very high degree of accuracy from noise‐free data. When noise was added to the data, the values of the parameters oscillated about a mean value. The noise had little effect on the well‐resolved parameters, but the poorly resolved parameters were in error by as much as 15 percent. The importance of each data point relative to the model was analyzed, and the effect of certain data points on specific parameters was also determined. The generalized inverse method requires that the eigenvalues and eigenvectors of the system matrix be found. A comparison of the eigenvalues indicates which parameters are well‐resolved and which are poorly resolved from a given set of data.
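The eigenvalue comparison described above can be sketched with a singular value decomposition of a sensitivity (Jacobian) matrix. Everything below is an invented illustration, not the paper's actual layered-earth Jacobian: the matrix entries, the cutoff, and the choice of which parameter is weakly sensed are all assumptions.

```python
import numpy as np

# Hypothetical sensitivity (Jacobian) matrix: rows are data points,
# columns are model parameters. The third parameter is weakly sensed.
G = np.array([
    [1.0, 0.9, 0.10],
    [0.8, 1.0, 0.10],
    [0.9, 0.8, 0.05],
    [1.0, 1.0, 0.02],
])

U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Singular values above a noise-dependent cutoff mark well-resolved
# parameter combinations; the rest are poorly resolved.
p = int(np.sum(s > 0.1 * s[0]))
Vp = Vt[:p].T

# Parameter resolution matrix: diagonal entries near 1 indicate
# well-resolved parameters, entries near 0 poorly resolved ones.
R = Vp @ Vp.T
print(np.round(np.diag(R), 3))
```

With this toy matrix the small third-column sensitivities produce a small third diagonal entry in R, mirroring the abstract's distinction between well- and poorly resolved parameters.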


Author(s):  
P. Venkataraman

Bezier functions, which are Bezier curves constrained to behave like functions, are excellent for representing smooth and continuous functions of high degree over the entire range of the independent variable. They provide excellent solutions to systems of linear, nonlinear, ordinary and partial differential equations. In this paper we examine their usefulness for data approximation as a prelude to their use in solving the inverse problem. Bezier functions and B-splines, which are related, have mostly been used in geometric modeling; there are not many examples of their use in data analysis. In this paper, organized and unorganized data are used to illustrate the effectiveness of Bezier functions for data approximation, reduction, mining, transformation, and prediction. Two criteria are used for the overall data fitting process. A simple incremental strategy identifies the order of the function using the minimum of the sum of the absolute error over all data. For a given order of the function, the least squared error over all data identifies the Bezier function through a non-iterative algebraic relation. The entire data set can then be represented by the coefficients of the Bezier function. Alternately, the data can also be reduced to a polynomial based on a parameter varying between 0 and 1. The Bezier function is global over all of the data, so that all data points, including interpolated data, have the same properties. Three important properties are explicit when using Bezier functions for data analysis: the mean of the original data and the approximated data are the same; large orders of polynomials can be used without local distortion; and the independent and dependent variables can be decoupled by the Bezier representation. The data fitting process can also filter noisy data to recover the principal data behavior.
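The non-iterative least-squares step can be sketched with a Bernstein (Bezier) basis in numpy. The sample data, the hand-picked order, and the helper name `bernstein_matrix` are illustrative assumptions, not the paper's code; the paper selects the order incrementally rather than fixing it.

```python
import numpy as np
from math import comb

def bernstein_matrix(t, n):
    # One row per data point, one column per Bernstein basis polynomial.
    return np.array([[comb(n, i) * ti**i * (1 - ti)**(n - i)
                      for i in range(n + 1)] for ti in t])

# Invented noisy samples of a smooth function, parameterized on [0, 1].
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

n = 8  # order fixed by hand here; the paper picks it incrementally
M = bernstein_matrix(t, n)
coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)  # the algebraic relation
y_fit = M @ coeffs

# The whole data set is now represented by the n + 1 coefficients.
print(coeffs.round(3))
```

Because the Bernstein basis sums to one, the constant vector lies in the fitted subspace, so the least-squares residual is orthogonal to it; this is why the mean of the approximation equals the mean of the data, as the abstract notes.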


2019 ◽  
Vol 11 (20) ◽  
pp. 2342 ◽  
Author(s):  
Yang ◽  
Luo ◽  
Huang ◽  
Wu ◽  
Sun

The time series (TS) of the normalized difference vegetation index (NDVI) has been widely used to trace the temporal and spatial variability of terrestrial vegetation. However, many factors such as atmospheric noise and radiometric correction residuals conceal the actual variation in the land surface, and thus hamper the TS information extraction. To minimize the negative effects of these noise factors, we propose a new method to produce a synthetic gap-free NDVI TS from the original contaminated observations. First, the key temporal points are identified from the NDVI time profiles using a commonly applied rule-based strategy, segmenting the TS into several adjacent segments. Then, the observed data points in each segment are fitted with a weighted double-logistic function. The proposed dynamic weight reassignment process effectively emphasizes cloud-free points and deemphasizes cloud-contaminated points. Finally, the proposed method is evaluated on more than 3,000 test points from three selected Sentinel-2 tiles, and is compared with the widely used Savitzky-Golay (S-G) and harmonic analysis of time series (HANTS) methods in qualitative and quantitative terms. The results indicate that the proposed method has a higher capability of retaining cloud-free data points and identifying outliers than the others, and can generate a gap-free NDVI time profile derived from a medium-resolution satellite sensor.
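The dynamic weight reassignment idea can be sketched as an iterative upper-envelope fit. To keep the sketch dependency-free, a weighted polynomial stands in for the paper's double-logistic function, and all data (curve shape, noise level, cloud bias, weighting constant) are synthetic assumptions.

```python
import numpy as np

# Synthetic seasonal NDVI profile; clouds depress NDVI, so contaminated
# points fall below the true curve.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 60)
clean = 0.2 + 0.5 * np.exp(-((t - 0.5) / 0.18) ** 2)
ndvi = clean + 0.01 * rng.standard_normal(t.size)
cloudy = rng.choice(t.size, size=10, replace=False)
ndvi[cloudy] -= 0.3

w = np.ones_like(t)
for _ in range(5):
    # Weighted polynomial fit as a stand-in for the weighted
    # double-logistic of the paper.
    coef = np.polyfit(t, ndvi, deg=8, w=w)
    fit = np.polyval(coef, t)
    resid = ndvi - fit
    # Reassign weights: deemphasize points well below the fit (likely
    # cloud-contaminated), keep points near or above it at full weight.
    w = np.where(resid < 0, np.exp(resid / 0.05), 1.0)

mask = np.ones(t.size, dtype=bool)
mask[cloudy] = False
print(np.mean(np.abs(fit[mask] - clean[mask])))
```

After a few iterations the depressed points carry almost no weight, so the fit tracks the cloud-free envelope rather than the contaminated mean.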


Author(s):  
Zenji Horita ◽  
Ryuzo Nishimachi ◽  
Takeshi Sano ◽  
Minoru Nemoto

Absorption correction is often required in quantitative x-ray microanalysis of thin specimens using the analytical electron microscope. For such correction, it is convenient to use the extrapolation method [1], because the thickness, density, and mass absorption coefficient are not needed. The characteristic x-ray intensities measured for the analysis are the only requirement for the absorption correction. However, to achieve the extrapolation, more than two data points at different thicknesses must be obtained for the identical composition. Thus, the method encounters difficulty in analyzing a region comparable to the beam size, or a specimen with uniform thickness. The purpose of this study is to modify the method so that extrapolation becomes feasible under such limited conditions. Applicability of the new form is examined using a standard sample, and it is then applied to quantification of phases in a Ni-Al-W ternary alloy. The earlier equation for the extrapolation method was formulated based on the facts that the magnitude of x-ray absorption increases with increasing thickness, and that the intensity of a characteristic x-ray exhibiting negligible absorption in the specimen is used as a measure of thickness.
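The extrapolation idea, that absorption vanishes at zero thickness and a negligibly absorbed line serves as the thickness measure, can be sketched as follows. All numbers (intensities, the linear absorption trend, the true ratio) are invented for illustration, not measured values or the authors' modified equation.

```python
import numpy as np

# Synthetic measurements at several unknown thicknesses.
# I_ref: intensity of a line with negligible absorption, used as the
# thickness measure; the measured ratio falls as absorption grows.
I_ref = np.array([100.0, 200.0, 300.0, 400.0])
true_ratio = 1.5          # hypothetical zero-thickness intensity ratio
absorption_slope = 4e-4   # hypothetical absorption effect per unit I_ref
ratio = true_ratio * (1.0 - absorption_slope * I_ref)

# More than two data points allow a linear extrapolation to I_ref -> 0,
# i.e. to zero thickness, where absorption vanishes.
slope, intercept = np.polyfit(I_ref, ratio, 1)
print(intercept)  # absorption-corrected ratio
```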


2008 ◽  
Author(s):  
Kristie Nemeth ◽  
Nicole Arbuckle ◽  
Andrea Snead ◽  
Drew Bowers ◽  
Christopher Burneka ◽  
...  

1997 ◽  
Vol 78 (02) ◽  
pp. 855-858 ◽  
Author(s):  
Armando Tripodi ◽  
Veena Chantarangkul ◽  
Marigrazia Clerici ◽  
Barbara Negri ◽  
Pier Mannuccio Mannucci

Summary
A key issue for the reliable use of new devices for the laboratory control of oral anticoagulant therapy with the INR is their conformity to the calibration model. In the past, their adequacy has mostly been assessed empirically, without reference to the calibration model and the use of International Reference Preparations (IRP) for thromboplastin. In this study we reviewed the requirements to be fulfilled and applied them to the calibration of a new near-patient testing device (TAS, Cardiovascular Diagnostics), which uses thromboplastin-containing test cards for determination of the INR. On each of 10 working days, citrated whole blood and plasma samples were obtained from 2 healthy subjects and 6 patients on oral anticoagulants. PT testing on whole blood and plasma was done with the TAS, with parallel testing for plasma by the manual technique with the IRP CRM 149S. Conformity to the calibration model was judged satisfactory if the following requirements were met: (i) there was a linear relationship between paired log-PTs (TAS vs CRM 149S); (ii) the regression line drawn through the patients' data points passed through those of the normals; (iii) the precision of the calibration, expressed as the CV of the slope, was <3%. A good linear relationship was observed for the calibration plots for plasma and whole blood (r = 0.98). Regression lines drawn through the patients' data points passed through those of the normals. The CVs of the slope were in both cases 2.2%, and the ISIs were 0.965 and 1.000 for whole blood and plasma, respectively. In conclusion, our study shows that near-patient testing devices can be considered reliable tools to measure the INR in patients on oral anticoagulants, and it provides guidelines for their evaluation.
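The slope-precision requirement (CV of the slope < 3%) can be sketched numerically. The paired log-PT values below are invented, and ordinary least squares stands in for the full WHO-style calibration statistics used in practice.

```python
import numpy as np

# Invented paired log-PTs: device under test vs. reference thromboplastin.
rng = np.random.default_rng(2)
log_pt_ref = np.linspace(1.0, 1.7, 16)
log_pt_dev = 0.03 + 0.98 * log_pt_ref + 0.004 * rng.standard_normal(16)

n = log_pt_ref.size
x = log_pt_ref - log_pt_ref.mean()
slope = np.sum(x * log_pt_dev) / np.sum(x * x)
intercept = log_pt_dev.mean() - slope * log_pt_ref.mean()
resid = log_pt_dev - (intercept + slope * log_pt_ref)

# Precision of the calibration: CV of the slope, required to be < 3%.
se_slope = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum(x * x))
cv_slope = 100.0 * se_slope / slope
print(round(slope, 3), round(cv_slope, 2))
```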


Author(s):  
Uppuluri Sirisha ◽  
G. Lakshme Eswari

This paper briefly introduces the Internet of Things (IoT) as an intelligent connectivity among physical objects or devices, one that is driving massive gains in fields like efficiency, quality of life, and business growth. IoT is a global network interconnecting around 46 million smart meters in the U.S. alone, with 1.1 billion data points per day [1]. The total installed base of IoT-connected devices is projected to increase to 75.44 billion globally by 2025, with accompanying growth in business productivity, government efficiency, lifestyle, etc. This paper addresses the serious concerns of effective security and privacy, needed to ensure confidentiality, integrity, authentication, and access control among the devices.


Author(s):  
Ryan Ka Yau Lai ◽  
Youngah Do

This article explores a method of creating confidence bounds for information-theoretic measures in linguistics, such as entropy, Kullback-Leibler Divergence (KLD), and mutual information. We show that a useful measure of uncertainty can be derived from simple statistical principles, namely the asymptotic distribution of the maximum likelihood estimator (MLE) and the delta method. Three case studies from phonology and corpus linguistics are used to demonstrate how to apply it and examine its robustness against common violations of its assumptions in linguistics, such as insufficient sample size and non-independence of data points.
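For the entropy case, the delta-method interval follows directly from the multinomial MLE. The counts below are invented (e.g. phoneme frequencies in a small corpus), natural logs are assumed, and the variance formula is the standard first-order delta-method expansion of the plug-in entropy.

```python
import numpy as np

# Hypothetical category counts (e.g. phoneme frequencies in a corpus).
counts = np.array([50, 30, 15, 5])
n = counts.sum()
p = counts / n

# Plug-in (MLE) entropy, in nats.
H = -np.sum(p * np.log(p))

# Delta-method variance of the plug-in entropy under multinomial sampling:
# Var(H_hat) ~ (sum_i p_i * log(p_i)**2 - H**2) / n.
var_H = (np.sum(p * np.log(p) ** 2) - H**2) / n
ci = (H - 1.96 * np.sqrt(var_H), H + 1.96 * np.sqrt(var_H))
print(H, ci)
```

The interval shrinks as 1/sqrt(n), which is why the article's caution about insufficient sample size matters: with few observations the asymptotic normality the delta method relies on may not yet hold.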


1979 ◽  
Vol 7 (1) ◽  
pp. 3-13
Author(s):  
F. C. Brenner ◽  
A. Kondo

Abstract Tread wear data are frequently fitted by a straight line having average groove depth as the ordinate and mileage as the abscissa. The authors have observed that the data points are not randomly scattered about the line but exist in runs of six or seven points above the line followed by the same number below the line. Attempts to correlate these cyclic deviations with climatic data failed. Harmonic content analysis of the data for each individual groove showed strong periodic behavior. Groove 1, a shoulder groove, had two important frequencies at 40 960 and 20 480 km (25 600 and 12 800 miles); Grooves 2 and 3, the inside grooves, had important frequencies at 10 240, 13 760, and 20 480 km (6400, 8600, and 12 800 miles), with Groove 4 being similar. A hypothesis is offered as a possible explanation for the phenomenon.
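Harmonic content analysis of this kind can be sketched with a discrete Fourier transform of the residuals about the straight-line fit. The sampling interval, period, and residual series below are synthetic, not the authors' tread wear data; only the 20 480 km period is borrowed from the abstract for illustration.

```python
import numpy as np

# Synthetic groove-depth residuals (depth minus straight-line trend),
# sampled every 1280 km with a 20 480 km periodic component plus noise.
rng = np.random.default_rng(3)
km_step = 1280
t = np.arange(64) * km_step
resid = 0.1 * np.sin(2 * np.pi * t / 20480) + 0.01 * rng.standard_normal(64)

spectrum = np.abs(np.fft.rfft(resid))
freqs = np.fft.rfftfreq(t.size, d=km_step)   # cycles per km

# Skip the zero-frequency (mean) bin when locating the dominant period.
k = np.argmax(spectrum[1:]) + 1
dominant_km = 1.0 / freqs[k]
print(dominant_km)
```

The dominant spectral peak recovers the injected period, analogous to how the authors identified the strong periodic behavior in each groove's data.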

