Time Series Control Charts for Correlated and Contaminated Data

1986 ◽  
Vol 108 (3) ◽  
pp. 219-226 ◽  
Author(s):  
B. D. Notohardjono ◽  
D. S. Ermer

This paper discusses the development of control charts for correlated and contaminated data. For illustration, the charts were applied to a set of maximum principal-stress data at two locations on a blast furnace shell. The Dynamic Data System (DDS) approach was used to model the correlated data, which contained several types of discrepancies. After the standard DDS models were found, control charts for the averages and variances of the model residuals were constructed for two data sets. For more effective analysis, two methods for calculating the control limits of both charts are given. With this approach, dynamic process changes, such as an increase in the production rate or the wearing out of the sacrificial lining, can be detected and separated from data-collection errors caused by instrument malfunctions. Furthermore, the tap-hole opening timing is identified from the DDS model parameters, helping to verify the time series model.
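The general idea of charting the residuals of a fitted time series model can be sketched as follows. This is only an illustration of the residual-chart concept, not the paper's DDS procedure: the correlated signal is approximated by an ordinary AR(2) fit, and the simulated data, subgroup size, and limit formulas are assumptions made for the example.

```python
# Illustrative sketch of residual-based control charts for autocorrelated data.
# The DDS modelling step is approximated by an ordinary AR(2) fit; the simulated
# series, subgroup size, and limit formulas are assumptions, not the paper's.
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
x = np.zeros(500)                       # stand-in for a correlated stress signal
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.normal()

resid = AutoReg(x, lags=2).fit().resid  # residuals of the fitted model

n = 5                                   # subgroup size (assumed)
m = len(resid) // n
groups = resid[: m * n].reshape(m, n)
xbar, s2 = groups.mean(axis=1), groups.var(axis=1, ddof=1)

sigma = resid.std(ddof=1)
ucl_mean, lcl_mean = 3 * sigma / np.sqrt(n), -3 * sigma / np.sqrt(n)
ucl_var = sigma**2 * chi2.ppf(0.99865, n - 1) / (n - 1)   # one-sided variance limit

print("mean-chart signals:", np.where((xbar > ucl_mean) | (xbar < lcl_mean))[0])
print("variance-chart signals:", np.where(s2 > ucl_var)[0])
```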

1986 ◽  
Vol 108 (4) ◽  
pp. 322-327 ◽  
Author(s):  
K. J. Dooley ◽  
S. G. Kapoor ◽  
M. I. Dessouky ◽  
R. E. DeVor

An integrated quality systems methodology is presented as a framework within which the concepts of process control can be used to improve quality and productivity. The process is described by stochastic time series models that capture statistically how inputs and outputs interact. Several methods for fault identification, including autocorrelation checks of the model residuals, forecasting prediction intervals, and the CUSUM chart, are compared in terms of relative performance. A helix cable manufacturing process is simulated and analyzed with the methodology; faults are identified and suggestions are made for process improvement. Through the simulation, these time series control chart methods are shown to be much more effective than conventional methods such as Shewhart control charts.
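One of the fault-identification checks named above, the forecasting prediction interval, can be illustrated with a small sketch. The ARMA order, interval level, and injected fault below are assumptions chosen for the example, not details from the paper.

```python
# Illustrative sketch (not the paper's code): one-step-ahead forecast prediction
# intervals used as a fault-identification check. The ARMA order, interval
# level, and injected fault are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.zeros(240)
for t in range(1, 240):
    y[t] = 0.5 * y[t - 1] + rng.normal()
y[225:] += 3.0                           # injected fault: a sustained level shift

history = list(y[:200])
faults = []
for t, obs in enumerate(y[200:], start=200):
    fit = ARIMA(history, order=(1, 0, 1)).fit()
    lo, hi = fit.get_forecast(1).conf_int(alpha=0.01)[0]   # 99% prediction interval
    if not (lo <= obs <= hi):
        faults.append(t)                 # observation falls outside the interval
    history.append(obs)

print("flagged observations:", faults)
```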


2015 ◽  
Vol 1 (3) ◽  
pp. 238-248
Author(s):  
Romeo Mawonik ◽  
Vinscent Nkomo

Statistical Process Control (SPC) uses statistical techniques to improve the quality of a process by reducing its variability. The main tools of SPC are control charts. The basic idea of control charts is to test the hypothesis that only common causes of variability are present against the alternative that special causes are present. Control charts are designed and evaluated under the assumption that the observations from the process are independent and identically distributed (IID) normal. However, the independence assumption is often violated in practice. Autocorrelation may be present in many processes and may have a significant effect on the properties of the control charts; thus, traditional SPC charts are inappropriate for monitoring process quality. In this study, we present methods for process control that deal with autocorrelated data, including a method based on time series ARIMA models (the Box–Jenkins methodology). We apply the typical Cumulative Sum (CUSUM) and Exponentially Weighted Moving Average (EWMA) charts as SPC techniques, together with the time series method, to determine packaging process quality.
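A minimal sketch of the residual-chart approach described above: fit an ARIMA (Box–Jenkins) model to the autocorrelated series and run CUSUM and EWMA charts on its residuals. The model order, the CUSUM reference value and decision interval, and the EWMA smoothing constant are illustrative choices, not values from the study.

```python
# Fit an ARIMA model to autocorrelated data, then run CUSUM and EWMA charts on
# the residuals. Model order, k, h, and lambda are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.6 * x[t - 1] + rng.normal()
x[300:] += 1.0                              # small sustained shift

resid = ARIMA(x, order=(1, 0, 0)).fit().resid
mu, sigma = resid[:300].mean(), resid[:300].std(ddof=1)
z = (resid - mu) / sigma

# Tabular CUSUM (k = 0.5, h = 5, in standard-deviation units)
k, h = 0.5, 5.0
cp = cm = 0.0
cusum_signal = None
for i, zi in enumerate(z):
    cp = max(0.0, cp + zi - k)
    cm = max(0.0, cm - zi - k)
    if cusum_signal is None and (cp > h or cm > h):
        cusum_signal = i

# EWMA chart (lambda = 0.2, L = 3)
lam, L = 0.2, 3.0
ewma, ewma_signal = 0.0, None
for i, zi in enumerate(z):
    ewma = lam * zi + (1 - lam) * ewma
    width = L * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
    if ewma_signal is None and abs(ewma) > width:
        ewma_signal = i

print("CUSUM signal at residual index:", cusum_signal)
print("EWMA signal at residual index:", ewma_signal)
```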


Author(s):  
SANDY D. BALKIN ◽  
DENNIS K. J. LIN

Sensitizing Rules are commonly applied to Shewhart Charts to increase their effectiveness in detecting shifts in the mean that may otherwise go unnoticed by the usual "out-of-control" signals. The purpose of this paper is to demonstrate how well these rules actually perform when the data exhibit autocorrelation, compared to uncorrelated data. Since most control chart data are collected as time series, it is of interest to examine the performance of Shewhart's X̄ Chart using data generated from typical time series models. In this paper, measurements arising from autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) processes are examined using Shewhart Control Charts in conjunction with several sensitizing rules. The results indicate that the rules work well when there are strong autocorrelative relationships, but are not as effective in recognizing small to moderate levels of correlation. We conclude with the recommendation that practitioners use a more definitive measure of autocorrelation, such as the Sample Autocorrelation Function correlogram, to detect dependency.
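The kind of experiment described above can be sketched briefly: generate AR(1) data, apply a Shewhart-style individuals chart with two common sensitizing rules, and compute the sample autocorrelation function the authors recommend. The specific rules, the AR coefficient, and the chart settings below are assumptions for the example.

```python
# Hedged illustration: a Shewhart-style individuals chart with two common
# sensitizing rules applied to AR(1) data, plus the sample ACF used to detect
# dependency. Rule choices and parameters are examples only.
import numpy as np

def ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def sensitizing_signals(x):
    mu, sigma = x.mean(), x.std(ddof=1)
    z = (x - mu) / sigma
    out = {"beyond_3sigma": np.where(np.abs(z) > 3)[0].tolist()}
    # Rule: 8 consecutive points on the same side of the centre line
    runs = []
    for i in range(len(z) - 7):
        w = z[i:i + 8]
        if np.all(w > 0) or np.all(w < 0):
            runs.append(i + 7)
    out["run_of_8"] = runs
    return out

def sample_acf(x, nlags):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(3)
x = ar1(200, phi=0.8, rng=rng)
print(sensitizing_signals(x))
print("sample ACF (lags 1-5):", np.round(sample_acf(x, 5), 2))
```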


1984 ◽  
Vol 30 (104) ◽  
pp. 66-76 ◽  
Author(s):  
Paul A. Mayewski ◽  
W. Berry Lyons ◽  
N. Ahmad ◽  
Gordon Smith ◽  
M. Pourchet

Abstract Spectral analysis of time series from a c. 17 ± 0.3 year core, calibrated for total β activity, recovered from Sentik Glacier (4908 m), Ladakh, Himalaya, yields several recognizable periodicities, including subannual, annual, and multi-annual. The time series include both chemical data (chloride, sodium, reactive iron, reactive silicate, reactive phosphate, ammonium, δD, δ18O, and pH) and physical data (density, debris and ice-band locations, and microparticles in size grades 0.50 to 12.70 μm). Source areas for the chemical species investigated and the general air-mass circulation defined from the chemical and physical time series are discussed, to demonstrate the potential of such studies for developing paleometeorological data sets from remote high-alpine glacierized sites such as the Himalaya.
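For readers unfamiliar with the method, a periodogram is the usual way such periodicities are extracted from an evenly sampled series. The sketch below is purely illustrative: the sampling rate, cycle frequencies, and noise level are invented, not taken from the core data.

```python
# Illustrative only: the kind of spectral analysis the abstract refers to,
# applied to a synthetic "ice-core chemistry" series sampled evenly in time.
# Sampling rate and periodicities are invented for the example.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(4)
fs = 12.0                                      # samples per year (assumed)
t = np.arange(0, 17, 1 / fs)                   # ~17 years of record
signal = (np.sin(2 * np.pi * 1.0 * t)          # annual cycle
          + 0.5 * np.sin(2 * np.pi * 2.0 * t)  # subannual (semi-annual) cycle
          + 0.3 * np.sin(2 * np.pi * t / 5.0)  # multi-annual cycle
          + rng.normal(scale=0.5, size=t.size))

freqs, power = periodogram(signal, fs=fs)      # frequencies in cycles per year
top = freqs[np.argsort(power)[-3:]]
print("dominant frequencies (cycles/year):", np.sort(np.round(top, 2)))
```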


Author(s):  
Cong Gao ◽  
Ping Yang ◽  
Yanping Chen ◽  
Zhongmin Wang ◽  
Yue Wang

Abstract With the large-scale deployment of wireless sensor networks, anomaly detection for sensor data is becoming increasingly important in various fields. As a vital form of sensor data, a time series has three main types of anomaly: point anomaly, pattern anomaly, and sequence anomaly. In production environments, the analysis of pattern anomalies is the most rewarding. However, the traditional cloud-computing processing model struggles with large amounts of widely distributed data. This paper presents an edge-cloud collaboration architecture for pattern anomaly detection in time series. A task migration algorithm is developed to alleviate the problem of backlogged detection tasks at edge nodes. In addition, detection tasks related to long-term and short-term correlation in the time series are allocated to the cloud and to edge nodes, respectively. A multi-dimensional feature representation scheme is devised to perform efficient dimension reduction. Two key components of the feature representation, trend identification and feature point extraction, are elaborated. Based on the feature representation, pattern anomaly detection is performed with an improved kernel density estimation method. Finally, extensive experiments are conducted on both synthetic and real-world data sets.
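A minimal sketch of kernel-density-based pattern anomaly scoring, under stated assumptions: the window length, the per-window features, and the 5% density threshold below are illustrative stand-ins for the paper's multi-dimensional feature representation and improved estimator.

```python
# A minimal sketch, not the paper's algorithm: score windowed "patterns" of a
# time series by kernel density estimation and flag low-density windows as
# pattern anomalies. Window size, features, and threshold are assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 40 * np.pi, 2000)) + rng.normal(scale=0.1, size=2000)
x[1200:1250] *= 3.0                          # injected pattern anomaly

win = 50
windows = np.lib.stride_tricks.sliding_window_view(x, win)[::win]
# Simple per-window features standing in for the paper's representation
feats = np.column_stack([windows.mean(axis=1),
                         windows.std(axis=1),
                         windows.max(axis=1) - windows.min(axis=1)])

kde = gaussian_kde(feats.T)                  # density over the feature space
density = kde(feats.T)
threshold = np.quantile(density, 0.05)       # flag the lowest-density 5%
anomalous_windows = np.where(density < threshold)[0]
print("anomalous window indices:", anomalous_windows)
```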


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1850
Author(s):  
Rashad A. R. Bantan ◽  
Farrukh Jamal ◽  
Christophe Chesneau ◽  
Mohammed Elgarhy

Unit distributions are commonly used in probability and statistics to describe useful quantities with values between 0 and 1, such as proportions, probabilities, and percentages. Some unit distributions are defined in a natural analytical manner, while others are derived by transforming an existing distribution defined on a larger domain. In this article, we introduce the unit gamma/Gompertz distribution, founded on the inverse-exponential scheme and the gamma/Gompertz distribution. The gamma/Gompertz distribution is known to be a very flexible three-parameter lifetime distribution, and we aim to transpose this flexibility to the unit interval. First, we check this aspect through the analytical behavior of the primary functions. It is shown that the probability density function can be increasing, decreasing, “increasing-decreasing”, or “decreasing-increasing”, with flexible asymmetric properties. On the other hand, the hazard rate function can be monotonically increasing, decreasing, or constant in shape. We complete the theoretical part with some propositions on stochastic ordering, moments, quantiles, and the reliability coefficient. Practically, the maximum likelihood method is used to estimate the model parameters from unit data. We present some simulation results to evaluate this method. Two applications using real data sets, one on trade shares and the other on flood levels, demonstrate the importance of the new model when compared to other unit models.
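Reading the inverse-exponential scheme as the transformation Y = exp(−X) of a lifetime variable X, a unit density arises as f_Y(y) = f_X(−ln y)/y on (0, 1). The sketch below checks this mechanically; since the gamma/Gompertz density is not available in SciPy, an ordinary gamma distribution stands in, so the numbers illustrate the construction rather than the new model itself.

```python
# Hedged sketch of the inverse-exponential idea as read here: if X is a lifetime
# variable on (0, inf), then Y = exp(-X) lives on (0, 1) with density
# f_Y(y) = f_X(-ln y) / y. The gamma/Gompertz density is not in SciPy, so a
# plain gamma distribution stands in purely to illustrate the mechanics.
import numpy as np
from scipy import stats
from scipy.integrate import quad

base = stats.gamma(a=2.0)                     # stand-in lifetime distribution

def unit_pdf(y):
    """Density of Y = exp(-X) on (0, 1)."""
    y = np.asarray(y, dtype=float)
    return base.pdf(-np.log(y)) / y

total, _ = quad(unit_pdf, 0.0, 1.0)           # should integrate to ~1
print("integral of the unit density:", round(total, 6))

# Monte Carlo check: transformed samples roughly match the analytic unit density
samples = np.exp(-base.rvs(size=100_000, random_state=0))
hist, edges = np.histogram(samples, bins=20, range=(0, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print("max |histogram - pdf|:", round(float(np.max(np.abs(hist - unit_pdf(mid)))), 3))
```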


2021 ◽  
Vol 5 (1) ◽  
pp. 10
Author(s):  
Mark Levene

A bootstrap-based hypothesis test of the goodness-of-fit for the marginal distribution of a time series is presented. Two metrics, the empirical survival Jensen–Shannon divergence (ESJS) and the Kolmogorov–Smirnov two-sample test statistic (KS2), are compared on four data sets: three stablecoin time series and a Bitcoin time series. We demonstrate that, after applying first-order differencing, all the data sets fit heavy-tailed α-stable distributions with 1 < α < 2 at the 95% confidence level. Moreover, ESJS is more powerful than KS2 on these data sets, since the widths of the derived confidence intervals for KS2 are, proportionately, much larger than those of ESJS.
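The bootstrap goodness-of-fit machinery can be sketched with the KS2 statistic alone (ESJS is omitted here). The α-stable parameters below are fixed by assumption rather than estimated, and the simulated series stands in for the first-differenced cryptocurrency data.

```python
# Sketch of a bootstrap goodness-of-fit test in the spirit of the paper, using
# the two-sample Kolmogorov-Smirnov statistic (KS2). The alpha-stable parameters
# are assumed, not estimated; the paper fits them to real first-differenced
# stablecoin and Bitcoin series and also uses ESJS, which is omitted here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha, beta, loc, scale = 1.7, 0.0, 0.0, 1.0   # assumed, not estimated
dist = stats.levy_stable(alpha, beta, loc=loc, scale=scale)

prices = np.cumsum(dist.rvs(size=501, random_state=rng))
returns = np.diff(prices)                       # first-order differencing

# Observed KS2 between the data and one large sample from the candidate model
obs = stats.ks_2samp(returns, dist.rvs(size=2000, random_state=rng)).statistic

# Bootstrap the KS2 statistic under the null (data really from the model)
boot = np.array([
    stats.ks_2samp(dist.rvs(size=returns.size, random_state=rng),
                   dist.rvs(size=2000, random_state=rng)).statistic
    for _ in range(200)
])
p_value = np.mean(boot >= obs)
print(f"KS2 = {obs:.3f}, bootstrap p-value = {p_value:.2f}")
```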


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Johnson A. Adewara ◽  
Kayode S. Adekeye ◽  
Olubisi L. Aako

In this paper, two methods of control charting are proposed to monitor a process based on the two-parameter Gompertz distribution. The proposed methods are the Gompertz Shewhart approach and the Gompertz skewness correction method. A simulation study was conducted to compare the performance of the proposed charts with that of the skewness correction approach for various sample sizes. Furthermore, real-life data on the thickness of paint on refrigerators, which are nonnormal and have the attributes of a Gompertz distribution, were used to illustrate the proposed control charts. The coverage probability (CP), control limit interval (CLI), and average run length (ARL) were used to measure the performance of the two methods. It was found that the Gompertz exact method, where the control limits are calculated from percentiles of the underlying distribution, has the highest coverage probability, while the Gompertz Shewhart approach and the Gompertz skewness correction method have the smallest CLI and ARL. Hence, the two-parameter Gompertz-based methods would detect out-of-control conditions faster for Gompertz-based X̄ charts.
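The contrast between percentile-based ("exact") limits and symmetric 3-sigma limits can be illustrated for individual Gompertz observations. The parameter values, the individuals-chart setting, and the comparison below are assumptions made for the sketch, not the authors' formulas or results.

```python
# Hedged sketch of the percentile idea behind a Gompertz-based chart for
# individual observations: compare probability limits taken from Gompertz
# quantiles with symmetric Shewhart-style 3-sigma limits. Parameter values
# and the individuals-chart setting are illustrative, not from the paper.
import numpy as np
from scipy import stats

c, scale = 1.5, 2.0                          # assumed Gompertz parameters
dist = stats.gompertz(c, scale=scale)

# Probability ("exact") limits matching the usual 0.27% false-alarm rate
lcl_pct, ucl_pct = dist.ppf([0.00135, 0.99865])

# Shewhart-style limits that ignore the skewness of the distribution
mu, sigma = dist.mean(), dist.std()
lcl_3s, ucl_3s = max(mu - 3 * sigma, 0.0), mu + 3 * sigma

x = dist.rvs(size=100_000, random_state=0)   # in-control data
fa_pct = np.mean((x < lcl_pct) | (x > ucl_pct))
fa_3s = np.mean((x < lcl_3s) | (x > ucl_3s))
print(f"percentile limits: ({lcl_pct:.3f}, {ucl_pct:.3f}), false-alarm rate {fa_pct:.4f}")
print(f"3-sigma limits:    ({lcl_3s:.3f}, {ucl_3s:.3f}), false-alarm rate {fa_3s:.4f}")
```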


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for the analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic that we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]

