A Review and Comparison of Changepoint Detection Techniques for Climate Data

2007 ◽  
Vol 46 (6) ◽  
pp. 900-915 ◽  
Author(s):  
Jaxk Reeves ◽  
Jien Chen ◽  
Xiaolan L. Wang ◽  
Robert Lund ◽  
Qi Qi Lu

Abstract This review article enumerates, categorizes, and compares many of the methods that have been proposed to detect undocumented changepoints in climate data series. The methods examined include the standard normal homogeneity (SNH) test, Wilcoxon’s nonparametric test, two-phase regression (TPR) procedures, inhomogeneity tests, information criteria procedures, and various variants thereof. All of these methods have been proposed in the climate literature to detect undocumented changepoints, but heretofore there has been little formal comparison of the techniques on either real or simulated climate series. This study seeks to unify the topic, showing clearly the fundamental differences among the assumptions made by each procedure and providing guidelines for which procedures work best in different situations. It is shown that the common trend TPR and Sawa’s Bayes criteria procedures seem optimal for most climate time series, whereas the SNH procedure and its nonparametric variant are probably best when trend and periodic effects can be diminished by using homogeneous reference series. Two applications to annual mean temperature series are given. Directions for future research are discussed.
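Among the methods compared, the SNH test reduces to a compact statistic: standardize the series, then scan every candidate split point for the one maximizing the weighted sum of squared segment means. A minimal sketch, not the authors' code (the function name and illustrative series are ours):

```python
import numpy as np

def snht_statistic(x):
    """Standard normal homogeneity (SNH) test statistic.

    For the standardized series z, T(k) = k*zbar1^2 + (n-k)*zbar2^2,
    where zbar1 and zbar2 are the means of the first k and last n-k
    standardized values. The k maximizing T(k) is the most likely
    changepoint; a large max T(k) suggests an undocumented mean shift.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    t = np.empty(n - 1)
    for k in range(1, n):
        z1 = z[:k].mean()
        z2 = z[k:].mean()
        t[k - 1] = k * z1**2 + (n - k) * z2**2
    k_hat = int(np.argmax(t)) + 1  # shift estimated after position k_hat
    return t.max(), k_hat
```

In practice the maximum of T(k) is compared against critical values tabulated for the series length before a changepoint is declared.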

2008 ◽  
Vol 25 (3) ◽  
pp. 368-384 ◽  
Author(s):  
Xiaolan L. Wang

Abstract In this study, a penalized maximal F test (PMFT) is proposed for detecting undocumented mean shifts that are not accompanied by any sudden change in the linear trend of time series. PMFT aims to even out the uneven distribution of false alarm rate and detection power of the corresponding unpenalized maximal F test that is based on a common-trend two-phase regression model (TPR3). The performance of PMFT is compared with that of TPR3 using Monte Carlo simulations and real climate data series. It is shown that, due to the effect of unequal sample sizes, the false alarm rate of TPR3 has a W-shaped distribution, with much higher than specified values for points near the ends of the series and lower values for points between either of the ends and the middle of the series. Consequently, for a mean shift of certain magnitude, TPR3 would detect it with a lower-than-specified level of confidence and hence more easily when it occurs near the ends of the series than somewhere between either of the ends and the middle of the series; it would mistakenly declare many more changepoints near the ends of a homogeneous series. These undesirable features of TPR3 are diminished in PMFT by using an empirical penalty function to take into account the relative position of each point being tested. As a result, PMFT has a notably higher power of detection; its false alarm rate and effective level of confidence are very close to the nominal level, basically evenly distributed across all possible candidate changepoints. The improvement in hit rate can be more than 10% for detecting small shifts (Δ ≤ σ, where σ is the noise standard deviation).
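The unpenalized maximal F test that PMFT modifies compares, at each candidate changepoint, a common-trend regression model with and without an intercept shift. A rough numpy sketch of that baseline statistic; the empirical penalty function that defines PMFT itself is tabulated in the paper and not reproduced here, and the function name and minimum-segment guard are ours:

```python
import numpy as np

def max_f_common_trend(x, min_seg=5):
    """Unpenalized maximal F statistic under a common-trend two-phase
    regression model: both segments share one slope, and only the
    intercept may shift at the candidate changepoint k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(1, n + 1, dtype=float)
    # Null model: a single intercept plus a linear trend.
    X0 = np.column_stack([np.ones(n), t])
    sse0 = np.sum((x - X0 @ np.linalg.lstsq(X0, x, rcond=None)[0]) ** 2)
    best_f, best_k = -np.inf, None
    for k in range(min_seg, n - min_seg + 1):
        d = (t > k).astype(float)  # intercept-shift indicator after k
        X1 = np.column_stack([np.ones(n), t, d])
        sse1 = np.sum((x - X1 @ np.linalg.lstsq(X1, x, rcond=None)[0]) ** 2)
        f = (sse0 - sse1) / (sse1 / (n - 3))
        if f > best_f:
            best_f, best_k = f, k
    return best_f, best_k
```

Because the same noise realisations enter every candidate fit, the maximum of F(k) is strongly position-dependent, which is exactly the uneven false-alarm distribution the penalty in PMFT is designed to flatten.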


2007 ◽  
Vol 46 (6) ◽  
pp. 916-931 ◽  
Author(s):  
Xiaolan L. Wang ◽  
Qiuzi H. Wen ◽  
Yuehua Wu

Abstract In this paper, a penalized maximal t test (PMT) is proposed for detecting undocumented mean shifts in climate data series. PMT takes the relative position of each candidate changepoint into account to diminish the effect of unequal sample sizes on the power of detection. Monte Carlo simulation studies are conducted to evaluate the performance of PMT in comparison with the most widely used method, the standard normal homogeneity test (SNHT). An application of the two methods to atmospheric pressure series recorded at a Canadian site is also presented. It is shown that the false-alarm rate of PMT is very close to the specified level of significance and is evenly distributed across all candidate changepoints, whereas that of SNHT can be up to 10 times the specified level for points near the ends of series and much lower for the middle points. In comparison with SNHT, therefore, PMT has higher power for detecting all changepoints that are not too close to the ends of series and lower power for detecting changepoints that are near the ends of series. On average, however, PMT has significantly higher power of detection. The smaller the shift magnitude Δ is relative to the noise standard deviation σ, the greater is the improvement of PMT over SNHT. The improvement in hit rate can be as much as 14%–25% for detecting small shifts (Δ < σ) regardless of time series length, and up to 5% for detecting medium shifts (Δ = σ–1.5σ) in time series of length N < 100. For all detectable shift sizes, the largest improvement is always obtained when N < 100, which is of great practical importance because most annual climate data series are of length N < 100.
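Stripped of its penalty, the core of PMT is a maximal two-sample t statistic over candidate changepoints with a pooled noise estimate. A sketch of that unpenalized baseline; the empirical penalty factor P(n, k), which gives PMT its evenly distributed false-alarm rate, comes from the paper's tables and is not reproduced here:

```python
import numpy as np

def max_t_statistic(x, min_seg=2):
    """Unpenalized maximal t statistic for a single undocumented mean
    shift. PMT would multiply each t(k) by an empirical penalty factor
    P(n, k) before taking the maximum."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_t, best_k = -np.inf, None
    for k in range(min_seg, n - min_seg + 1):
        x1, x2 = x[:k], x[k:]
        # Pooled estimate of the noise variance from both segments.
        s2 = (np.sum((x1 - x1.mean()) ** 2)
              + np.sum((x2 - x2.mean()) ** 2)) / (n - 2)
        t = abs(x1.mean() - x2.mean()) / np.sqrt(s2 * (1.0 / k + 1.0 / (n - k)))
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k
```

The factor 1/k + 1/(n-k) in the denominator is what makes the unpenalized statistic favour changepoints near the series ends, motivating the penalty.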


2020 ◽  
Vol 14 ◽  
Author(s):  
Meghna Dhalaria ◽  
Ekta Gandotra

Purpose: This paper covers the basics of Android malware, its evolution, and the tools and techniques for malware analysis. Its main aim is to review the literature on Android malware detection using machine learning and deep learning and to identify the research gaps. It distills the insights obtained from the literature and outlines future research directions that could help researchers develop robust and accurate techniques for classifying Android malware. Design/Methodology/Approach: This paper reviews the basics of Android malware, its evolution timeline, and detection techniques. It covers the tools and techniques for analyzing Android malware statically and dynamically to extract features, and for classifying those features using machine learning and deep learning algorithms. Findings: The number of Android users is growing rapidly because of the popularity of Android devices, and the exponential growth of Android malware exposes those users to increasing risk. Ongoing research aims to overcome the constraints of earlier detection approaches: because evolving malware is complex and sophisticated, earlier signature-based and machine-learning-based approaches cannot identify it in a timely and accurate manner. The review finds that earlier techniques suffer from several limitations, including long detection times, high false-positive and false-negative rates, low accuracy in detecting sophisticated malware, and limited flexibility. Originality/value: This paper provides a systematic and comprehensive review of the tools and techniques employed for the analysis, classification, and identification of malicious Android applications.
It includes the timeline of Android malware evolution and the tools and techniques for analyzing malware statically and dynamically to extract features, which are then used for detection and classification with machine learning and deep learning algorithms. On the basis of the detailed literature review, various research gaps are listed. The paper also offers future research directions and insights that could help researchers devise innovative and robust techniques for detecting and classifying Android malware.
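The static-analysis-plus-classifier pipeline the review describes can be caricatured in a few lines: extract a binary feature vector (here, requested permissions) from each app, then train a classifier on labeled samples. Everything below is illustrative: the permission vocabulary, the toy apps, and the nearest-centroid stand-in for the surveyed machine learning models are all invented for the sketch.

```python
import numpy as np

# Hypothetical permission vocabulary. Real pipelines extract hundreds of
# static features (permissions, API calls, intents) from the manifest
# and bytecode of an APK.
PERMISSIONS = ["INTERNET", "SEND_SMS", "READ_CONTACTS",
               "RECEIVE_BOOT_COMPLETED", "ACCESS_FINE_LOCATION"]

def to_vector(requested):
    """Binary feature vector: 1 if the app requests the permission."""
    return np.array([1.0 if p in requested else 0.0 for p in PERMISSIONS])

class NearestCentroid:
    """Minimal stand-in for the ML classifiers surveyed in the paper."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # One mean feature vector per class label.
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict(self, X):
        X = np.asarray(X)
        labels = list(self.centroids_)
        # Distance from every sample to every class centroid.
        d = np.stack([np.linalg.norm(X - self.centroids_[c], axis=1)
                      for c in labels])
        return np.array([labels[i] for i in d.argmin(axis=0)])
```

Trained on a handful of labeled apps, the classifier assigns a new app to whichever class profile its permission vector most resembles; the surveyed deep learning approaches replace this distance rule with learned representations.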


2021 ◽  
pp. 1-21
Author(s):  
Shahela Saif ◽  
Samabia Tehseen

Deep learning has been used in computer vision to accomplish many tasks that were previously considered too complex or resource-intensive to be feasible. One remarkable application is the creation of deepfakes. Deepfake images change or manipulate a person's face to give it a different expression or identity by using generative models. Applied to videos, deepfakes can alter facial expressions so that a person appears to deliver speech they never actually gave. Deepfake videos pose a serious threat to legal, political, and social systems because they can destroy a person's integrity. Research solutions are being designed to detect such deepfake content in order to preserve privacy and combat fake news. This study details the existing deepfake video creation techniques and provides an overview of the publicly available deepfake datasets. More importantly, we provide an overview of the deepfake detection methods, along with a discussion of the issues, challenges, and future research directions. The study aims to present an all-inclusive overview of deepfakes by providing insights into deepfake creation techniques and the latest detection methods, facilitating the development of robust and effective deepfake detection solutions.


2017 ◽  
Author(s):  
Imke Hans ◽  
Martin Burgdorf ◽  
Viju O. John ◽  
Jonathan Mittaz ◽  
Stefan A. Buehler

Abstract. The microwave humidity sounders Special Sensor Microwave Water Vapour Profiler (SSMT-2), Advanced Microwave Sounding Unit-B (AMSU-B), and Microwave Humidity Sounder (MHS) have to date provided data records spanning 25 years. So far, these records lack the uncertainty information essential for constructing consistent long-term time series. In this study, we assess the quality of the recorded data with respect to the uncertainty caused by noise. We calculate the noise on the raw calibration counts from the deep space view (DSV) of the instrument, and the noise equivalent differential temperature (NEΔT) as a measure of radiometer sensitivity. For this purpose, we use the Allan deviation, which is not biased by an underlying varying mean of the data and which has only recently been suggested for application in atmospheric remote sensing. Moreover, we use the bias function related to the Allan deviation to infer the underlying spectrum of the noise, and as examples we investigate the in-flight noise spectrum of some instruments. To assess the noise evolution in time, we provide a descriptive and graphical overview of the calculated NEΔT over the life span of each instrument and channel. This overview can serve as easily accessible information for users interested in the noise performance of a specific instrument, channel, and time. Within the time evolution of the noise, we identify periods of instrumental degradation, which manifest themselves as an increasing NEΔT, and periods of erratic behaviour, in which sudden increases of NEΔT interrupt the otherwise smooth evolution of the noise. From this assessment, and after excluding the aforementioned periods, we present a chart of the available data records with NEΔT < K. Owing to the overlapping life spans of the instruments, these reduced data records still cover the period since 1994 without gaps and may therefore serve as a first step towards constructing long time series.
The count-noise estimation method used in this study will also provide input values for the uncertainty propagation in the generation of a new set of Fundamental Climate Data Records (FCDRs) currently being produced in the project Fidelity and Uncertainty in Climate data records from Earth Observation (FIDUCEO).
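The Allan deviation differs from the ordinary standard deviation in exactly the way the abstract highlights: it is built from first differences of block averages, so a slowly drifting mean largely cancels instead of inflating the noise estimate. A minimal sketch (the function name and toy series are ours, not FIDUCEO code):

```python
import numpy as np

def allan_deviation(counts, m=1):
    """Allan deviation of a count series at averaging length m.

    Averages the series in non-overlapping blocks of m samples and takes
    half the mean squared difference of successive block means. Because
    only differences of adjacent averages enter, a slowly varying mean
    (e.g. calibration-target drift) largely cancels out.
    """
    x = np.asarray(counts, dtype=float)
    n_blocks = len(x) // m
    means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(means) ** 2)
    return np.sqrt(avar)
```

For pure white noise the Allan deviation at m = 1 reproduces the noise standard deviation, while for a drifting series it stays near the true noise level where the plain standard deviation is dominated by the drift.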


2018 ◽  
Vol 10 (1) ◽  
pp. 181-196 ◽  
Author(s):  
Mehdi Bahrami ◽  
Samira Bazrkar ◽  
Abdol Rassoul Zarei

Abstract Drought is a pressing natural phenomenon that occurs frequently in arid and semi-arid regions and causes enormous damage to agriculture, the economy, and the environment. In this study, the seasonal Standardized Precipitation Index (SPI) and time series models were employed to model and predict seasonal drought using climate data from 38 Iranian synoptic stations over 1967–2014. The ITSM (Interactive Time Series Modeling) statistical software was used for the modeling and prediction. According to the calculated seasonal SPI, drought severity classes 4 and 3 occurred most frequently within the study area, while classes 6 and 7 occurred least frequently. Results indicated that the best-fitted models were Moving-Average or MA(5) Innovations and MA(5) Hannan-Rissanen, at 60.53% and 15.79% of stations, respectively. The prediction results likewise indicated that drought class 4 was the most abundant class over the study area and drought class 7 the least frequent. According to the trend analysis, without regard to statistical significance, the observed seasonal SPI series (1967–2014) had a negative trend at 84.21% of the synoptic stations; this percentage rises to 86.84% when the observed and predicted series are considered together (1967–2019).
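The SPI computation itself is compact: aggregate precipitation to seasonal totals, fit a distribution, and map the fitted probabilities to standard-normal units. The sketch below substitutes a plain z-score for the usual gamma-distribution fit (which would need scipy.stats), so it agrees with the true SPI only when totals are roughly normal; the function name and this simplification are ours:

```python
import numpy as np

def spi_zscore(precip, season_length=3):
    """Simplified seasonal SPI via standardized anomalies.

    Sums precipitation over non-overlapping seasons of `season_length`
    time steps and standardizes the totals. The operational SPI instead
    fits a gamma distribution to the totals and maps the fitted CDF
    through the inverse normal; the z-score here is a rough stand-in.
    Values <= -1 indicate drought of increasing severity.
    """
    p = np.asarray(precip, dtype=float)
    n = len(p) // season_length
    totals = p[:n * season_length].reshape(n, season_length).sum(axis=1)
    return (totals - totals.mean()) / totals.std(ddof=1)
```

With monthly input and season_length=3, each output value is one seasonal SPI, which is the unit the drought-severity classes in the study are defined on.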


2013 ◽  
pp. 1111-1123
Author(s):  
Moi Hoon Yap ◽  
Hassan Ugail

The application of computer vision to face processing remains an important research field. The aim of this chapter is to provide an up-to-date review of the research efforts of computer vision scientists in facial image processing, especially in the entertainment industry, surveillance, and other human-computer interaction applications. More specifically, this chapter reviews and demonstrates the techniques of visible facial analysis, regardless of specific application area. First, the chapter makes a thorough survey and comparison of face detection techniques and provides demonstrations of the effect of computer vision algorithms and colour segmentation on face images. It then reviews facial expression recognition from the psychological perspective (the Facial Action Coding System, FACS) and from the computer animation perspective (the MPEG-4 standard). The chapter also discusses two popular facial feature detection techniques, Gabor-feature-based boosted classifiers and Active Appearance Models, and demonstrates their performance on our in-house dataset. Finally, the chapter concludes with the future challenges and research directions of facial image processing.
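Gabor-feature classifiers of the kind surveyed here start from a bank of oriented Gabor kernels, each a Gaussian envelope modulating a sinusoidal carrier. A self-contained numpy sketch of the kernel and a per-orientation response feature (the parameter defaults are illustrative, not taken from the chapter):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a
    cosine carrier of wavelength `lambd`, oriented at angle `theta`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd)
    return envelope * carrier

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute filter response per orientation. Production code
    would convolve via FFT; plain valid-mode correlation keeps the
    sketch short."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        windows = sliding_window_view(img, k.shape)
        resp = (windows * k).sum(axis=(-2, -1))
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

A boosted classifier in the surveyed systems would then select the most discriminative of these oriented responses (computed over many positions and scales) as weak learners.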


2019 ◽  
Vol 9 (2-3) ◽  
pp. 99-123 ◽  
Author(s):  
Lisa van der Werff ◽  
Alison Legood ◽  
Finian Buckley ◽  
Antoinette Weibel ◽  
David de Cremer

Theorizing about trust has focused predominantly on cognitive trust cues such as trustworthiness, portraying the trustor as a relatively passive observer reacting to the attributes of the other party. Using self-determination and control theories of motivation, we propose a model of trust motivation that explores the intraindividual processes involved in the volitional aspects of trust decision-making implied by the definition of trust as a willingness to be vulnerable. We distinguish between intrinsic and extrinsic drivers of trust and propose a two-phase model of trust goal setting and trust regulation. Our model offers a dynamic view of the trusting process and a framework for understanding how trust cognition, affect and behavior interact over time. Furthermore, we discuss how trust goals may be altered or abandoned via a feedback loop during the trust regulation process. We conclude with a discussion of potential implications for existing theory and future research.


Polymers ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1337 ◽  
Author(s):  
Shouzheng Sun ◽  
Zhenyu Han ◽  
Hongya Fu ◽  
Hongyu Jin ◽  
Jaspreet Singh Dhupia ◽  
...  

Automated fiber placement (AFP) is an advanced manufacturing method for composites that is especially suitable for large-scale composite components. However, some manufacturing defects inevitably appear during the AFP process and can affect the mechanical properties of the composites. This work reviews recent studies of manufacturing defects and their online detection techniques in the AFP process. The main content focuses on position defects in conventional and variable-stiffness laminates, the relationship between defects and mechanical properties, defect control methods, modeling methods for void defects, and online detection techniques. The contributions and limitations of the current studies are then discussed. Finally, prospects for future research on theoretical and practical engineering applications are pointed out.

