Feature Grouping–based Trajectory Outlier Detection over Distributed Streams

2021 ◽  
Vol 12 (2) ◽  
pp. 1-23
Author(s):  
Jiali Mao ◽  
Jiaye Liu ◽  
Cheqing Jin ◽  
Aoying Zhou

Owing to the wide deployment of GPS-enabled devices, tremendous amounts of trajectory data are generated in a distributed streaming manner. This opens up new opportunities to track and analyze the moving behaviors of entities. In this work, we focus on outlier detection over distributed trajectory streams, where outliers refer to a few entities whose motion behaviors differ significantly from those of their local neighbors. In view of the skewed distribution and evolving nature of trajectory data, and the requirement of on-the-fly detection over distributed streams, we first design a highly efficient outlier detection solution. It consists of identifying abnormal trajectory fragments and exceptional fragment clusters at the remote sites, and then detecting abnormal evolving objects at the coordinator site. Further, given that detection accuracy can be degraded by inappropriate proximity thresholds or by trajectories lacking sufficient neighbors at the remote sites, we extract region-specific proximity thresholds and the spatial context relationship of each region from historical data to improve precision. Built upon this is an improved version consisting of an off-line modeling phase and an on-line detection phase. During the on-line phase, the proximity thresholds derived from historical trajectories in the off-line phase assist in detecting abnormal trajectory fragments and exceptional fragment clusters at the remote sites. Additionally, at the coordinator site, the detection results of some remote sites can be refined by incorporating those of other remote sites with a neighborhood relationship. Extensive experimental results on real data demonstrate that our proposed methods achieve high detection validity, low communication cost, and linear scalability for online outlier identification over distributed trajectory streams.
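At a high level, the remote-site step amounts to distance-based neighborhood counting over trajectory fragments, with the coordinator merging the site-level reports. The sketch below only illustrates that division of labor; it is not the paper's algorithm, and the distance function, thresholds, and report format are all assumptions.

```python
# Illustrative sketch (not the paper's method): a remote site flags a
# trajectory fragment as abnormal when it has fewer than min_neighbors
# fragments within a proximity threshold; a coordinator merges site reports.

def fragment_distance(f1, f2):
    # mean pointwise Euclidean distance between two equal-length fragments
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(f1, f2)) / len(f1)

def detect_local_outliers(fragments, threshold, min_neighbors):
    # remote-site step: count neighbors within the proximity threshold
    outliers = []
    for i, f in enumerate(fragments):
        neighbors = sum(1 for j, g in enumerate(fragments)
                        if j != i and fragment_distance(f, g) <= threshold)
        if neighbors < min_neighbors:
            outliers.append(i)
    return outliers

def coordinator_merge(site_reports):
    # coordinator step: union of (site_id, fragment_id) pairs from all sites
    return sorted({(s, i) for s, ids in site_reports.items() for i in ids})
```

In the improved version, `threshold` would not be a single global constant but a per-region value learned off-line from historical trajectories.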

Author(s):  
Hansi Jiang ◽  
Haoyu Wang ◽  
Wenhao Hu ◽  
Deovrat Kakde ◽  
Arin Chaudhuri

Support vector data description (SVDD) is a machine learning technique used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around the data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose a fast incremental learning algorithm for SVDD (FISVDD) that uses the Gaussian kernel. The algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in the higher-dimensional feature space induced by the Gaussian kernel. Each iteration involves only the existing support vectors and the new data point. Moreover, the algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers αi are automatically selected and determined in each iteration. The complexity of each iteration is only O(k²), where k is the number of support vectors. Experimental results on several real data sets indicate that FISVDD achieves significant gains in efficiency with almost no loss in outlier detection accuracy or objective function value.
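The central observation can be made concrete: for the Gaussian kernel, K(z, z) = 1 for every point, so the squared feature-space distance from a point to the center a = Σᵢ αᵢ φ(xᵢ) reduces to 1 − 2 Σᵢ αᵢ K(z, xᵢ) plus a constant, and boundary support vectors share one common radius. The sketch below scores new points against a fixed set of support vectors and multipliers; it is not the incremental update itself, and the example σ and data are assumptions.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def svdd_distance_sq(z, support_vectors, alphas, sigma=1.0):
    # ||phi(z) - a||^2 with a = sum_i alpha_i phi(x_i); for the Gaussian
    # kernel K(z, z) = 1, so only the cross terms depend on z.
    cross = sum(a_i * gaussian_kernel(z, x_i, sigma)
                for a_i, x_i in zip(alphas, support_vectors))
    aa = sum(a_i * a_j * gaussian_kernel(x_i, x_j, sigma)
             for a_i, x_i in zip(alphas, support_vectors)
             for a_j, x_j in zip(alphas, support_vectors))
    return 1.0 - 2.0 * cross + aa

def is_outlier(z, support_vectors, alphas, sigma=1.0):
    # all boundary support vectors lie at the same radius, so any one of
    # them gives the squared-radius threshold
    r_sq = svdd_distance_sq(support_vectors[0], support_vectors, alphas, sigma)
    return svdd_distance_sq(z, support_vectors, alphas, sigma) > r_sq + 1e-12
```

Because the score needs only kernel evaluations against the k support vectors, each membership test is O(k), which is consistent with iterations that touch only the support vectors and the new point.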


Author(s):  
Courtney Deine-Jones

As more libraries offer patron access to the Internet and other on-line services, they must consider the needs of patrons with disabilities who will be using their Internet links either from the library or from remote sites. In planning and implementing technological improvements to optimize access for all patrons, librarians and information specialists must take into account questions of both physical and intellectual access to electronic information. This paper addresses these issues from a pragmatic perspective, reviewing available options and suggesting strategies for improving access for people with various disabilities.


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate detection is based on the assumption of an echo statistical model. Against a background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Therefore, computer vision techniques have been widely explored to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for coastal defense radar, with its low resolution, detection performance remains unsatisfactory. To this end, we herein propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high detection accuracy; (2) the Faster R-CNN is modified according to the sparsity and small target size that characterize the data set; and (3) soft non-maximum suppression is exploited to eliminate overlapping detection boxes. Furthermore, detailed comparative experiments on a real coastal defense radar data set are performed. The mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
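Step (3), soft non-maximum suppression, decays the scores of boxes that overlap the current best detection instead of discarding them outright, which helps when targets are small and close together. The sketch below shows the Gaussian-decay variant of soft-NMS; the abstract does not specify which variant or parameters the authors used, so `sigma` and `score_thresh` here are illustrative assumptions.

```python
import math

def iou(b1, b2):
    # boxes as (x1, y1, x2, y2); intersection-over-union
    xa, ya = max(b1[0], b2[0]), max(b1[1], b2[1])
    xb, yb = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter) if inter else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.3):
    # Gaussian soft-NMS: decay overlapping scores instead of hard removal
    boxes, scores = list(boxes), list(scores)
    kept = []
    while boxes:
        m = max(range(len(scores)), key=scores.__getitem__)
        best_box, best_score = boxes.pop(m), scores.pop(m)
        kept.append((best_box, best_score))
        scores = [s * math.exp(-iou(best_box, b) ** 2 / sigma)
                  for b, s in zip(boxes, scores)]
        # drop boxes whose decayed score fell below the threshold
        boxes = [b for b, s in zip(boxes, scores) if s >= score_thresh]
        scores = [s for s in scores if s >= score_thresh]
    return kept
```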


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 238
Author(s):  
Jakub Šalplachta ◽  
Tomáš Zikmund ◽  
Marek Zemek ◽  
Adam Břínek ◽  
Yoshihiro Takeda ◽  
...  

In this article, we introduce a new ring artifact reduction procedure that combines several ideas from existing methods into one comprehensive and robust approach, with the goal of overcoming their individual weaknesses and limitations. The procedure differentiates two types of ring artifacts according to their cause and character in computed tomography (CT) data. Each type is then addressed separately in the sinogram domain. Novel iterative schemes based on relative total variation (RTV) are integrated to detect the artifacts, and the correction process uses image inpainting and intensity-deviation smoothing. The procedure was implemented for lab-based X-ray nano-CT with detection systems based on charge-coupled device (CCD) and scientific complementary metal–oxide–semiconductor (sCMOS) technologies. It was then tested and optimized on simulated data and on real CT data of selected samples with different compositions. The performance of the procedure was quantitatively evaluated in terms of artifact detection accuracy, comparison with existing methods, and the ability to preserve spatial resolution. The results show high efficiency of ring removal and preservation of the original sample structure.
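The reason ring artifacts are handled in the sinogram domain is that a miscalibrated detector element produces a constant offset in one sinogram column, which reconstructs into a ring. The sketch below is a classic stripe-removal baseline of that kind, not the article's RTV-based detection or inpainting pipeline; the moving-average smoothing and window size are assumptions.

```python
# Baseline illustration (not the article's procedure): rings appear as
# vertical stripes in the sinogram, i.e. constant per-column offsets, so a
# simple correction subtracts each column's deviation from a smoothed
# column-mean profile.

def remove_stripes(sinogram, window=1):
    rows, cols = len(sinogram), len(sinogram[0])
    # mean of each detector column across all projection angles
    col_mean = [sum(row[j] for row in sinogram) / rows for j in range(cols)]
    # smooth the column-mean profile with a moving average
    smoothed = []
    for j in range(cols):
        lo, hi = max(0, j - window), min(cols, j + window + 1)
        smoothed.append(sum(col_mean[lo:hi]) / (hi - lo))
    # subtract the stripe component (raw mean minus smooth trend)
    return [[row[j] - (col_mean[j] - smoothed[j]) for j in range(cols)]
            for row in sinogram]
```

Methods like the article's improve on this kind of baseline by detecting which columns are actually corrupted (here via RTV) before correcting, so that genuine sample structure is not smoothed away.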


2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contains outliers with varied and unknown characteristics. Fully synthetic data usually consists of outliers and regular instances with clear characteristics and thus, in principle, allows for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of achieving good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe this generic process and three instantiations of it that generate outliers with specific characteristics, such as local outliers. To validate the process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of the data reconstructed in this way. Besides showcasing the workflow, this confirms the usefulness of the proposed process; in particular, it yields regular instances close to those in the real data. Summing up, we propose and validate a new and practical process for benchmarking unsupervised outlier detection.
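The core idea, fitting a model of the regular instances to real data and then planting outliers with known characteristics, can be shown in a minimal one-dimensional form. This sketch is only an illustration of the concept under a Gaussian assumption; the authors' instantiations (e.g. for local outliers) are more elaborate, and the `shift` parameter here is an assumption.

```python
import random

# Illustrative sketch of the generic idea (not the authors' generators):
# regular instances are resampled from a model fitted to real data, while
# outliers are placed deliberately outside the fitted regular region.

def fit_gaussian(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    return mean, var ** 0.5

def generate_benchmark(real_data, n_regular, n_outliers, shift=5.0, seed=0):
    rng = random.Random(seed)
    mean, std = fit_gaussian(real_data)
    # reconstructed regular instances: samples from the fitted model
    regular = [rng.gauss(mean, std) for _ in range(n_regular)]
    # global outliers: samples shifted far outside the regular region,
    # with ground-truth labels known by construction
    outliers = [mean + shift * std + abs(rng.gauss(0.0, std))
                for _ in range(n_outliers)]
    labels = [0] * n_regular + [1] * n_outliers
    return regular + outliers, labels
```

Because the labels are known by construction, standard supervised metrics (precision, ROC AUC) can be computed for any unsupervised detector run on the generated data.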


2003 ◽  
Vol 75 (14) ◽  
pp. 3596-3605 ◽  
Author(s):  
Yufeng Shen ◽  
Ronald J. Moore ◽  
Rui Zhao ◽  
Josip Blonder ◽  
Deanna L. Auberry ◽  
...  

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Laura Millán-Roures ◽  
Irene Epifanio ◽  
Vicente Martínez

A functional data analysis (FDA) based methodology for detecting anomalous flows in urban water networks is introduced. Primary hydraulic variables are recorded in real time by telecontrol systems, so they are functional data (FD). In the first stage, the data are validated (false data are detected) and reconstructed, since the data may be false, missing, or noisy. FDA tools are used, such as tolerance bands for FD and smoothing for dense and sparse FD. In the second stage, functional outlier detection tools are applied in two phases. In Phase I, the data are cleared of anomalies to ensure that they are representative of the in-control system. The objective of Phase II is system monitoring. A new functional outlier detection method based on archetypal analysis is also proposed. The methodology is applied and illustrated with real data. A simulation study is also carried out to assess the performance of the outlier detection techniques, including our proposal. The results are very promising.
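The Phase I / Phase II split can be illustrated with the simplest functional tool mentioned above, a pointwise tolerance band: Phase I builds the band from in-control curves, and Phase II flags any new curve that leaves it. This is only a minimal sketch of that idea, not the archetypal-analysis method the authors propose, and the mean ± k·sd band form is an assumption.

```python
def pointwise_band(curves, k=3.0):
    # Phase I idea: build a mean +/- k*sd tolerance band at each time point
    # from curves representative of the in-control system
    n, m = len(curves), len(curves[0])
    lower, upper = [], []
    for t in range(m):
        vals = [c[t] for c in curves]
        mu = sum(vals) / n
        sd = (sum((v - mu) ** 2 for v in vals) / n) ** 0.5
        lower.append(mu - k * sd)
        upper.append(mu + k * sd)
    return lower, upper

def is_functional_outlier(curve, lower, upper):
    # Phase II idea: flag a newly observed curve that exits the band anywhere
    return any(v < lo or v > up for v, lo, up in zip(curve, lower, upper))
```

Functional outlier detectors such as the archetypal-analysis proposal go beyond this by also catching shape outliers, curves that stay inside the band but follow an atypical pattern.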

