An IMM Filter Defined in the Linear-Circular Domain, Application to Maneuver Detection with Heading Only

2018 ◽  
Vol 2018 ◽  
pp. 1-14
Author(s):  
Karim El mokhtari ◽  
Serge Reboul ◽  
Georges Stienne ◽  
Jean Bernard Choquel ◽  
Benaissa Amami ◽  
...  

In this article, we propose a multimodel filter for circular data. The so-called Circular Interacting Multimodel filter is derived in a Bayesian framework with the circular normal von Mises distribution. The aim of the proposed filter is to obtain the same performance in the circular domain as the classical IMM filter in the linear domain. In our approach, the mixing and fusion stages of the Circular Interacting Multimodel filter are defined, respectively, from the a priori and a posteriori circular distributions of the state angle given the measurements and a set of models. We also propose a set of circular models used to detect vehicle maneuvers from heading measurements. The performance of the Circular Interacting Multimodel filter is assessed on synthetic data, and a vehicle maneuver detection application is demonstrated on real data.
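As a rough illustration of the circular machinery involved, the sketch below collapses a weighted mixture of von Mises densities to a single von Mises by combining weighted concentration vectors, one simple moment-style approximation of a mixing step; the function and variable names are illustrative, not the paper's.

```python
# A minimal sketch, assuming each model i carries an estimate VM(mu[i], kappa[i])
# and a mixing weight w[i]; the mixture is collapsed back to a single von Mises
# by combining weighted concentration vectors (an approximation, not the
# authors' exact mixing stage).
import numpy as np

def mix_von_mises(mu, kappa, w):
    """Collapse a weighted mixture of von Mises densities to one VM(mu, kappa)."""
    # Weighted resultant vector of the mixture components.
    C = np.sum(w * kappa * np.cos(mu))
    S = np.sum(w * kappa * np.sin(mu))
    mu_mix = np.arctan2(S, C)      # circular mean of the mixture
    kappa_mix = np.hypot(C, S)     # concentration of the matched von Mises
    return mu_mix, kappa_mix

# Example: two motion models (e.g., constant heading vs. turning) at one step.
mu = np.array([0.1, 0.9])          # per-model heading estimates (rad)
kappa = np.array([50.0, 10.0])     # per-model concentrations
w = np.array([0.7, 0.3])           # mixing probabilities
print(mix_von_mises(mu, kappa, w))
```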

2020 ◽  
Vol 15 ◽  
pp. 42-51
Author(s):  
Shou-Jen Chang-Chien ◽  
Wajid Ali ◽  
Miin-Shen Yang

Clustering is a method for analyzing grouped data. Circular data are widely used in various applications, such as wind directions and the departure directions of migrating birds or animals. The expectation-maximization (EM) algorithm on mixtures of von Mises distributions is popularly used for clustering circular data. In general, the EM algorithm is sensitive to initialization, is not robust to outliers, and requires the number of clusters a priori. In this paper, we consider a learning-based scheme for EM and propose a learning-based EM algorithm on mixtures of von Mises distributions for clustering grouped circular data. The proposed clustering method requires no initialization, is robust to outliers, and finds the number of clusters automatically. Several numerical and real data sets are used to compare the proposed algorithm with existing methods. Experimental results and comparisons demonstrate the effectiveness and superiority of the proposed learning-based EM algorithm.
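For reference, a plain (non-learning-based) EM for a K-component von Mises mixture looks like the sketch below; this is the baseline the paper improves on, with illustrative names. The kappa update uses the standard two-dimensional approximation of the inverse Bessel-function ratio (Banerjee et al., 2005).

```python
# A minimal baseline EM sketch for a K-component von Mises mixture on
# circular data x (radians). Not the learning-based variant proposed above.
import numpy as np
from scipy.special import i0

def vm_pdf(x, mu, kappa):
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

def em_von_mises(x, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)
    mu = rng.uniform(-np.pi, np.pi, K)
    kappa = np.ones(K)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = np.stack([p * vm_pdf(x, m, k) for p, m, k in zip(pi, mu, kappa)])
        r = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted resultant vector per component.
        C = r @ np.cos(x); S = r @ np.sin(x); n_k = r.sum(axis=1)
        mu = np.arctan2(S, C)
        Rbar = np.hypot(C, S) / n_k
        kappa = Rbar * (2 - Rbar**2) / (1 - Rbar**2)  # approximate A^{-1}(Rbar)
        pi = n_k / len(x)
    return pi, mu, kappa
```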


2018 ◽  
Author(s):  
Wei Ji Ma

A common method, due to Zhang and Luck (2008), for analyzing delayed-estimation data with a circular stimulus variable is to fit a mixture of a von Mises distribution and a uniform distribution. The uniform distribution represents random guesses, presumably made when an item is not kept in memory. When I generate synthetic data from a variable-precision model with zero guessing, the method estimates the guess rate to be nonzero and often high. This is due to model mismatch: the fitted model does not match the data-generating (true) model. In real data, this could be a problem if one considers the variable-precision model a plausible candidate model and draws conclusions based on the estimated guess rates. I describe five solutions to this problem: analyzing the residual, ruling out the variable-precision model, robust inference, fitting a hybrid model, and using model-free statistics. I hope that these solutions can contribute to good data analysis practices in the study of working memory.
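A minimal sketch of the mixture fit in question, assuming response errors in radians centered on the target: maximum-likelihood estimation of the guess rate g and concentration kappa for the uniform-plus-von-Mises model. Names and starting values are illustrative; a data set generated from a variable-precision model and fit this way is exactly the situation where g can come out spuriously high.

```python
# Fit the Zhang & Luck (2008) style mixture: errors are modeled as
# g * Uniform(-pi, pi) + (1 - g) * VonMises(0, kappa).
import numpy as np
from scipy.special import i0
from scipy.optimize import minimize

def neg_log_lik(params, errors):
    g, log_kappa = params
    kappa = np.exp(log_kappa)
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    lik = g / (2 * np.pi) + (1 - g) * vm
    return -np.sum(np.log(lik))

def fit_mixture(errors):
    # Optimize over the guess rate and log-concentration.
    res = minimize(neg_log_lik, x0=[0.1, np.log(5.0)], args=(errors,),
                   bounds=[(1e-6, 1 - 1e-6), (-5, 8)], method="L-BFGS-B")
    g, log_kappa = res.x
    return g, np.exp(log_kappa)
```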


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
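The back-projection of residual corrections along raypaths can be pictured with a minimal single-ray sketch: given a traveltime residual and the lengths of the ray segments in each velocity cell, the minimum-norm slowness update distributes the residual in proportion to path length. This generic tomographic update (in Python, with illustrative names) only gestures at the authors' layer-stripping, wavefront-based scheme.

```python
# A schematic single-ray back-projection step: spread one residual over the
# cells the ray crosses, as the minimum-norm solution of sum(l_i * ds_i) = dt.
import numpy as np

def backproject_residual(dt, cell_lengths, slowness):
    """Return the updated slowness after distributing residual dt along one ray."""
    ds = dt * cell_lengths / np.sum(cell_lengths**2)  # minimum-norm update
    return slowness + ds
```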


2020 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Flavio Barbara ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, vertical profiles of atmospheric gases can be determined from measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity into a single Level 2 (L2) product is simultaneous retrieval. Its main drawback is complexity, due to the need to embed the forward models of different instruments in the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), as an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, characterized by their a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The CDF output is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for geo-temporal differences and the different vertical grids of the fused profiles, a coincidence error and an interpolation error have to be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data were produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program. The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow studying the behavior of the CDF under ideal conditions. Synthetic data also allow evaluating the performance of the algorithm in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used to simulate the L2 products. This analysis aims at demonstrating the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5p ozone profiles will be available.

In the second part of this work, the CDF is applied to a set of real ozone measurements acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products. This aims to demonstrate the concrete applicability of the CDF to real data and its possible use to generate Level-3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and generation of Level-3 (or higher) products from the atmospheric Sentinel data.
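As an illustration of the combination step, the sketch below applies the CDF formulas on a common vertical grid, ignoring coincidence and interpolation errors. The inputs, names, and formula arrangement follow the published CDF approach as I understand it and should be treated as an assumption, not the operational code.

```python
# A minimal Complete Data Fusion sketch on a common vertical grid. For each
# retrieval i: profile x[i], noise CM S[i], averaging kernel A[i], prior x_ai[i];
# (xa, Sa) is the prior chosen for the fused product.
import numpy as np

def complete_data_fusion(x, S, A, x_ai, xa, Sa):
    n = xa.size
    Sa_inv = np.linalg.inv(Sa)
    G = Sa_inv.copy()                 # accumulates A_i^T S_i^-1 A_i (+ prior)
    b = Sa_inv @ xa                   # accumulates A_i^T S_i^-1 alpha_i (+ prior)
    for xi, Si, Ai, xai in zip(x, S, A, x_ai):
        Si_inv = np.linalg.inv(Si)
        alpha = xi - (np.eye(n) - Ai) @ xai   # remove each retrieval's own prior
        G += Ai.T @ Si_inv @ Ai
        b += Ai.T @ Si_inv @ alpha
    Sf = np.linalg.inv(G)             # fused covariance matrix
    xf = Sf @ b                       # fused profile
    Af = Sf @ (G - Sa_inv)            # fused averaging kernel matrix
    return xf, Sf, Af
```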


2019 ◽  
Author(s):  
Ahmad Ilham

Determining the number of clusters in k-Means is one of the most popular problems among data mining researchers, because this information is difficult to determine from the data a priori; as a result, clustering can be suboptimal and quickly become trapped in local minima. Automatic clustering methods based on evolutionary computation (EC) can solve this k-Means problem. The automatic clustering differential evolution (ACDE) method is one of the most popular EC approaches because it can handle high-dimensional data and improve k-Means performance on data with low cluster validity values. However, determining the k activation threshold in ACDE still depends on user judgment, so determining the number of k-Means clusters is not yet efficient. In this study, ACDE is improved using the u-control chart (UCC) method, which is shown to solve the k-Means cluster-number problem automatically and efficiently. The proposed method is evaluated on synthetic data and real data sets (iris, glass, wine, vowel, ruspini) from the UCI machine learning repository, using the Davies-Bouldin index (DBI) and the cosine similarity measure (CS) as evaluation methods, where the lowest objective value indicates the best method. The results indicate that the UCC method successfully improves the k-Means method, with the lowest DBI and CS objective values of 0.470 and 0.577, respectively. The proposed method performs well compared with other current methods such as genetic clustering for unknown k (GCUK), dynamic clustering PSO (DCPSO), and automatic clustering based on differential evolution combined with k-Means for crisp clustering (ACDE) for almost all DBI and CS evaluations. It can be concluded that the UCC method corrects the weakness of the ACDE method in determining the number of k-Means clusters by setting the k activation threshold automatically.
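For reference, the Davies-Bouldin index used as an evaluation measure above can be computed as in the following sketch (scikit-learn also provides sklearn.metrics.davies_bouldin_score); lower values indicate more compact, better-separated clusters.

```python
# A minimal NumPy sketch of the Davies-Bouldin index for a labeled partition.
import numpy as np

def davies_bouldin(X, labels):
    ks = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
    # Average within-cluster scatter for each cluster.
    s = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                  for k, c in zip(ks, centroids)])
    db = 0.0
    for i in range(len(ks)):
        # Worst-case ratio of combined scatter to centroid separation.
        ratios = [(s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)
```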


Stats ◽  
2021 ◽  
Vol 4 (3) ◽  
pp. 634-649
Author(s):  
Fernanda V. Paula ◽  
Abraão D. C. Nascimento ◽  
Getúlio J. A. Amaral ◽  
Gauss M. Cordeiro

The Cardioid (C) distribution is one of the most important models for circular data. Although some of its structural properties have been derived, this distribution is not appropriate for asymmetric or multimodal phenomena on the circle, so extensions are required. There are various general methods that can be used to produce circular distributions. This paper proposes four extensions of the C distribution based on the beta, Kumaraswamy, gamma, and Marshall–Olkin generators. We obtain a unique linear representation of their densities and some mathematical properties. Inference procedures for the parameters are also investigated. We perform two applications on real data, where the new models are compared to the C distribution and one of its extensions.
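To make the construction concrete, here is how one such generator acts on the Cardioid model: given the Cardioid density and CDF (mean direction mu, concentration rho with |rho| <= 1/2), the beta generator with shape parameters a and b yields a beta-Cardioid density; the Kumaraswamy, gamma, and Marshall–Olkin extensions are built from F in the same spirit. This is the standard beta-G construction written out as a sketch, not necessarily the paper's exact parameterization.

```latex
% Cardioid density and CDF, then the beta-generator extension.
\[
  f(\theta;\mu,\rho) = \frac{1}{2\pi}\bigl[1 + 2\rho\cos(\theta-\mu)\bigr],
  \qquad
  F(\theta) = \frac{\theta + 2\rho\bigl[\sin(\theta-\mu)+\sin\mu\bigr]}{2\pi},
  \quad 0 \le \theta < 2\pi,
\]
\[
  g_{\mathrm{BC}}(\theta; a, b)
  = \frac{1}{B(a,b)}\, f(\theta)\, F(\theta)^{a-1}\bigl[1-F(\theta)\bigr]^{b-1}.
\]
```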


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Kazuhisa Fujita

We propose a new clustering method, based on k-means, for data in cylindrical coordinates. The k-means family maximizes an objective function that requires a similarity measure, so a new similarity is needed to obtain a clustering method for cylindrical data. In this study, we first derive this similarity by assuming a particular probabilistic model. A data point in cylindrical coordinates has a radius, an azimuth, and a height. We assume that the azimuth is sampled from a von Mises distribution and that the radius and height are independently generated from isotropic Gaussian distributions. The new similarity is derived from the log-likelihood of the assumed probability distribution. Our experiments demonstrate that the proposed method can appropriately partition synthetic data defined in cylindrical coordinates. Furthermore, we apply the proposed method to color image quantization and show that it successfully quantizes a color image with respect to the hue element.
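A minimal sketch of such a log-likelihood similarity, assuming fixed dispersions and illustrative parameter names: the azimuth term comes from the von Mises log-density and the radius and height terms from Gaussian log-densities, all up to additive constants. A k-means-style assignment step would then assign each point to the center maximizing this similarity.

```python
# Log-likelihood similarity for a cylindrical point (r, phi, h) against a
# cluster center, under von Mises azimuth + isotropic Gaussian radius/height.
import numpy as np

def cylindrical_similarity(point, center, kappa=2.0, var_r=1.0, var_h=1.0):
    r, phi, h = point
    cr, cphi, ch = center
    # Larger value means more similar; constants are dropped.
    return (kappa * np.cos(phi - cphi)
            - (r - cr) ** 2 / (2 * var_r)
            - (h - ch) ** 2 / (2 * var_h))
```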


2013 ◽  
Vol 31 (3) ◽  
pp. 427 ◽  
Author(s):  
Dionisio Uendro Carlos ◽  
Marco Antonio Braga ◽  
Henry F. Galbiatti ◽  
Wanderson Roberto Pereira

This paper discusses some processing techniques (all codes were implemented with open source software) developed for airborne gravity gradient systems, aiming at outlining geological features by applying mathematical formulations based on potential field properties and their breakdown into gradiometric tensors. These techniques were applied to both synthetic and real data. Applying them to synthetic data allows working in a controlled environment, understanding the different processing results, and establishing a comparative parameter. These methodologies were applied to a survey area of the Quadrilátero Ferrífero to map iron ore targets, resulting in a set of very helpful and important information for geological mapping activities and a priori information for geophysical inversion models.

Keywords: processing, airborne gravity gradiometry, iron ore exploration, FTG system, FALCON system.


Author(s):  
Fatin Najihah Badarisam ◽  
Adzhar Rambli ◽  
Mohammad Illyas Sidik

This paper compares two discordancy tests, a robust and a non-robust statistic, for detecting a single outlier in univariate circular data. To the best of the authors' knowledge, no previous work has compared the RCDu statistic and the G1 statistic. The test statistics are based on the circular median and on spacing theory, and both can also detect multiple and patch outliers. The performance of the RCDu and G1 statistics is tested in terms of the proportion of correct outlier detection and the masking and swamping effects. First, we obtained cut-off points for the RCDu and G1 statistics through Monte Carlo simulation, generating samples from the von Mises (VM) distribution for combinations of sample size and concentration parameter. The estimation of the cut-off points for both statistics was repeated 3000 times at the 10%, 5%, and 1% upper percentiles. As a result, the RCDu statistic performs well in correctly detecting a single outlier and has a lower masking rate than the G1 statistic. However, the G1 statistic is better than the RCDu statistic for the swamping effect due to its lower swamping rate. Thus, the RCDu statistic performs better than the G1 statistic in detecting a single outlier in von Mises samples. As an illustration, both statistics were applied to a real data set from a series of experiments investigating the homing ability of northern cricket frogs.
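The Monte Carlo procedure for the cut-off points can be sketched as follows, with an illustrative max-distance-from-the-circular-median statistic standing in for RCDu and G1 (whose exact definitions are in the paper): simulate von Mises samples, compute the statistic, and take the 10%, 5%, and 1% upper percentiles over 3000 replications.

```python
# A minimal Monte Carlo sketch for cut-off points of a circular outlier
# statistic; the statistic here is a simple stand-in, not RCDu or G1.
import numpy as np

def circular_median(theta):
    # Approximate the circular median by the sample point minimizing the
    # mean circular distance d(a, b) = pi - |pi - |a - b||.
    cand = np.sort(theta)
    dist = [np.mean(np.pi - np.abs(np.pi - np.abs(theta - m))) for m in cand]
    return cand[int(np.argmin(dist))]

def mc_cutoffs(n, kappa, reps=3000, seed=1):
    rng = np.random.default_rng(seed)
    stats = np.empty(reps)
    for i in range(reps):
        sample = rng.vonmises(0.0, kappa, n)
        med = circular_median(sample)
        # Statistic: largest circular distance from the circular median.
        stats[i] = np.max(np.pi - np.abs(np.pi - np.abs(sample - med)))
    return np.percentile(stats, [90, 95, 99])  # 10%, 5%, 1% upper points
```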

