Doppler Velocity Estimation of Overlapping Linear-Period-Modulated Ultrasonic Waves Based on an Expectation-Maximization Algorithm

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Natee Thong-un ◽  
Minoru K. Kurosawa

The occurrence of overlapping signals is a significant problem when localizing multiple objects. Doppler velocity is sensitive to the echo shape and can also be related to the physical properties of moving objects, especially for a pulse-compression ultrasonic signal. Because the expectation-maximization (EM) algorithm is able to separate signals, applying it to overlapping pulse-compression signals is of interest. This paper describes a proposed method, based on the EM algorithm, for estimating the Doppler velocity of overlapping linear-period-modulated (LPM) ultrasonic signals. Simulations are used to validate the proposed method.
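The abstract does not give the estimator's details, but the core idea of separating overlapping arrivals with EM can be illustrated on a toy one-dimensional problem: treat samples scattered around two echo delays as a two-component Gaussian mixture and let EM recover the two centers. This is an illustrative sketch, not the authors' LPM method; all names and parameter values are invented.

```python
import math, random

def em_two_gaussians(xs, iters=50):
    """EM for a two-component 1D Gaussian mixture with shared variance.

    A toy stand-in for separating two overlapping echo populations:
    the component means play the role of the two echo delays."""
    mu1, mu2 = min(xs), max(xs)          # spread-out initial means
    var, w = 1.0, 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each sample
        r = []
        for x in xs:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * var))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * var))
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means and shared variance
        n1 = sum(r)
        w = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
        var = sum(ri * (x - mu1) ** 2 + (1 - ri) * (x - mu2) ** 2
                  for ri, x in zip(r, xs)) / len(xs)
    return sorted([mu1, mu2]), var

rng = random.Random(0)
xs = [rng.gauss(0.0, 0.3) for _ in range(200)] + \
     [rng.gauss(2.0, 0.3) for _ in range(200)]
means, var = em_two_gaussians(xs)
print(means, var)
```

With well-separated components the recovered means land close to the true delays of 0.0 and 2.0.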

2006 ◽  
pp. 57-64 ◽  
Author(s):  
A. Uribe ◽  
R. Barrera ◽  
E. Brieva

The EM algorithm is a powerful tool for solving the membership problem in open clusters when a mixture density model comprising two heteroscedastic bivariate normal components is fitted to the cloud of relative proper motions of the stars in a region of the sky where a cluster is presumed to lie. A membership study of 1866 stars located in the region of the very old open cluster M67 is carried out via the Expectation-Maximization algorithm using the EMMIX software of McLachlan, Peel, Basford and Adams.
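As a sketch of the membership computation such a model yields (not the authors' EMMIX pipeline), the E-step posterior for a two-component model of proper motions can be written directly: a star's membership probability is the cluster component's weighted density divided by the total mixture density. The parameter values below are invented for illustration.

```python
import math

def bivariate_normal_pdf(x, y, mx, my, sx, sy, rho):
    """Density of a bivariate normal with correlation rho."""
    z = ((x - mx) ** 2 / sx ** 2
         - 2 * rho * (x - mx) * (y - my) / (sx * sy)
         + (y - my) ** 2 / sy ** 2)
    norm = 2 * math.pi * sx * sy * math.sqrt(1 - rho ** 2)
    return math.exp(-z / (2 * (1 - rho ** 2))) / norm

def membership_probability(x, y, w, cluster, field):
    """Posterior probability that a star with proper motion (x, y)
    belongs to the cluster component (the E-step quantity)."""
    pc = w * bivariate_normal_pdf(x, y, *cluster)
    pf = (1 - w) * bivariate_normal_pdf(x, y, *field)
    return pc / (pc + pf)

# Illustrative parameters: a tight cluster component, a broad field one
cluster = (0.0, 0.0, 0.5, 0.5, 0.0)   # (mx, my, sx, sy, rho)
field = (1.0, -1.0, 4.0, 4.0, 0.1)
p = membership_probability(0.1, -0.2, 0.3, cluster, field)
print(p)
```

A star whose proper motion sits near the cluster centroid gets a membership probability close to one even when the prior cluster weight is modest.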


2013 ◽  
Vol 12 (03) ◽  
pp. 1350012 ◽  
Author(s):  
OSONDE OSOBA ◽  
SANYA MITAIM ◽  
BART KOSKO

We present a noise-injected version of the expectation–maximization (EM) algorithm: the noisy expectation–maximization (NEM) algorithm. The NEM algorithm uses noise to speed up the convergence of the EM algorithm. The NEM theorem shows that additive noise speeds up the average convergence of the EM algorithm to a local maximum of the likelihood surface if a positivity condition holds. Corollary results give special cases when noise improves the EM algorithm. We demonstrate these noise benefits on EM algorithms for three data models: the Gaussian mixture model (GMM), the Cauchy mixture model (CMM), and the censored log-convex gamma model. The NEM positivity condition simplifies to a quadratic inequality in the GMM and CMM cases. A final theorem shows that the noise benefit for independent identically distributed additive noise decreases with sample size in mixture models. This theorem implies that the noise benefit is most pronounced if the data is sparse.
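A minimal sketch of the NEM mechanism, assuming a two-mean Gaussian mixture with known unit variances and equal weights: at iteration k, zero-mean noise with a decaying scale is added to the data before the E-step. This shows only the injection schedule; the speed-up guarantee itself comes from the positivity condition in the NEM theorem, which this toy code does not check.

```python
import math, random

def em_means(xs, iters=30, noise_scale=0.0, tau=2.0, seed=1):
    """EM for the two means of a known-variance, equal-weight 1D GMM,
    with optional NEM-style additive noise: at iteration k the data are
    perturbed by zero-mean noise whose scale decays as noise_scale/k**tau,
    so the noisy iteration approaches the noiseless EM fixed point."""
    rng = random.Random(seed)
    mu1, mu2 = -0.5, 0.5
    for k in range(1, iters + 1):
        s = noise_scale / k ** tau
        ys = [x + rng.gauss(0.0, s) for x in xs] if s > 0 else xs
        n1 = s1 = s2 = 0.0
        for y in ys:
            p1 = math.exp(-(y - mu1) ** 2 / 2)   # E-step responsibilities
            p2 = math.exp(-(y - mu2) ** 2 / 2)
            r = p1 / (p1 + p2)
            n1 += r; s1 += r * y; s2 += (1 - r) * y
        mu1 = s1 / n1                            # M-step mean updates
        mu2 = s2 / (len(ys) - n1)
    return sorted([mu1, mu2])

data_rng = random.Random(7)
xs = [data_rng.gauss(-2, 1) for _ in range(300)] + \
     [data_rng.gauss(2, 1) for _ in range(300)]
m = em_means(xs, noise_scale=0.5)
print(m)
```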


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5549
Author(s):  
Ossi Kaltiokallio ◽  
Roland Hostettler ◽  
Hüseyin Yiğitler ◽  
Mikko Valkama

Received signal strength (RSS) changes of static wireless nodes can be used for device-free localization and tracking (DFLT). Most RSS-based DFLT systems require access to calibration data, either RSS measurements from a time period when the area was not occupied by people, or measurements while a person stands in known locations. Such calibration periods can be very expensive in terms of time and effort, making system deployment and maintenance challenging. This paper develops an Expectation-Maximization (EM) algorithm based on Gaussian smoothing for estimating the unknown RSS model parameters, liberating the system from supervised training and calibration periods. To fully use the EM algorithm’s potential, a novel localization-and-tracking system is presented to estimate a target’s arbitrary trajectory. To demonstrate the effectiveness of the proposed approach, it is shown that: (i) the system requires no calibration period; (ii) the EM algorithm improves the accuracy of existing DFLT methods; (iii) it is computationally very efficient; and (iv) the system outperforms a state-of-the-art adaptive DFLT system in terms of tracking accuracy.
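The paper's RSS model is not reproduced here, but smoothing-based EM for state-space parameters can be sketched on a scalar analogue: a random walk observed in noise, where the E-step runs a Kalman filter plus an RTS smoother and the M-step re-estimates the process-noise variance. Everything below is an invented toy model, not the authors' DFLT system.

```python
import random

def em_random_walk(ys, r_var, iters=20):
    """EM for the process-noise variance q of a 1D random walk
    x_k = x_{k-1} + w_k observed as y_k = x_k + v_k, v_k ~ N(0, r_var).
    E-step: Kalman filter + RTS smoother; M-step: re-estimate q from
    the expected squared state increments."""
    n, q = len(ys), 1.0
    for _ in range(iters):
        # Forward Kalman filter
        m, P = ys[0], r_var
        ms, Ps, mp, Pp = [m], [P], [], []
        for y in ys[1:]:
            mpred, Ppred = m, P + q
            K = Ppred / (Ppred + r_var)
            m = mpred + K * (y - mpred)
            P = (1 - K) * Ppred
            ms.append(m); Ps.append(P); mp.append(mpred); Pp.append(Ppred)
        # Backward RTS smoother
        sm, sP = [0.0] * n, [0.0] * n
        sm[-1], sP[-1] = ms[-1], Ps[-1]
        G = [0.0] * (n - 1)
        for k in range(n - 2, -1, -1):
            G[k] = Ps[k] / Pp[k]
            sm[k] = ms[k] + G[k] * (sm[k + 1] - mp[k])
            sP[k] = Ps[k] + G[k] ** 2 * (sP[k + 1] - Pp[k])
        # M-step: average E[(x_{k+1} - x_k)^2] under the smoothing posterior
        q = sum(sP[k + 1] + sP[k] - 2 * G[k] * sP[k + 1]
                + (sm[k + 1] - sm[k]) ** 2 for k in range(n - 1)) / (n - 1)
    return q

rng = random.Random(3)
x, ys = 0.0, []
for _ in range(400):                       # simulate with true q = 0.5
    x += rng.gauss(0.0, 0.5 ** 0.5)
    ys.append(x + rng.gauss(0.0, 0.1 ** 0.5))
q_hat = em_random_walk(ys, r_var=0.1)
print(q_hat)
```

Starting from a deliberately wrong guess of q = 1.0, the EM iterations pull the estimate toward the value used in the simulation.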


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Xianghui Yuan ◽  
Feng Lian ◽  
Chongzhao Han

Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms used. First, the widely used models are compared in this paper: the CT model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared, and suggestions on the choice of models for different practical target-tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation-maximization (EM) algorithm is derived, in both batch and recursive forms. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
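For the first model in the list, the CT model with known turn rate, the discrete-time transition is standard and easy to state in code. The following sketch (with invented parameter values) propagates a Cartesian state (x, vx, y, vy) and checks that a full turn returns the target to its starting state.

```python
import math

def ct_step(state, omega, dt):
    """One step of the coordinated-turn (CT) model with known turn rate
    omega, for a Cartesian state (x, vx, y, vy):
    the velocity rotates by omega*dt and the position integrates it."""
    x, vx, y, vy = state
    s, c = math.sin(omega * dt), math.cos(omega * dt)
    return (x + vx * s / omega - vy * (1 - c) / omega,
            vx * c - vy * s,
            y + vx * (1 - c) / omega + vy * s / omega,
            vx * s + vy * c)

# 100 steps at turn rate 2*pi/100 per step complete one full circle
state = (0.0, 1.0, 0.0, 0.0)
omega, dt = 2 * math.pi / 100, 1.0
for _ in range(100):
    state = ct_step(state, omega, dt)
print(state)
```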


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 373
Author(s):  
Branislav Panić ◽  
Jernej Klemenc ◽  
Marko Nagode

A commonly used tool for estimating the parameters of a mixture model is the Expectation-Maximization (EM) algorithm, an iterative procedure that can serve as a maximum-likelihood estimator. The EM algorithm has well-documented drawbacks, such as the need for good initial values and the possibility of being trapped in local optima. Nevertheless, because of its appealing properties, EM plays an important role in estimating the parameters of mixture models. To overcome these initialization problems, in this paper we propose the Rough-Enhanced-Bayes mixture estimation (REBMIX) algorithm as a more effective initialization algorithm. Three different strategies are derived for dealing with the unknown number of components in the mixture model. These strategies are thoroughly tested on artificial datasets, density-estimation datasets and image-segmentation problems, and compared with state-of-the-art initialization methods for the EM. Our proposal shows promising results in terms of clustering and density-estimation performance as well as computational efficiency. All the improvements are implemented in the rebmix R package.
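A brute-force illustration of why EM initialization matters is to run EM from several random starts and keep the fit with the highest log-likelihood; REBMIX replaces such restarts with an informed initial estimate, which this toy code does not attempt to implement.

```python
import math, random

def em_fit(xs, mu1, mu2, iters=40):
    """EM for two unit-variance, equal-weight components; returns the
    final means and log-likelihood. Deliberately minimal: the point is
    that the outcome depends on the starting values (mu1, mu2)."""
    for _ in range(iters):
        n1 = s1 = s2 = 0.0
        for x in xs:
            p1 = math.exp(-(x - mu1) ** 2 / 2)
            p2 = math.exp(-(x - mu2) ** 2 / 2)
            r = p1 / (p1 + p2)
            n1 += r; s1 += r * x; s2 += (1 - r) * x
        mu1 = s1 / max(n1, 1e-12)
        mu2 = s2 / max(len(xs) - n1, 1e-12)
    ll = sum(math.log(0.5 * math.exp(-(x - mu1) ** 2 / 2)
                      + 0.5 * math.exp(-(x - mu2) ** 2 / 2)) for x in xs)
    return (mu1, mu2), ll

rng = random.Random(5)
xs = [rng.gauss(-3, 1) for _ in range(150)] + \
     [rng.gauss(3, 1) for _ in range(150)]
# Restart EM from random initializations and keep the best fit,
# a brute-force alternative to an informed initializer such as REBMIX.
best = max((em_fit(xs, rng.uniform(-5, 5), rng.uniform(-5, 5))
            for _ in range(10)), key=lambda t: t[1])
means = sorted(best[0])
print(means)
```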


2016 ◽  
Vol 15 (01) ◽  
pp. 1650007 ◽  
Author(s):  
Osonde Osoba ◽  
Bart Kosko

We generalize the noisy expectation-maximization (NEM) algorithm to allow arbitrary modes of noise injection besides just adding noise to the data. The noise must still satisfy a NEM positivity condition. This generalization includes the important special case of multiplicative noise injection. A generalized NEM theorem shows that all measurable modes of injecting noise will speed the average convergence of the EM algorithm if the noise satisfies a generalized NEM positivity condition. This noise-benefit condition has a simple quadratic form for Gaussian and Cauchy mixture models in the case of multiplicative noise injection. Simulations show a multiplicative-noise EM speed-up of more than [Formula: see text] in a simple Gaussian mixture model. Injecting blind noise only slowed convergence. A related theorem gives a sufficient condition for an average EM noise benefit for arbitrary modes of noise injection if the data model comes from the general exponential family of probability density functions. A final theorem shows that injected noise slows EM convergence on average if the NEM inequalities reverse and the noise satisfies a negativity condition.
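A short sketch of the two injection modes the generalized NEM theorem covers, with an annealed noise scale so the perturbed iteration approaches the noiseless one; the function and its schedule are illustrative assumptions, not the paper's construction.

```python
import random

def nem_perturb(xs, mode, scale, k, tau=2.0, seed=0):
    """One NEM-style noise-injection step at EM iteration k.
    mode 'add' gives x + n, mode 'mul' gives x * (1 + n); in both cases
    the noise scale is annealed as scale / k**tau so that the perturbed
    EM iteration converges to the noiseless fixed point."""
    rng = random.Random(seed * 100003 + k)
    s = scale / k ** tau
    if mode == "add":
        return [x + rng.gauss(0.0, s) for x in xs]
    return [x * (1 + rng.gauss(0.0, s)) for x in xs]

xs = [1.0, 2.0, 3.0]
late = nem_perturb(xs, "mul", scale=0.5, k=50)   # noise nearly vanished
d = max(abs(a - b) for a, b in zip(late, xs))
print(d)
```

By iteration 50 the multiplicative perturbation is negligible, which is what lets the noisy sequence share the EM limit.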


2021 ◽  
Vol 5 (2) ◽  
pp. 94-105
Author(s):  
Muhammad Danial Romadloni ◽  
Indra Gita Anugrah

Movies are familiar to everyone, from children and adolescents to adults, whether watched casually, as a hobby, or to fill spare time. Movies that once could be seen only on television, months after release, or at the cinema can now, thanks to technological developments, be enjoyed anywhere, from paid television services to smartphones. One website viewers often use to review movies they have watched is IMDb. These review data can be used for opinion mining, to determine whether audiences judge a reviewed movie title to be good or not. One frequently used algorithm is Naïve Bayes; besides being easy to implement, it is known to be very fast and easy to use for predicting classes on a test dataset. The purpose of this study is to measure how much the Expectation-Maximization algorithm increases accuracy when applied to opinion mining of movie reviews. The results show that adding the Expectation-Maximization method increased accuracy by 4% compared with using Naïve Bayes alone.
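The study's exact setup is not specified beyond Naïve Bayes plus EM, but the standard Nigam-style combination can be sketched: train Naïve Bayes on labeled reviews, then alternate between soft-labeling the unlabeled reviews (E-step) and retraining on all documents (M-step). The tiny corpus and class convention (class 1 = positive) below are invented for the example.

```python
import math
from collections import defaultdict

def train_nb(docs, labels, vocab):
    """Multinomial Naïve Bayes with Laplace smoothing; labels may be
    soft per-class weights, which is what the EM E-step produces."""
    prior = defaultdict(float)
    counts = {c: defaultdict(float) for c in (0, 1)}
    for doc, dist in zip(docs, labels):
        for c, w in enumerate(dist):
            prior[c] += w
            for tok in doc:
                counts[c][tok] += w
    total = sum(prior.values())
    logp = {}
    for c in (0, 1):
        denom = sum(counts[c].values()) + len(vocab)
        logp[c] = {t: math.log((counts[c][t] + 1) / denom) for t in vocab}
    logprior = {c: math.log(prior[c] / total) for c in (0, 1)}
    return logprior, logp

def posterior(doc, logprior, logp):
    """Class posterior for a tokenized document (softmax of log scores)."""
    s = [logprior[c] + sum(logp[c].get(t, 0.0) for t in doc) for c in (0, 1)]
    m = max(s)
    e = [math.exp(v - m) for v in s]
    return [v / sum(e) for v in e]

# Tiny sentiment corpus: labeled reviews plus unlabeled ones
labeled = [(["good", "great"], [0, 1]), (["bad", "awful"], [1, 0])]
unlabeled = [["great", "fun"], ["awful", "boring"], ["good", "fun"]]
vocab = {t for d, _ in labeled for t in d} | {t for d in unlabeled for t in d}

docs = [d for d, _ in labeled]
dists = [l for _, l in labeled]
lp, lw = train_nb(docs, dists, vocab)
for _ in range(5):                       # EM over the unlabeled reviews
    soft = [posterior(d, lp, lw) for d in unlabeled]          # E-step
    lp, lw = train_nb(docs + unlabeled, dists + soft, vocab)  # M-step

p = posterior(["fun", "great"], lp, lw)
print(p)
```

After EM, the word "fun", which never appears in a labeled review, has been pulled toward the positive class by the unlabeled data.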


Author(s):  
Chandan K. Reddy ◽  
Bala Rajaratnam

In the field of statistical data mining, the Expectation Maximization (EM) algorithm is one of the most popular methods used for solving parameter estimation problems in the maximum likelihood (ML) framework. Compared to traditional methods such as steepest descent, conjugate gradient, or Newton-Raphson, which are often too complicated to use in solving these problems, EM has become a popular method because it takes advantage of some problem-specific properties (Xu et al., 1996). The EM algorithm converges to a local maximum of the log-likelihood function under very general conditions (Dempster et al., 1977; Redner et al., 1984). Efficiently maximizing the likelihood by augmenting it with latent variables and guaranteed convergence are some of the important hallmarks of the EM algorithm. EM-based methods have been applied successfully to solve a wide range of problems that arise in pattern recognition, clustering, information retrieval, computer vision, bioinformatics (Reddy et al., 2006; Carson et al., 2002; Nigam et al., 2000), etc. Given an initial set of parameters, the EM algorithm can be implemented to compute parameter estimates that locally maximize the likelihood function of the data. In spite of its strong theoretical foundations, wide applicability and important role in solving real-world problems, the standard EM algorithm suffers from certain fundamental drawbacks when used in practical settings. Some of the main difficulties of using the EM algorithm on a general log-likelihood surface are as follows (Reddy et al., 2008):
• The EM algorithm for mixture modeling converges to a local maximum of the log-likelihood function very quickly.
• There are many other promising locally optimal solutions in the close vicinity of the solutions obtained from methods that provide good initial guesses.
• Model selection criteria usually assume that the global optimal solution of the log-likelihood function can be obtained; however, achieving this is computationally intractable.
• Some regions in the search space do not contain any promising solutions. Promising and non-promising regions co-exist, and it becomes challenging to avoid wasting computational resources searching in non-promising regions.
Of all the concerns mentioned above, the fact that the local maxima are not distributed uniformly makes it important to develop algorithms that not only avoid inefficient search over low-likelihood regions but also explore promising subspaces more thoroughly (Zhang et al., 2004). This subspace search will also make the solution less sensitive to the initial set of parameters. In this chapter, we discuss the theoretical aspects of the EM algorithm and demonstrate its use in obtaining optimal estimates of the parameters of mixture models. We also discuss some practical concerns of using the EM algorithm and present a few results on the performance of various algorithms that try to address these problems.
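One hallmark the chapter cites, guaranteed convergence to a local maximum, follows from EM's monotone ascent of the log-likelihood. The sketch below records the log-likelihood after each iteration of a minimal two-mean mixture EM and shows that it never decreases; the model and data are illustrative only.

```python
import math, random

def em_with_loglik(xs, mu1, mu2, iters=25):
    """EM for a two-mean, unit-variance, equal-weight mixture that records
    the log-likelihood after every iteration, illustrating the monotone
    ascent property: each EM step never decreases the log-likelihood."""
    def loglik(a, b):
        return sum(math.log(0.5 * math.exp(-(x - a) ** 2 / 2)
                            + 0.5 * math.exp(-(x - b) ** 2 / 2)) for x in xs)
    history = [loglik(mu1, mu2)]
    for _ in range(iters):
        n1 = s1 = s2 = 0.0
        for x in xs:
            p1 = math.exp(-(x - mu1) ** 2 / 2)   # E-step responsibilities
            p2 = math.exp(-(x - mu2) ** 2 / 2)
            r = p1 / (p1 + p2)
            n1 += r; s1 += r * x; s2 += (1 - r) * x
        mu1, mu2 = s1 / n1, s2 / (len(xs) - n1)  # M-step mean updates
        history.append(loglik(mu1, mu2))
    return (mu1, mu2), history

rng = random.Random(11)
xs = [rng.gauss(-1.5, 1) for _ in range(100)] + \
     [rng.gauss(1.5, 1) for _ in range(100)]
_, hist = em_with_loglik(xs, -0.1, 0.1)
print(hist[0], hist[-1])
```

Note that monotone ascent says nothing about which local maximum is reached; that is exactly the initialization problem the chapter goes on to discuss.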


Author(s):  
Arwa Hatem Alqudsi ◽  
Nazlia Omar ◽  
Rabha W. Ibrahim

<p>It is practically impossible for a pure machine translation approach to handle all translation problems; Rule-Based Machine Translation and Statistical Machine Translation (RBMT and SMT) use different architectures to perform the translation task. Lexical and syntactic analysis are handled by the rule-based component, and some amount of remaining ambiguity is left to the Expectation-Maximization (EM) algorithm, an iterative statistical algorithm for finding maximum-likelihood estimates. In this paper we propose an integrated Hybrid Machine Translation (HMT) system whose goal is to combine the best properties of each approach. Initially, Arabic text is fed into the RBMT; its output is then refined by the EM algorithm to generate the final English translation. As previous work on the performance and enhancement of the EM algorithm has shown, the key to its performance is the ability to accurately transfer frequencies from one language to another. The results, as measured by the BLEU system, show that the proposed method can substantially outperform the standard rule-based approach and the EM algorithm alone in terms of frequency and accuracy. The HMT system scored higher than the SMT system in all cases; when the two approaches were combined, HMT outperformed SMT in BLEU score.</p>
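The abstract does not specify which EM-based component is used, but the canonical use of EM inside SMT is IBM Model 1 word alignment, which learns word-translation probabilities t(e|f) from sentence pairs. The sketch below, with an invented toy corpus, shows how EM concentrates probability on co-occurring word pairs.

```python
from collections import defaultdict

def ibm_model1(pairs, iters=15):
    """EM for IBM Model 1 word-translation probabilities t(e|f), the
    classic EM application inside SMT. `pairs` is a list of
    (source_tokens, target_tokens) sentence pairs."""
    src_vocab = {f for fs, _ in pairs for f in fs}
    tgt_vocab = {e for _, es in pairs for e in es}
    t = {(e, f): 1.0 / len(tgt_vocab) for e in tgt_vocab for f in src_vocab}
    for _ in range(iters):
        count = defaultdict(float)
        total = defaultdict(float)
        for fs, es in pairs:
            for e in es:
                z = sum(t[(e, f)] for f in fs)   # E-step: alignment posterior
                for f in fs:
                    c = t[(e, f)] / z
                    count[(e, f)] += c
                    total[f] += c
        for (e, f) in t:                          # M-step: normalize counts
            if total[f] > 0:
                t[(e, f)] = count[(e, f)] / total[f]
    return t

pairs = [(["das", "haus"], ["the", "house"]),
         (["das", "buch"], ["the", "book"]),
         (["ein", "buch"], ["a", "book"])]
t = ibm_model1(pairs)
print(t[("the", "das")], t[("book", "buch")])
```

Even on three sentence pairs, the ambiguity of "das" (seen with both "the" and "house"/"book") is resolved because the other source words explain the competing target words.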

