A Novel Method for Decoding Any High-Order Hidden Markov Model

2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Fei Ye ◽  
Yifei Wang

This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order model is recovered with the standard Viterbi algorithm for first-order hidden Markov models. Finally, the optimal state sequence of the high-order model is inferred from that of the equivalent first-order model. This method provides a unified algorithmic framework for decoding hidden Markov models of any order, the first-order model included.
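The order-reduction idea can be sketched as follows: a toy second-order HMM over two states is flattened into a first-order HMM whose composite states are ordered pairs of original states, then decoded with ordinary Viterbi. This is only the standard textbook variant of the construction, with invented parameters, not the paper's exact transformation.

```python
# Flatten a second-order HMM into a first-order HMM over state pairs,
# then decode with the usual first-order Viterbi algorithm.
# All numbers are made-up toy parameters for illustration.
import itertools
import numpy as np

states = [0, 1]
pairs = list(itertools.product(states, states))      # composite states (s_{t-1}, s_t)

rng = np.random.default_rng(0)
A2 = rng.dirichlet(np.ones(2), size=(2, 2))          # A2[i, j, k] = P(s_t=k | s_{t-2}=i, s_{t-1}=j)
B = np.array([[0.9, 0.1], [0.2, 0.8]])               # B[s, o] = P(o | s)

# Equivalent first-order transition over composite states:
# (i, j) -> (j, k) with probability A2[i, j, k]; all other moves have probability 0.
A1 = np.zeros((4, 4))
for a, (i, j) in enumerate(pairs):
    for b, (j2, k) in enumerate(pairs):
        if j2 == j:
            A1[a, b] = A2[i, j, k]

def viterbi(pi, A, Bc, obs):
    """Standard first-order Viterbi in log space; returns the best state path."""
    T, N = len(obs), len(pi)
    delta = np.log(pi + 1e-300) + np.log(Bc[:, obs[0]] + 1e-300)
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A + 1e-300)   # scores[i, j] = delta_i + log a_ij
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(Bc[:, obs[t]] + 1e-300)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# A composite state (i, j) emits according to its current component j.
Bc = np.array([B[j] for (_, j) in pairs])
pi = np.full(4, 0.25)
obs = [0, 0, 1, 1, 0]
comp_path = viterbi(pi, A1, Bc, obs)
# Recover original states as the second component of each composite state.
decoded = [pairs[s][1] for s in comp_path]
print(decoded)
```

Reading off the second component of each composite state reverses the flattening, which is the "inferred from the optimal state sequence of the equivalent first-order model" step.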

2018 ◽  
Vol 2018 ◽  
pp. 1-15
Author(s):  
Jason Chin-Tiong Chan ◽  
Hong Choon Ong

The optimal state sequence of a generalized High-Order Hidden Markov Model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm, which is based on the maximum-likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM; there is no uncertainty when only one optimal state sequence is possible. This entropy-based decoding algorithm can be formulated via either an extension or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence, originally developed for first-order models, to a generalized HHMM with a single observational sequence. The cost of this extended algorithm grows exponentially with the order of the HMM, owing to the growth of the model parameters. We therefore introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. EOTFA transforms a generalized high-order HMM into an equivalent first-order HMM, on which an entropy-based decoding algorithm is developed. The computation is driven by the observational sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
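As a rough illustration of an O(TN²) entropy computation of the kind the reduction approach relies on, the sketch below implements the standard linear-time recursion for the entropy of the posterior state-sequence distribution of a first-order HMM (after Hernando et al.). It is not EOTFA itself, and the function name is ours.

```python
# Entropy H(S | O) of the posterior distribution over hidden state
# sequences, computed by a forward-style recursion in O(T N^2) time.
# Assumes strictly positive model parameters (no degenerate columns).
import numpy as np

def sequence_entropy(pi, A, B, obs):
    """Return the entropy (in nats) of P(s_1..s_T | o_1..o_T)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    H = np.zeros(len(pi))                 # H[j]: entropy of paths ending in state j
    for o in obs[1:]:
        w = alpha[:, None] * A            # w[i, j] ∝ P(s_{t-1}=i, s_t=j | o_1..t)
        p = w / w.sum(axis=0, keepdims=True)   # P(s_{t-1}=i | s_t=j, o_1..t)
        with np.errstate(divide="ignore"):
            logp = np.where(p > 0, np.log(p), 0.0)
        H = (p * (H[:, None] - logp)).sum(axis=0)
        alpha = w.sum(axis=0) * B[:, o]   # unnormalized forward update
        alpha /= alpha.sum()
    with np.errstate(divide="ignore"):
        loga = np.where(alpha > 0, np.log(alpha), 0.0)
    return float((alpha * H).sum() - (alpha * loga).sum())
```

Sanity check: with all parameters uniform over N states, every one of the N^T state sequences is equally likely, so the entropy is T·log N.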


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Yanxue Zhang ◽  
Dongmei Zhao ◽  
Jinxing Liu

The biggest difficulty in applying hidden Markov models to multistep attacks is the determination of observations. Research on this determination is still lacking, and it shows a certain degree of subjectivity. We therefore integrate attack intentions with the hidden Markov model (HMM) and propose a method for forecasting multistep attacks based on HMMs. First, we train the existing hidden Markov models with the Baum-Welch algorithm. Then we recognize the alerts belonging to attack scenarios with the forward algorithm. Finally, we forecast the next possible attack sequence with the Viterbi algorithm. Simulation experiments show that trained hidden Markov models outperform untrained ones in both recognition and prediction.
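The recognition step can be sketched as follows: an observed alert sequence is scored against each attack-scenario HMM with the forward algorithm, and the most likely scenario is selected. The scenario names, alert symbols, and all parameter values are invented for illustration, not taken from the paper.

```python
# Score an alert sequence against candidate attack-scenario HMMs using
# the scaled forward algorithm, then pick the highest-likelihood model.
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """log P(O | model) via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()              # rescale to avoid underflow
    return loglik

# Two hypothetical scenario models (pi, A, B) over 3 alert types.
scan_then_exploit = (np.array([1.0, 0.0]),
                     np.array([[0.6, 0.4], [0.0, 1.0]]),
                     np.array([[0.8, 0.2, 0.0], [0.1, 0.3, 0.6]]))
brute_force = (np.array([1.0, 0.0]),
               np.array([[0.9, 0.1], [0.0, 1.0]]),
               np.array([[0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]))

alerts = [0, 0, 1, 2]                     # observed alert symbols
scores = {name: forward_log_likelihood(*m, alerts)
          for name, m in [("scan_then_exploit", scan_then_exploit),
                          ("brute_force", brute_force)]}
best = max(scores, key=scores.get)
print(best, scores)
```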


2018 ◽  
Vol 161 ◽  
pp. 03011
Author(s):  
Jesus Savage ◽  
Oscar Fuentes ◽  
Luis Contreras ◽  
Marco Negrete

This paper describes a map representation and localization system for a mobile robot based on Hidden Markov Models. These models are used not only to find the region in which a mobile robot is located, but also to estimate its orientation. It is shown that an estimate of the robot's region can be found by applying the Viterbi algorithm of a Hidden Markov Model to quantized laser readings, i.e., symbol observations.
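A minimal sketch of the quantization step, assuming a nearest-centroid codebook: each continuous laser scan is mapped to the index of its closest prototype scan, and that index is the discrete symbol fed to the Viterbi decoder. The codebook values and their interpretations are made up.

```python
# Vector-quantize a continuous laser scan into a discrete HMM symbol
# by nearest-neighbor lookup against a small codebook of prototypes.
import numpy as np

codebook = np.array([        # hypothetical prototype scans (4 beams each)
    [0.5, 0.5, 3.0, 3.0],    # symbol 0: wall on the left
    [3.0, 3.0, 0.5, 0.5],    # symbol 1: wall on the right
    [3.0, 3.0, 3.0, 3.0],    # symbol 2: open space
])

def quantize(scan):
    """Return the HMM observation symbol for one laser scan."""
    d = np.linalg.norm(codebook - scan, axis=1)
    return int(d.argmin())

symbol = quantize(np.array([0.6, 0.4, 2.8, 3.1]))
print(symbol)   # nearest to prototype 0
```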


d'CARTESIAN ◽  
2015 ◽  
Vol 4 (1) ◽  
pp. 86 ◽  
Author(s):  
Kezia Tumilaar ◽  
Yohanes Langi ◽  
Altien Rindengan

The Hidden Markov Model (HMM) is a stochastic model and essentially an extension of the Markov chain. In an HMM there are two types of states: the observable states and the hidden states. The purposes of this research are to understand the hidden Markov model and the solutions of its three basic problems, which consist of the evaluation problem, the decoding problem, and the learning problem. The result of the research is that a hidden Markov model can be defined as λ = (A, B, π). The evaluation problem, computing the probability of the observation sequence given the model, P(O|λ), can be solved by the Forward-Backward algorithm; the decoding problem, choosing the optimal hidden state sequence, can be solved by the Viterbi algorithm; and the learning problem, estimating the hidden Markov model parameters λ to maximize P(O|λ), can be solved by the Baum-Welch algorithm. A hidden Markov model with three states can describe the behavior in the case studies. Key words: Decoding Problem, Evaluation Problem, Hidden Markov Model, Learning Problem
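The learning problem can be sketched with a compact Baum-Welch implementation: a scaled forward-backward pass (E-step) followed by closed-form re-estimation of λ = (A, B, π) (M-step). This is a generic textbook version, not the code used in the study.

```python
# Baum-Welch (EM) re-estimation of HMM parameters from one
# observation sequence, using scaled forward/backward recursions.
import numpy as np

def baum_welch(obs, N, M, iters=50, seed=0):
    """Estimate λ = (A, B, π) to (locally) maximize P(O | λ)."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(N), size=N)     # random stochastic init
    B = rng.dirichlet(np.ones(M), size=N)
    pi = rng.dirichlet(np.ones(N))
    T = len(obs)
    for _ in range(iters):
        # E-step: scaled forward pass.
        alpha = np.zeros((T, N)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # E-step: scaled backward pass.
        beta = np.zeros((T, N)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) / c[t+1]
        gamma = alpha * beta                  # state posteriors
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((N, N))                 # expected transition counts
        for t in range(T - 1):
            xi += alpha[t][:, None] * A * (B[:, obs[t+1]] * beta[t+1])[None, :] / c[t+1]
        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0]
        A = xi / gamma[:-1].sum(axis=0)[:, None]
        B = np.zeros_like(B)
        for k in range(M):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

obs = [0, 1, 0, 0, 1, 1, 1, 0, 0, 0]
pi, A, B = baum_welch(obs, N=2, M=2, iters=20)
```

After training, each row of A and B is a probability distribution, which is a quick sanity check on the M-step.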


In this chapter, the authors provide the formalization of extended DTMC models, namely Hidden Markov Models (HMMs), which are the core concept for formally evaluating the probability of the occurrence of a particular observed sequence and for finding the best state sequence to generate a given observation sequence (Mantyla & Tutkimuskeskus, 2001; Rabiner, 1990). To present the usefulness of the formalization of HMM and the formal verification of HMM properties, the authors illustrate the formal analysis of a DNA (Deoxyribonucleic Acid) sequence at the end of the chapter.


Author(s):  
Maria Titah Jatipaningrum ◽  
Kris Suryowati ◽  
Libertania Maria Melania Esti Un

The hidden Markov model is a development of the Markov chain in which the states cannot be observed directly (they are hidden); only a set of related observations is available. This study combines fuzzy logic with a Markov chain (FTS-Markov Chain) and a hidden Markov model to predict the Rupiah exchange rate against the Dollar. The purchase and selling exchange rates are each divided into four states, namely large fall, small fall, small rise, and large rise, denoted S1, S2, S3, and S4, respectively. The probability of the observation sequence three days ahead is computed with the forward and backward algorithms, the hidden state sequence is determined with the Viterbi algorithm, and the HMM parameters are estimated with the Baum-Welch algorithm. The MAPE of the FTS-Markov Chain is 1.355% for the purchase rate and 1.317% for the selling rate. The optimal observation sequence for the purchase rate is X* = {S3, S3, S3}, and likewise X* = {S3, S3, S3} for the selling rate. Keywords: Exchange rate, FTS-Markov Chain, Hidden Markov Model


2021 ◽  
Vol 11 (7) ◽  
pp. 3138
Author(s):  
Mingchi Zhang ◽  
Xuemin Chen ◽  
Wei Li

In this paper, a deep neural network hidden Markov model (DNN-HMM) is proposed to detect pipeline leakage location. A long pipeline is divided into several sections, and leakage in each section is defined as a distinct state of the hidden Markov model (HMM). The hybrid HMM, i.e., the DNN-HMM, uses a deep neural network (DNN) with multiple layers to exploit the non-linear data. The DNN is initialized with a deep belief network (DBN), a pre-trained model built by stacking restricted Boltzmann machines (RBMs); it computes the emission probabilities for the HMM in place of a Gaussian mixture model (GMM). Two comparative studies based on different numbers of states are performed using the Gaussian mixture model-hidden Markov model (GMM-HMM) and the DNN-HMM. The accuracy of the testing performance, comparing the detected state sequence with the actual state sequence, is measured by the micro F1 score. When the pipeline is divided into three sections, the micro F1 score approaches 0.94 for the GMM-HMM and is close to 0.95 for the DNN-HMM. When the pipeline is divided into five sections, the micro F1 score is 0.69 for the GMM-HMM, while it approaches 0.96 with the DNN-HMM. The results demonstrate that the DNN-HMM learns a better model of the non-linear data and achieves better performance than the GMM-HMM.
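The evaluation metric can be reproduced with a short helper. Micro-averaged F1 pools true positives, false positives, and false negatives over all classes; with exactly one state label per time step it coincides with per-step accuracy. The example sequences below are invented.

```python
# Micro-averaged F1 between a decoded state sequence and the
# ground-truth state sequence (single label per time step).
def micro_f1(true_seq, pred_seq):
    classes = set(true_seq) | set(pred_seq)
    tp = fp = fn = 0
    for c in classes:
        tp += sum(1 for t, p in zip(true_seq, pred_seq) if t == p == c)
        fp += sum(1 for t, p in zip(true_seq, pred_seq) if p == c and t != c)
        fn += sum(1 for t, p in zip(true_seq, pred_seq) if t == c and p != c)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(micro_f1([0, 1, 2, 1], [0, 1, 1, 1]))  # 3 of 4 steps correct -> 0.75
```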


2016 ◽  
Vol 19 (58) ◽  
pp. 1
Author(s):  
Daniel Fernando Tello Gamarra

We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful for inferring hand-movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand-movement transitions occur consistently earlier in AHMM models that include gaze than in those that do not.


2019 ◽  
Vol 24 (1) ◽  
pp. 14 ◽  
Author(s):  
Luis Acedo

Hidden Markov models are a very useful tool for modeling time series and any sequence of data; in particular, they have been successfully applied in mathematical linguistics. In this paper, we apply a hidden Markov model to analyze the underlying structure of an ancient and complex manuscript known as the Voynich manuscript, which remains undeciphered. Assuming a certain number of internal states representing the symbols of the manuscript, we train the network by means of the α- and β-pass algorithms to optimize the model. By this procedure we obtain the so-called transition and observation matrices, which we compare with those of known languages with respect to the frequency of consonant and vowel sounds. From this analysis we conclude that transitions occur between the two states with frequencies similar to those of other languages. Moreover, the identification of the vowel and consonant sounds matches some previous tentative bottom-up approaches to decoding the manuscript.
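The identification step can be illustrated as follows: given a trained two-state observation matrix, each symbol is tagged vowel-like or consonant-like according to which hidden state is more likely to emit it. The symbol set and matrix below are invented for illustration, not the values estimated from the manuscript.

```python
# Classify symbols as vowel-like or consonant-like from a trained
# two-state HMM emission matrix B[state, symbol].
import numpy as np

symbols = ["a", "k", "o", "t", "e"]
B = np.array([                        # hypothetical trained emissions
    [0.30, 0.02, 0.28, 0.05, 0.35],   # state 0 (vowel-like)
    [0.05, 0.40, 0.04, 0.45, 0.06],   # state 1 (consonant-like)
])

labels = {s: ("vowel" if B[0, i] > B[1, i] else "consonant")
          for i, s in enumerate(symbols)}
print(labels)
# {'a': 'vowel', 'k': 'consonant', 'o': 'vowel', 't': 'consonant', 'e': 'vowel'}
```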


2000 ◽  
Vol 23 (4) ◽  
pp. 494-495
Author(s):  
Ingmar Visser

Page's manifesto makes a case for localist representations in neural networks, one of their advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when, as is often the case, distributed representations are employed at some hidden layer of the network. Hidden Markov models can be used to provide useful, interpretable representations.

