Hidden Markov models applied to on-line handwritten isolated character recognition

1994 ◽  
Vol 3 (3) ◽  
pp. 314-318 ◽  
Author(s):  
S.R. Veltman ◽  
R. Prasad

Author(s):  
J.C. Anigbogu ◽  
A. Belaïd

A multi-level, multifont character recognition system is presented. The system proceeds by first delimiting the context of the characters. To enhance performance, typographical information is extracted and used for font identification before the actual character recognition is performed. This has the advantage of more reliable character identification as well as reproduction of the text in its original form. Font identification is based on decision trees in which characters are automatically arranged into confusion classes according to the physical characteristics of the fonts. The character recognizers are built around first- and second-order hidden Markov models (HMMs) as well as Euclidean distance measures. The HMMs are decoded with the Viterbi and extended Viterbi algorithms, to which enhancements were made. Also present is a majority-vote system that polls the other recognizers for "advice" before deciding on the identity of a character. Among other things, this combined system is shown to give better results than each of the other systems applied individually. Finally, the system uses combinations of stochastic and dictionary-based verification methods for word recognition and error correction.
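
The two central ingredients named in the abstract, Viterbi decoding of a character HMM and majority voting over several recognizers, can be sketched compactly. The following Python/NumPy fragment is a minimal illustration under assumed names and shapes, not the authors' implementation; the extended Viterbi variant and the enhancements mentioned above are not reproduced.

    import numpy as np
    from collections import Counter

    def viterbi(log_pi, log_A, log_B, obs):
        # log_pi: (N,) log initial probabilities; log_A: (N, N) log transitions;
        # log_B: (N, M) log emission probabilities; obs: length-T symbol indices.
        T, N = len(obs), len(log_pi)
        delta = log_pi + log_B[:, obs[0]]      # best log score ending in each state
        psi = np.zeros((T, N), dtype=int)      # back-pointers
        for t in range(1, T):
            scores = delta[:, None] + log_A    # scores[i, j]: best path into j via i
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):          # follow back-pointers
            path.append(int(psi[t, path[-1]]))
        return path[::-1], float(delta.max())

    def majority_vote(labels):
        # Label proposed by most recognizers; ties go to the first encountered.
        return Counter(labels).most_common(1)[0][0]

    # e.g. three recognizers propose candidate labels for one character:
    # majority_vote(['e', 'c', 'e'])  ->  'e'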


Smooth On-Line Learning Algorithms for Hidden Markov Models

1994 ◽  
Vol 6 (2) ◽  
pp. 307-318 ◽  
Author(s):  
Pierre Baldi ◽  
Yves Chauvin

A simple learning algorithm for hidden Markov models (HMMs) is presented, together with a number of variations. Unlike classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most-likely-path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
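
The normalized-exponential representation keeps every parameter row a valid probability distribution while the underlying weights are moved by unconstrained gradient steps. Below is a minimal Python/NumPy sketch of one such smooth on-line step for the transition matrix, assuming the transition-usage counts for one example have already been collected; all names, shapes, and the step size are illustrative assumptions, not the paper's notation.

    import numpy as np

    def softmax_rows(w):
        # Normalized-exponential: each row of w becomes a probability row.
        e = np.exp(w - w.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def online_step(w_A, counts, eta=0.1):
        # w_A:    (N, N) unconstrained weights; A = softmax_rows(w_A)
        # counts: (N, N) transition usage for one example (expected counts,
        #         or counts along the Viterbi path as an approximation)
        A = softmax_rows(w_A)
        # Gradient of the log-likelihood w.r.t. w_A under this representation:
        # observed count minus row total times current probability.
        grad = counts - counts.sum(axis=1, keepdims=True) * A
        return w_A + eta * grad   # one smooth on-line update

Unlike a Baum-Welch reestimation step, which jumps directly to the ratio of counts, the learning rate makes each update a small, gradual move, and the parameters stay strictly positive and properly normalized throughout.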

