Convergence Properties of a Computational Learning Model for Unknown Markov Chains

Author(s):  
Andreas A. Malikopoulos

The increasing complexity of engineering systems has motivated continuing research on computational learning methods toward making autonomous intelligent systems that can learn how to improve their performance over time while interacting with their environment. These systems need not only to sense their environment, but also to integrate information from the environment into their decision making. The evolution of such systems is modeled as an unknown controlled Markov chain. In previous research, the predictive optimal decision-making (POD) model was developed, aiming to learn in real time the unknown transition probabilities and associated costs over a varying finite time horizon. In this paper, the convergence of POD to the stationary distribution of a Markov chain is proven, thus establishing POD as a robust model for making autonomous intelligent systems. The paper provides the conditions under which POD is valid, together with an interpretation of its underlying structure.
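The two ingredients the abstract refers to, learning unknown transition probabilities from observed transitions and the chain's stationary distribution, can be illustrated with a minimal sketch. This is not the POD model itself; the trajectory, state labels, and power-iteration approach below are illustrative assumptions.

```python
# Illustrative sketch (not the POD algorithm): maximum-likelihood estimation
# of a chain's transition matrix from an observed trajectory, followed by
# power iteration to find the stationary distribution pi with pi = pi P.

def estimate_transition_matrix(trajectory, n_states):
    """ML estimate: P[i][j] = count(i -> j) / count(i -> anything)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for i, j in zip(trajectory, trajectory[1:]):
        counts[i][j] += 1
    P = []
    for row in counts:
        total = sum(row)
        # Fall back to uniform for states never visited
        P.append([c / total if total else 1.0 / n_states for c in row])
    return P

def stationary_distribution(P, iterations=1000):
    """Power iteration: pi_{k+1} = pi_k P converges for an ergodic chain."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy two-state trajectory (hypothetical data)
traj = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1] * 50
P = estimate_transition_matrix(traj, 2)
pi = stationary_distribution(P)
# pi now satisfies pi ~= pi P, i.e. it is (numerically) stationary
```

The fixed point `pi = pi P` is exactly the stationary distribution the convergence result concerns.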



Author(s):  
R. Jamuna

CpG islands (CGIs) play a vital role in genome analysis as genomic markers. Identification of CpG pairs has contributed not only to the prediction of promoters but also to the understanding of the epigenetic causes of cancer. In the human genome [1], wherever the dinucleotide CG occurs, the C nucleotide (cytosine) undergoes chemical modification, and there is a relatively high probability that this modification mutates C into T. For biologically important reasons, this mutation process is suppressed in short stretches of the genome, such as 'start' regions, where CpG dinucleotides are therefore found more frequently than elsewhere [2]. Such regions are called CpG islands. DNA methylation is an effective means by which gene expression is silenced. In normal cells, DNA methylation prevents the expression of imprinted and inactive X-chromosome genes. In cancerous cells, DNA methylation inactivates tumor-suppressor genes as well as DNA repair genes, and can disrupt cell-cycle regulation. Most current methods for identifying CGIs suffer from various limitations and involve substantial human intervention. This paper presents a simple search technique based on data mining of genes with Markov chains. A Markov chain model is applied to study the probability of occurrence of the C-G pair in a given gene sequence. Maximum-likelihood estimators for the transition probabilities of the island model and, analogously, of the background model are developed, and the log-odds ratio computed from them estimates the presence or absence of CpG islands in the given gene, yielding useful information for cancer detection in the human genome.
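The two-model log-odds scheme the abstract describes can be sketched as follows. This is a hedged illustration of the classic approach (one first-order Markov model trained on island-like sequence, one on background, query scored by the log-odds ratio); the training sequences here are toy strings, not real genomic data or the paper's dataset.

```python
# Sketch of log-odds CpG scoring with two first-order Markov models.
# "+" model: trained on island-like (CG-rich) sequence.
# "-" model: trained on background sequence.
from math import log

def transition_probs(seq):
    """ML transition probabilities with +1 pseudocounts to avoid zeros."""
    counts = {a: {b: 1 for b in "ACGT"} for a in "ACGT"}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in "ACGT"}
            for a in "ACGT"}

def log_odds(query, plus, minus):
    """Sum of log(P+(b|a) / P-(b|a)); positive suggests a CpG island."""
    return sum(log(plus[a][b] / minus[a][b])
               for a, b in zip(query, query[1:]))

plus_model = transition_probs("CGCGCGGCGCGCCGCGGCGC")   # toy island-like
minus_model = transition_probs("ATTATAATTTATATAATATT")  # toy background
score = log_odds("GCGCGC", plus_model, minus_model)
# score > 0 here: CG/GC transitions are far likelier under the "+" model
```

In practice both models would be trained on annotated genomic regions, and a sliding window of scores would flag candidate islands.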


Stat ◽  
2021 ◽  
Author(s):  
Hengrui Cai ◽  
Rui Song ◽  
Wenbin Lu

Risks ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 37
Author(s):  
Manuel L. Esquível ◽  
Gracinda R. Guerreiro ◽  
Matilde C. Oliveira ◽  
Pedro Corte Real

We consider a non-homogeneous continuous-time Markov chain model for Long-Term Care with five states: the autonomous state, three dependent states of light, moderate and severe dependence levels, and the death state. For a general approach, we allow for non-null intensities for all returns from higher dependence levels to all lesser dependence levels in the multi-state model. Using data from the 2015 Portuguese National Network of Continuous Care database, as the main research contribution of this paper, we propose a method to calibrate transition intensities with the one-step transition probabilities estimated from data. This allows us to use non-homogeneous continuous-time Markov chains for modeling Long-Term Care. We solve numerically the Kolmogorov forward differential equations in order to obtain continuous-time transition probabilities. We assess the quality of the calibration using the Portuguese life expectancies. Based on reasonable monthly costs for each dependence state, we compute, by Monte Carlo simulation, trajectories of the Markov chain process and derive relevant information for model validation and premium calculation.
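The numerical step the abstract mentions, obtaining continuous-time transition probabilities from the Kolmogorov forward equations, can be sketched on a reduced example. The three-state intensity matrix below (with an absorbing death state) and the explicit-Euler integration are illustrative assumptions, not the paper's five-state model or calibrated intensities.

```python
# Sketch: solve the Kolmogorov forward equations dP/dt = P(t) Q by explicit
# Euler, starting from P(0) = I, for a time-homogeneous toy intensity matrix.
# Rows of Q sum to zero; off-diagonal entries are transition intensities.

def forward_equations(Q, t, steps=10000):
    """Integrate dP/dt = P Q from P(0) = I over [0, t]."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    h = t / steps
    for _ in range(steps):
        P = [[P[i][j] + h * sum(P[i][k] * Q[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return P

# Hypothetical intensities: state 0 = autonomous, 1 = dependent, 2 = death
Q = [[-0.3,  0.2, 0.1],
     [ 0.1, -0.4, 0.3],
     [ 0.0,  0.0, 0.0]]   # death is absorbing
P1 = forward_equations(Q, t=1.0)
# Each row of P1 is a probability distribution over states at time t = 1
```

For the non-homogeneous case, Q would depend on t (e.g. on age) and would simply be re-evaluated inside the integration loop.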


2004 ◽  
Vol 2004 (8) ◽  
pp. 421-429 ◽  
Author(s):  
Souad Assoudou ◽  
Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
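A reduced version of this setup can be sketched in closed form. Under the simplifying assumption of an independent Jeffreys Beta(1/2, 1/2) prior on each row of the 2x2 transition matrix (the paper's joint Jeffreys prior additionally correlates the probabilities, which is what necessitates MCMC), the posterior is conjugate and the estimates follow directly from transition counts. The chain data below is a toy example.

```python
# Sketch: Bayesian posterior means for a binary Markov chain's transition
# probabilities under independent Beta(1/2, 1/2) (Jeffreys) priors per row.
# This conjugate shortcut replaces the MCMC needed for the correlated prior.

def transition_counts(seq):
    """2x2 counts c[i][j] of observed transitions i -> j."""
    c = [[0, 0], [0, 0]]
    for i, j in zip(seq, seq[1:]):
        c[i][j] += 1
    return c

def posterior_mean(n_stay, n_move):
    """Posterior mean of P(move) with a Beta(1/2, 1/2) prior:
    (n_move + 1/2) / (n_stay + n_move + 1)."""
    return (n_move + 0.5) / (n_stay + n_move + 1.0)

chain = [0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1]  # hypothetical observations
c = transition_counts(chain)
p01 = posterior_mean(c[0][0], c[0][1])  # posterior mean of P(0 -> 1)
p10 = posterior_mean(c[1][1], c[1][0])  # posterior mean of P(1 -> 0)
```

With the correlated Jeffreys prior of the paper, the posterior no longer factorizes, and a Metropolis-Hastings or Gibbs sampler over the two probabilities would take the place of these closed-form means.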

