Nonlinear Markov process amplitude EEG model for nonlinear coupling interaction of spontaneous EEG

2000 ◽  
Vol 47 (9) ◽  
pp. 1141-1146 ◽  
Author(s):  
Ou Bai ◽  
M. Nakamura ◽  
A. Ikeda ◽  
H. Shibasaki


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Quan-Lin Li

Abstract: Big networks encompass multiple classes of large-scale networks in many practical areas, such as computer networks, the Internet of Things, cloud computing, manufacturing systems, transportation networks, and healthcare systems. This paper analyzes such big networks, applying mean-field theory and nonlinear Markov processes to construct a broad class of nonlinear continuous-time block-structured Markov processes that can model many practical stochastic systems. First, a nonlinear Markov process is derived from a large number of big networks with weak interactions, where each big network is described as a continuous-time block-structured Markov process. Second, effective algorithms are given for computing the fixed points of the nonlinear Markov process by means of the UL-type RG-factorization. Finally, the Birkhoff center, the locally stable fixed points, Lyapunov functions, and relative entropy are developed to analyze the stability or metastability of the system of weakly interacting big networks, and several interesting open problems are proposed with detailed interpretation. We believe the methodology and results given in this paper can be useful and effective in the study of big networks.
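The fixed points mentioned in the abstract can be illustrated with a minimal sketch: a nonlinear (mean-field) continuous-time Markov chain is one whose generator Q(π) depends on the current distribution π, and a fixed point is a π satisfying π Q(π) = 0. The sketch below uses a plain fixed-point iteration on a small hypothetical 3-state chain, not the paper's UL-type RG-factorization; the generator and its rates are invented for illustration.

```python
import numpy as np

def stationary(Q):
    """Stationary distribution of generator Q: solve pi Q = 0 with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack normalization onto pi Q = 0
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def generator(pi):
    """Hypothetical mean-field generator: the rate into state 2 grows with
    the fraction of the population already in state 2 (weak interaction)."""
    a = 1.0 + 2.0 * pi[2]              # distribution-dependent rate (assumption)
    return np.array([[-1.0,  1.0,      0.0],
                     [ 0.5, -0.5 - a,  a  ],
                     [ 0.3,  0.0,     -0.3]])

# Fixed-point iteration: pi_{k+1} = stationary distribution of Q(pi_k)
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    new = stationary(generator(pi))
    if np.max(np.abs(new - pi)) < 1e-12:
        pi = new
        break
    pi = new
print(pi)  # a fixed point: pi Q(pi) = 0
```

Each iteration solves a *linear* stationary equation for the generator frozen at the current π, so convergence of the outer loop is what makes the problem nonlinear.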


Author(s):  
M. V. Noskov ◽  
M. V. Somova ◽  
I. M. Fedotova

The article proposes a model for forecasting the success of a student's learning. The model is a continuous-time Markov process of the birth-and-death type. The parameters of the process are the intensities of receiving and of assimilating information, where the intensity of assimilating information takes into account the student's attitude toward the subject being studied. By applying the model, one can determine, for each student, the probability of reaching a given level of mastery of the material in the near future. Thus, given an automated information system at the university, the implementation of the model becomes an element of a decision support system for all participants in the educational process. The examples given in the article are results of an experiment conducted at the Institute of Space and Information Technologies of Siberian Federal University under conditions of blended learning, that is, when classroom work is accompanied by independent work with electronic resources.
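The forecasting step described above can be sketched as a transient computation on a birth-death chain: build the tridiagonal generator from the two intensities, exponentiate it over the forecast horizon, and read off the probability of reaching a given mastery level. All numerical values below (the intensities, the number of levels, the horizon) are illustrative assumptions, not parameters from the article.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 0.8, 0.5   # hypothetical intensities: receiving vs. losing material
n = 6                # mastery levels 0..5 (assumed discretization)

# Tridiagonal birth-death generator: up-rate lam, down-rate mu
Q = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        Q[i, i + 1] = lam    # "birth": the student assimilates more material
    if i > 0:
        Q[i, i - 1] = mu     # "death": part of the material is forgotten
    Q[i, i] = -Q[i].sum()    # diagonal makes each row sum to zero

t = 10.0                     # forecast horizon (e.g. weeks)
p = expm(Q * t)[0]           # distribution at time t, starting from level 0
print("P(mastery level >= 3) =", p[3:].sum())
```

A decision-support system would recompute this probability per student, with the assimilation intensity `mu`/`lam` adjusted by the student's measured attitude to the subject.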


2011 ◽  
Vol 3 (6) ◽  
pp. 99-103
Author(s):  
M. P. Rajakumar ◽  
Dr. V. Shanthi

Genetics ◽  
1974 ◽  
Vol 76 (2) ◽  
pp. 367-377
Author(s):  
Takeo Maruyama

ABSTRACT A Markov process (chain) of gene frequency change is derived for a geographically structured model of a population. The population consists of colonies which are connected by migration. Selection operates in each colony independently. It is shown that there exists a stochastic clock that transforms the originally complicated process of gene frequency change into a random walk which is independent of the geographical structure of the population. The time parameter is a local random time that depends on the sample path. In fact, if the alleles are selectively neutral, the time parameter is exactly equal to the sum of the average local genetic variation appearing in the population; otherwise the two are approximately equal. The Kolmogorov forward and backward equations of the process are obtained. In the limit of large population size, a diffusion process is derived. The transition probabilities of the Markov chain and of the diffusion process are obtained explicitly. Certain quantities of biological interest are shown to be independent of the population structure: the fixation probability of a mutant, the sum of the average local genetic variation, and the variation summed over the generations in which the gene frequency in the whole population assumes a specified value.
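The structure-independence of the fixation probability is easy to probe numerically. The Monte Carlo sketch below simulates a neutral allele in a population split into colonies coupled by migration (a simple island-model caricature, not the paper's exact model; all parameter values are illustrative) and estimates the fixation probability, which for a neutral allele should come out close to the initial frequency regardless of the migration rate or number of colonies.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_prob(n_colonies=4, colony_size=30, p0=0.2, m=0.1, reps=1000):
    """Monte Carlo estimate of the fixation probability of a neutral allele
    in a subdivided population. Illustrative parameters, not from the paper."""
    fixed = 0
    for _ in range(reps):
        p = np.full(n_colonies, p0)          # allele frequency in each colony
        while True:
            # migration: each colony's frequency is pulled toward the mean
            p = (1 - m) * p + m * p.mean()
            # neutral Wright-Fisher resampling within each colony
            p = rng.binomial(colony_size, p) / colony_size
            if p.sum() == 0.0:               # allele lost everywhere
                break
            if np.all(p == 1.0):             # allele fixed everywhere
                fixed += 1
                break
    return fixed / reps

est = fixation_prob()
print(est)  # expected to be near p0 = 0.2, independently of m and n_colonies
```

Re-running with different `m` or `n_colonies` leaves the estimate near `p0`, which is the structure-independence the abstract asserts for neutral alleles.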


Author(s):  
UWE FRANZ

We show how classical Markov processes can be obtained from quantum Lévy processes. It is shown that quantum Lévy processes are quantum Markov processes, and sufficient conditions are given for restrictions to subalgebras to remain quantum Markov processes. A classical Markov process (which has the same time-ordered moments as the quantum process in the vacuum state) exists whenever we can restrict to a commutative subalgebra without losing the quantum Markov property. Several examples, including the Azéma martingale, are presented with explicit calculations. In particular, the action of the generator of the classical Markov processes on polynomials or their moments is calculated using Hopf algebra duality.
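To make "the action of the generator on polynomials" concrete, here is a purely classical warm-up (not the Azéma martingale calculation, and using ordinary calculus rather than Hopf algebra duality): the generator of standard Brownian motion, L f = (1/2) f'', applied symbolically to monomials, from which the moment ODEs d/dt E[X_t^n] = E[(L x^n)(X_t)] can be read off.

```python
import sympy as sp

x = sp.symbols('x')

def L(f):
    """Generator of standard Brownian motion: L f = (1/2) f''."""
    return sp.Rational(1, 2) * sp.diff(f, x, 2)

for n in range(5):
    print(f"L x^{n} =", sp.expand(L(x**n)))
# e.g. L x^2 = 1 gives d/dt E[X_t^2] = 1, hence E[X_t^2] = t for X_0 = 0.
```

The quantum construction in the paper replaces this pointwise calculus with the Hopf-algebraic coproduct, but the output has the same shape: the generator evaluated on a polynomial basis.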

