Modeling time series by aggregating multiple fuzzy cognitive maps

2021 ◽  
Vol 7 ◽  
pp. e726
Author(s):  
Tianming Yu ◽  
Qunfeng Gan ◽  
Guoliang Feng

Background A real-world time series is affected by various combinations of influences and consequently exhibits a variety of variation modalities. A single model can hardly reflect the variation characteristics of a time series accurately. Most existing methods focus on numerical prediction of the time series, and forecast uncertainty is addressed by interval prediction. However, little research focuses on making the model interpretable and easily comprehensible to humans. Methods To overcome this limitation, a new prediction modelling methodology based on fuzzy cognitive maps is proposed. The bootstrap method is first adopted to select multiple sub-sequences, so that the different variation modalities are captured in these sub-sequences. Fuzzy cognitive maps are then constructed from these sub-sequences, and these fuzzy cognitive map models are merged by means of granular computing. The established model not only performs well in numerical and interval prediction but also offers better interpretability. Results Experimental studies involving both synthetic and real-life datasets demonstrate the usefulness and satisfactory efficiency of the proposed approach.
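The abstract does not detail the construction or the granular merging step, so the following is only a minimal single-concept Python sketch under assumed simplifications: a one-node FCM update x[t+1] = sigmoid(w·x[t]), contiguous random sub-sequences as a block-bootstrap stand-in, and plain weight averaging in place of granular computing. All function names are illustrative, not the paper's.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    p = min(max(p, 1e-6), 1 - 1e-6)
    return math.log(p / (1.0 - p))

def fit_fcm_weight(series):
    """Least-squares fit of a single-concept FCM update x[t+1] = sigmoid(w * x[t])."""
    num = sum(logit(y) * x for x, y in zip(series, series[1:]))
    den = sum(x * x for x in series[:-1])
    return num / den if den else 0.0

def bootstrap_subsequences(series, n_maps, length, rng):
    """Draw random contiguous sub-sequences (a simple block-bootstrap stand-in)."""
    starts = [rng.randrange(len(series) - length) for _ in range(n_maps)]
    return [series[s:s + length] for s in starts]

rng = random.Random(0)
series = [sigmoid(math.sin(t / 5.0)) for t in range(200)]   # toy series in (0, 1)
weights = [fit_fcm_weight(sub) for sub in bootstrap_subsequences(series, 10, 50, rng)]
w_merged = sum(weights) / len(weights)   # naive averaging in place of granular merging
pred = sigmoid(w_merged * series[-1])    # one-step-ahead numerical forecast
```

Each sub-sequence yields its own weight, so the spread of `weights` also gives a crude interval around the numerical forecast, loosely mirroring the abstract's interval prediction.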

Author(s):  
M. Shamim Khan ◽  
Alex Chong ◽  
Tom Gedeon

Differential Hebbian Learning (DHL) was proposed by Kosko as an unsupervised learning scheme for Fuzzy Cognitive Maps (FCMs). DHL can be used with a sequence of state vectors to adapt the causal link strengths of an FCM. However, it does not guarantee that the FCM learns the sequence, and no concrete procedures for the use of DHL have been developed. In this paper, a formal methodology is proposed for using DHL in the development of FCMs in a decision support context. The four steps in the methodology are: (1) creation of a crisp cognitive map; (2) identification of event sequences for use in DHL; (3) event sequence encoding using DHL; (4) revision of the trained FCM. Feasibility of the proposed methodology is demonstrated with an example involving a dynamic system with feedback, based on a real-life scenario.
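Kosko's DHL rule correlates changes in concept activations: an edge weight moves toward the product of the changes of the two concepts it connects. A minimal sketch of step (3), event-sequence encoding, might look as follows; the learning rate and the exact update schedule are assumptions, since the paper's constants are not given here.

```python
def dhl_train(states, lr=0.1):
    """Differential Hebbian Learning: adapt edge weights w[i][j] from
    co-occurring changes in concept activations across a state sequence."""
    n = len(states[0])
    w = [[0.0] * n for _ in range(n)]
    for prev, curr in zip(states, states[1:]):
        delta = [c - p for p, c in zip(prev, curr)]
        for i in range(n):
            if delta[i] == 0:          # only edges from concepts that changed
                continue
            for j in range(n):
                if i == j:
                    continue
                # move w[i][j] toward the product of the two changes
                w[i][j] += lr * (delta[i] * delta[j] - w[i][j])
    return w

# two concepts that change together across three successive states
states = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
w = dhl_train(states)
```

Concepts that rise (or fall) together accumulate a positive causal link, while opposing changes drive the link negative, which is exactly why DHL alone cannot guarantee that a given sequence is reproduced: it only captures pairwise co-variation.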


Author(s):  
Dang Kien Cuong ◽  
Duong Ton Dam ◽  
Duong Ton Thai Duong

The bootstrap, which this article employs, is a major tool of mathematical statistics for studying and estimating the values of parameters in probability distributions; we also give an overview of the theory of infinite distribution functions. The tools used to address the problems raised in the paper are the mathematical methods of stochastic analysis, drawing on the theory of random processes and multivariate statistics. Observations (realisations of a stationary process) are not independent; dependence in a time series is a relatively simple example of dependent data. Through a simulation study, we found that pseudo-data generated by the bootstrap method always show weaker dependence among observations than the time series they were sampled from. We can therefore conclude that even by resampling blocks instead of single observations, some of the structure of the original sample is lost. A potential difficulty in using likelihood methods for the GEV distribution concerns the regularity conditions required for the usual asymptotic properties of the maximum likelihood estimator to be valid. To estimate a GEV parameter, one can use classical methods of mathematical statistics such as maximum likelihood or least squares, but these all require a certain number of samples for verification. The bootstrap method obviously does not: here we use the limit theorems of probability theory and multivariate statistics to solve the problem even when only one sample of data is available. That is the important practical significance our paper wants to convey. In predictive analysis problems, when the actual data are incomplete or the series is not long enough, the bootstrap can be used to augment the data.
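The claim that block-bootstrap pseudo-data show weaker dependence than the original series can be illustrated directly. The sketch below uses an assumed moving-block bootstrap (not necessarily the paper's exact scheme) on a strongly autocorrelated AR(1) series and compares lag-1 autocorrelations.

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def moving_block_bootstrap(x, block_len, rng):
    """Resample overlapping blocks and concatenate up to the original length."""
    out = []
    while len(out) < len(x):
        s = rng.randrange(len(x) - block_len + 1)
        out.extend(x[s:s + block_len])
    return out[:len(x)]

rng = random.Random(42)
x = [0.0]
for _ in range(2000):                      # AR(1) with strong positive dependence
    x.append(0.9 * x[-1] + rng.gauss(0, 1))

xb = moving_block_bootstrap(x, block_len=5, rng=rng)
rho, rho_b = lag1_autocorr(x), lag1_autocorr(xb)
```

Because dependence is broken wherever two resampled blocks meet, the pseudo-series' autocorrelation falls systematically below the original's, matching the structural loss described above.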


2021 ◽  
Author(s):  
Marzieh Emadi ◽  
Farsad Zamani Boroujeni ◽  
Jamshid Pirgazi

Abstract Recently, with the advancement of high-throughput sequencing, gene regulatory network inference has become an interesting subject in bioinformatics and systems biology. The field faces many challenges, such as noisy data, uncertainty, time-series data with numerous genes but few samples, and time complexity. In recent years, much research has been conducted to tackle these challenges, resulting in different methods for gene regulatory network inference. A number of models have been used for gene regulatory networks, including Boolean networks, Bayesian networks, Markov models, relational networks, state-space models, differential equation models, and artificial neural networks. In this paper, fuzzy cognitive maps are used to model gene regulatory networks because of their dynamic nature and their learning capabilities for handling non-linearity and inherent uncertainty. Fuzzy cognitive maps belong to the family of recurrent networks and are well suited to gene regulatory networks. In this study, Kalman-filtered compressed sensing is used to infer the fuzzy cognitive map of a gene regulatory network. This approach, exploiting the advantages of compressed sensing and Kalman filters, is robust to noise and can learn sparse gene regulatory networks from data with many genes and few samples. In the proposed method, streaming data and prior knowledge can be used in the inference process. Furthermore, compressed sensing finds likely edges and the Kalman filter estimates their weights. The proposed approach uses a novel method to decrease the noise in the data. The proposed method is compared with CSFCM, LASSOFCM, KFRegular, ABC, RCGA, ICLA, and CMI2NI. The results show that the proposed approach is superior to the other approaches in fuzzy cognitive map learning. This behaviour stems from its stability against noise and offers a proper balance between data error and network structure.
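The abstract does not detail the Kalman-filtered compressed-sensing procedure, so as an illustration of the compressed-sensing half only, the sketch below recovers one sparse row of regulatory weights with ISTA (iterative soft-thresholding), a basic L1 solver. The linear model, data sizes, and the `lam`/`step` values are all assumptions, and the Kalman-filter weight-refinement step is omitted.

```python
import math
import random

def soft_threshold(v, lam):
    return math.copysign(max(abs(v) - lam, 0.0), v)

def ista(X, y, lam=0.05, step=0.01, iters=2000):
    """Iterative soft-thresholding: minimise (1/2)||Xw - y||^2 + lam*||w||_1,
    a basic compressed-sensing-style solver that yields sparse weights."""
    n = len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        resid = [sum(wi * xi for wi, xi in zip(w, row)) - yi
                 for row, yi in zip(X, y)]
        grad = [sum(r * row[j] for r, row in zip(resid, X)) for j in range(n)]
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(n)]
    return w

rng = random.Random(1)
true_w = [0.8, 0.0, 0.0, -0.5, 0.0]   # sparse "regulatory" row: 2 of 5 genes act
X = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(40)]
y = [sum(w * x for w, x in zip(true_w, row)) + rng.gauss(0, 0.01) for row in X]
w_hat = ista(X, y)
```

The L1 penalty drives the weights of non-regulating genes toward zero, which is how compressed sensing can pick out likely edges from few samples; in the paper's pipeline, a Kalman filter would then refine the surviving weights.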

