Forecast of Freight Volume in Xi’an Based on Gray GM (1, 1) Model and Markov Forecasting Model

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Fan Yang ◽  
Xiaoying Tang ◽  
Yingxin Gan ◽  
Xindan Zhang ◽  
Jianchang Li ◽  
...  

With the continuous improvement of productivity, the demand for freight transportation is also increasing, and it is difficult to organize freight transportation efficiently when the freight volume is large. Predicting the total volume of goods transported is therefore essential for ensuring efficient and orderly transportation. Aiming to optimize freight-volume forecasts, this paper predicts the freight volume in Xi’an based on the Gray GM (1, 1) model and a Markov forecasting model. Firstly, the Gray GM (1, 1) model is established from freight-volume data for Xi’an from 2000 to 2008. The corresponding time response sequence and the expression for the restored values of Xi’an’s freight volume are then obtained by estimating the model parameters, yielding the gray forecast values of Xi’an’s freight volume from 2009 to 2013. In combination with the Markov chain process, the random sequence is divided into three states. By determining the state transition probability matrix, the probability of the sequence being in each state and the predicted median value corresponding to each state can be obtained. Finally, the revised predicted values of Xi’an’s freight volume from 2009 to 2013 based on the Gray–Markov forecasting model are calculated. It is shown in theory and in practice that the Gray–Markov forecasting model has high accuracy and can provide a policy basis for Xi’an’s traffic management department.
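The GM(1,1) fitting-and-forecasting step described above can be sketched as follows. This is a minimal, generic implementation of the standard GM(1,1) procedure (accumulated generating operation, least-squares estimation of the development coefficient a and gray input b, then inverse accumulation to restore the forecast); the freight-volume series used here is hypothetical, not the paper's actual Xi'an data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit a Gray GM(1,1) model to series x0 and forecast `horizon` steps ahead.

    Returns fitted (restored) values over the sample period plus the forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values (adjacent means)
    # Least-squares estimate of a, b in the gray equation x0[k] + a*z1[k] = b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a      # time response sequence
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])    # inverse AGO: restored values
    return x0_hat

# Hypothetical annual freight-volume series (nine observations, e.g. 2000-2008)
history = [104, 110, 118, 127, 138, 150, 163, 178, 195]
pred = gm11_forecast(history, horizon=5)
print(pred[-5:])   # gray forecast values for the next five years
```

The Markov correction step then classifies the fitting residuals into states and adjusts each gray forecast by the predicted median of its most probable state.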

2013 ◽  
Vol 634-638 ◽  
pp. 819-824
Author(s):  
Zhi Gang Zhang

Firstly, a GM(1,1) model is built to obtain the dynamic baseline of the coal bed methane dynamic productivity of the coal mine area. Secondly, on the basis of the GM(1,1) model, a Markov chain is applied to obtain the state transition probability matrix. Thirdly, the interval of the coal bed methane dynamic productivity of the coal mine area is forecast and analyzed in probabilistic form through system state classification, calculation of the residuals between the true values and the model-fitted values, and standardization of the residual deviations. It is shown in theory and in practice that the forecast results are not only more reliable but also help the decision maker grasp the general development tendency of the coal bed methane dynamic productivity of the coal mine area and make proper decisions. Results show that the Grey–Markov model has higher accuracy than the GM(1,1) model.
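The second step above, estimating a state transition probability matrix from an observed state sequence, can be sketched generically: count one-step transitions between the classified residual states and normalize each row. The state sequence below is hypothetical, purely for illustration.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a Markov state transition probability matrix by counting
    one-step transitions in an observed state sequence (0-based states)."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1      # leave rows of unvisited states at zero
    return counts / row_sums

# Hypothetical residual-state sequence (e.g. 0 = under-, 1 = near-, 2 = over-prediction)
seq = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1]
P = transition_matrix(seq, 3)
print(P)   # each visited row sums to 1
```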


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Shuang Ma ◽  
Dan Dang ◽  
Wenxue Wang ◽  
Yuechao Wang ◽  
Lianqing Liu

Background: Combinatorial drug therapy for complex diseases, such as HSV infection and cancers, has significantly greater efficacy than single-drug treatment. However, one key challenge is how to effectively and efficiently determine the optimal concentrations of combinatorial drugs, because the number of drug combinations increases exponentially with the number of drug types. Results: In this study, a searching method based on a Markov chain is presented to optimize the combinatorial drug concentrations. The search for the optimal drug concentrations is converted into a Markov chain whose states represent all possible combinations of the discretized drug concentrations. The transition probability matrix is updated by comparing the drug responses of adjacent states in the network of the Markov chain, and the drug concentration optimization becomes seeking the state with the maximum value in the stationary distribution vector. Its performance is compared by simulation and biological experiments against five stochastic optimization algorithms as benchmark methods. Both the simulation results and the experimental data demonstrate that the Markov chain-based approach is more reliable and efficient at seeking the global optimum than the benchmark algorithms. Furthermore, the Markov chain-based approach allows all drug-testing experiments to be run in parallel, which greatly reduces the time required for the biological experiments. Conclusion: This article provides a versatile method for combinatorial drug screening, which is of great significance for clinical drug combination therapy.
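The core idea, biasing transitions toward better-responding neighboring concentration combinations and reading the optimum off the stationary distribution, can be sketched on a toy problem. Everything here is an assumption for illustration: a two-drug grid with four discretized levels each, a made-up unimodal response surface, and a simple exponential weighting rule (the paper's actual update rule for the transition probability matrix may differ).

```python
import numpy as np
from itertools import product

# Toy 2-drug problem: 4 discretized concentration levels per drug
states = list(product(range(4), range(4)))

def response(s):
    """Hypothetical drug-response surface with its peak at (2, 1)."""
    return -((s[0] - 2) ** 2 + (s[1] - 1) ** 2)

def neighbors(s):
    for d, step in product(range(2), (-1, 1)):
        t = list(s)
        t[d] += step
        if 0 <= t[d] <= 3:
            yield tuple(t)

# Build a lazy chain whose outgoing probability is biased toward
# neighbors with higher measured response
n = len(states)
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((n, n))
for s in states:
    nb = list(neighbors(s))
    w = np.array([np.exp(response(t) - response(s)) for t in nb])
    w /= w.sum() * 2                 # half the mass moves, half stays put
    P[idx[s], idx[s]] = 0.5
    for t, wt in zip(nb, w):
        P[idx[s], idx[t]] = wt

# Stationary distribution by power iteration; its argmax is the search result
pi = np.full(n, 1.0 / n)
for _ in range(500):
    pi = pi @ P
best = states[int(np.argmax(pi))]
print(best)   # coincides with the response peak (2, 1)
```

Because the uphill bias concentrates the stationary mass around the best-responding state, the argmax recovers the optimum without visiting states sequentially; in the lab, responses for all states can be measured in parallel.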


1969 ◽  
Vol 6 (03) ◽  
pp. 478-492 ◽  
Author(s):  
William E. Wilkinson

Consider a discrete time Markov chain {Z_n} whose state space is the non-negative integers and whose transition probability matrix ‖P_ij‖ possesses the representation P_ij = Σ_r p_r · [coefficient of s^j in (f_r(s))^i], where {p_r}, r = 1, 2, …, is a finite or denumerably infinite sequence of non-negative real numbers satisfying Σ_r p_r = 1, and {f_r(s)}, r = 1, 2, …, is a corresponding sequence of probability generating functions. It is assumed that Z_0 = k, a finite positive integer.
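A chain with this transition structure evolves as a branching process in a random environment: each generation, an environment r is drawn with probability p_r, and every one of the Z_n current individuals independently produces offspring according to the generating function f_r. A minimal simulation sketch, with assumed values for the p_r and concrete offspring distributions standing in for the general pgfs:

```python
import random

# Assumed environment probabilities (sum to 1) and offspring samplers;
# the abstract treats general probability generating functions f_r.
p = [0.6, 0.4]
offspring = [lambda: random.choice([0, 1, 2]),   # f_1: uniform on {0, 1, 2}
             lambda: random.choice([0, 1])]      # f_2: uniform on {0, 1}

def simulate(z0, generations):
    """Run the chain Z_n for the given number of generations from Z_0 = z0."""
    z = z0
    for _ in range(generations):
        r = 0 if random.random() < p[0] else 1   # this generation's environment
        z = sum(offspring[r]() for _ in range(z))
    return z

random.seed(1)
print(simulate(z0=5, generations=20))
```

With these assumed parameters the mean offspring number per individual is 0.6·1 + 0.4·0.5 = 0.8 < 1, so the process is subcritical and dies out almost surely.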


Author(s):  
Shuai Ling ◽  
Shoufeng Ma ◽  
Ning Jia

The rapid development of the economy requires highly efficient and environmentally friendly urban transportation systems. This requirement presents challenges for sustainable urban transportation. Analyzing and understanding transportation-related behaviors provides one approach to dealing with complicated transportation activities. In this study, the management of traffic systems is divided into four levels from a structural and systematic perspective. Several special cases are then investigated from a behavioral perspective, including purchasing behavior toward new energy vehicles, choice behavior toward green travel, and behavioral reactions toward transportation demand management policies. Several management suggestions are proposed to help transportation authorities improve sustainable traffic management.


2021 ◽  
pp. 107754632198920
Author(s):  
Zeinab Fallah ◽  
Mahdi Baradarannia ◽  
Hamed Kharrati ◽  
Farzad Hashemzadeh

This study considers the design of an H∞ sliding mode controller for a singular Markovian jump system described by a discrete-time state-space realization. The system under investigation is subject to both matched and mismatched external disturbances, and the transition probability matrix of the underlying Markov chain is considered to be only partly available. A new sufficient condition is developed in terms of linear matrix inequalities to determine the mode-dependent parameter of the proposed quasi-sliding surface such that the stochastic admissibility of the sliding mode dynamics with a prescribed H∞ performance is guaranteed. Furthermore, the sliding mode controller is designed to ensure that the state trajectories of the system are driven onto the quasi-sliding surface and remain there afterward. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design algorithms.


Author(s):  
Jin Zhu ◽  
Kai Xia ◽  
Geir E Dullerud

This paper investigates the quadratic optimal control problem for constrained Markov jump linear systems with an incomplete mode transition probability matrix (MTPM). Since the original system mode is not accessible, the observed mode is utilized for asynchronous controller design, where the mode observation conditional probability matrix (MOCPM), which characterizes the emission between original modes and observed modes, is assumed to be partially known. An LMI optimization problem is formulated for such constrained hidden Markov jump linear systems with incomplete MTPM and MOCPM. Based on this, a feasible state-feedback controller can be designed with the free-connection weighting matrix method. The desired controller, dependent on the observed mode, is an asynchronous one that minimizes the upper bound of the quadratic cost and satisfies the restrictions on system states and control variables. Furthermore, clustered observation, in which the observed modes are recast into several clusters, is explored to reduce the computational complexity. Numerical examples are provided to illustrate the validity of the proposed design.
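The hidden-mode setup can be illustrated by sampling: the original mode evolves according to the MTPM, while at each step the controller only sees an observed mode emitted according to the MOCPM row of the current original mode. The matrices below are hypothetical examples, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-mode example: MTPM governs the (hidden) original mode,
# MOCPM row i gives P(observed mode | original mode i)
MTPM = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.2, 0.7]])
MOCPM = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.8,  0.1 ],
                  [0.1, 0.1,  0.8 ]])

def sample_modes(steps, start=0):
    """Sample a hidden original-mode trajectory and its observed modes."""
    original, observed = [start], []
    for _ in range(steps):
        observed.append(rng.choice(3, p=MOCPM[original[-1]]))   # emission
        original.append(rng.choice(3, p=MTPM[original[-1]]))    # mode jump
    return original[:-1], observed

orig, obs = sample_modes(50)
```

An asynchronous controller of the kind described above switches its gain on `obs`, not on the inaccessible `orig`.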


2016 ◽  
Vol 138 (6) ◽  
Author(s):  
Thai Duong ◽  
Duong Nguyen-Huu ◽  
Thinh Nguyen

A Markov decision process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment characterized by a time-invariant transition probability matrix. In many real-world scenarios, however, this assumption is not justified, and the nominally optimal strategy might not deliver the expected performance. In this paper, we study the performance of the classic value iteration algorithm for solving an MDP under nonstationary environments. Specifically, the nonstationary environment is modeled as a sequence of time-variant transition probability matrices governed by an adiabatic evolution inspired by quantum mechanics. We characterize the performance of the value iteration algorithm subject to the rate of change of the underlying environment, measured in terms of the convergence rate to the optimal average reward. We show two examples of queuing systems that make use of our analysis framework.
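For reference, the classic value iteration algorithm analyzed above looks as follows in the stationary case (the paper's setting replaces the fixed transition matrix with a slowly varying sequence). The 2-state, 2-action MDP below is a hypothetical example chosen only to make the sketch runnable.

```python
import numpy as np

# Hypothetical MDP: P[a, s, s'] transition probabilities, R[a, s] rewards
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9                                # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to convergence."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V              # Q[a, s] = R[a, s] + γ Σ_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)              # greedy backup over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0) # optimal values and greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(V, policy)
```

Under a nonstationary environment, each sweep would use the current transition matrix P_t, and the question studied above is how closely the iterates can track the moving optimum as a function of how fast P_t drifts.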

