A Markov Decision Process Model for Traffic Prioritisation Provisioning

10.28945/2750 ◽  
2004 ◽  
Author(s):  
Abdullah Gani ◽  
Omar Zakaria ◽  
Nor Badrul Anuar Jumaat

This paper presents an application of the Markov Decision Process (MDP) to the provisioning of traffic prioritisation in best-effort networks. MDP was chosen because it is a standard, general formalism for modelling stochastic, sequential decision problems. Implementing traffic prioritisation involves a series of decisions by which packets are marked and classified before being despatched to their destinations. The application of MDP was driven by the objective of ensuring that higher-priority packets are not delayed by lower-priority ones. MDP is believed to be applicable to improving the arbitration of traffic prioritisation.
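The abstract does not specify the MDP's state or action spaces, so the following is only an illustrative sketch of the idea: a scheduler choosing which priority class to serve, solved by standard value iteration. The queue sizes, arrival probabilities, holding costs, and discount factor are all assumptions, not values from the paper.

```python
# Sketch (assumed model, not the paper's): a two-queue scheduling MDP.
# State: (high_queue_len, low_queue_len), capped at MAX_Q.
# Action: serve the "high" or the "low" priority queue.
# Holding costs penalise delaying high-priority packets more heavily,
# so the optimal policy tends to serve the high queue first.

MAX_Q = 3
P_ARR_HIGH, P_ARR_LOW = 0.3, 0.5   # per-step arrival probabilities (assumed)
COST_HIGH, COST_LOW = 5.0, 1.0     # per-step holding cost per queued packet
GAMMA = 0.9

states = [(h, l) for h in range(MAX_Q + 1) for l in range(MAX_Q + 1)]
actions = ["high", "low"]

def transitions(state, action):
    """Yield (probability, next_state, reward) triples for one step."""
    h, l = state
    # Serve one packet from the chosen queue, if it is non-empty.
    if action == "high" and h > 0:
        h -= 1
    elif action == "low" and l > 0:
        l -= 1
    reward = -(COST_HIGH * h + COST_LOW * l)
    # Independent Bernoulli arrivals to each queue; overflow is dropped.
    for ah, ph in ((1, P_ARR_HIGH), (0, 1 - P_ARR_HIGH)):
        for al, pl in ((1, P_ARR_LOW), (0, 1 - P_ARR_LOW)):
            yield ph * pl, (min(h + ah, MAX_Q), min(l + al, MAX_Q)), reward

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (r + GAMMA * V[ns]) for p, ns, r in transitions(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_policy(V):
    """Extract the deterministic greedy policy from the value function."""
    return {s: max(actions, key=lambda a: sum(
        p * (r + GAMMA * V[ns]) for p, ns, r in transitions(s, a)))
        for s in states}

V = value_iteration()
pi = greedy_policy(V)
```

With these assumed costs, the computed policy serves the high-priority queue whenever it is non-empty, which matches the stated objective that higher-priority packets are not delayed by lower-priority ones.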

2021 ◽  
pp. 1-16
Author(s):  
Pegah Alizadeh ◽  
Emiliano Traversi ◽  
Aomar Osmani

Markov Decision Processes (MDPs) are a powerful tool for planning tasks and sequential decision-making problems. In this work we deal with MDPs with imprecise rewards, often used in situations where the data is uncertain. In this context, we provide algorithms for finding the policy that minimizes the maximum regret. To the best of our knowledge, all regret-based methods proposed in the literature focus on providing an optimal stochastic policy. We introduce for the first time a method to compute an optimal deterministic policy using optimization approaches. Deterministic policies are easily interpretable for users because they provide a unique choice for each state. To better motivate the use of an exact procedure for finding a deterministic policy, we show some (theoretical and experimental) cases where the intuitive idea of using a deterministic policy obtained by “determinizing” the optimal stochastic policy leads to a policy far from the exact deterministic one.


2013 ◽  
Vol 756-759 ◽  
pp. 504-508
Author(s):  
De Min Li ◽  
Jian Zou ◽  
Kai Kai Yue ◽  
Hong Yun Guan ◽  
Jia Cun Wang

Evacuation of a firefighter from a complex fire scene is a challenging problem. In this paper, we discuss a firefighter's evacuation decision-making model in an ad hoc robot network on a fire scene. Because the fire scene is dynamic, the information sensed by the ad hoc robot network also varies dynamically. We therefore adopt a dynamic decision method, the Markov decision process, to model the firefighter's decision-making for evacuation from the fire scene. In this decision-making process, the critical problems are how to define the action space and how to estimate the transition law of the Markov decision process. We discuss these problems in light of the triangular sensor configuration of the ad hoc robot network and conclude by describing a decision-making model for a firefighter's evacuation.
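The abstract names the two modelling problems (action space, transition law) without giving the model itself, so the following is only a hedged sketch of how they might fit together: a one-dimensional corridor MDP where the hazard probabilities stand in for a transition law estimated from the robot network's sensors. The corridor layout, hazard values, rewards, and discount factor are all invented for illustration.

```python
# Hedged sketch (assumed model, not the paper's): a firefighter on a
# corridor of cells 0..N-1 with the exit at cell 0.
# Action space: move "left", "right", or "stay".
# Transition law: moving into cell c is blocked with probability hazard[c],
# a stand-in for probabilities estimated from the sensed fire data.

N = 5
GAMMA = 0.95
hazard = [0.0, 0.1, 0.2, 0.4, 0.3]  # assumed sensed blocking probabilities
actions = ["left", "right", "stay"]

def transitions(c, a):
    """Yield (prob, next_cell, reward). Cell 0 is the absorbing exit."""
    if c == 0:
        yield 1.0, 0, 0.0
        return
    step = {"left": -1, "right": 1, "stay": 0}[a]
    target = min(max(c + step, 0), N - 1)
    if target == c:
        yield 1.0, c, -1.0          # no movement: time penalty only
    else:
        reward = 10.0 if target == 0 else -1.0  # reaching the exit pays off
        yield 1 - hazard[target], target, reward
        yield hazard[target], c, -1.0           # move blocked by fire

def value_iteration(eps=1e-6):
    V = [0.0] * N
    while True:
        delta = 0.0
        for c in range(N):
            best = max(sum(p * (r + GAMMA * V[nc])
                           for p, nc, r in transitions(c, a))
                       for a in actions)
            delta = max(delta, abs(best - V[c]))
            V[c] = best
        if delta < eps:
            return V

V = value_iteration()
policy = [max(actions, key=lambda a: sum(
    p * (r + GAMMA * V[nc]) for p, nc, r in transitions(c, a)))
    for c in range(N)]
```

When the sensors report new hazard values, re-running value iteration yields an updated evacuation policy, which mirrors the dynamic re-planning the abstract motivates.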

