Impact of IP Layer Routing Policy on Multilayer Design [Invited]

2015 ◽  
Vol 7 (3) ◽  
pp. A396 ◽  
Author(s):  
E. Palkopoulou ◽  
O. Gerstel ◽  
I. Stiakogiannakis ◽  
T. Telkamp ◽  
V. López ◽  
...  
2021 ◽  
Author(s):  
Y. Qu ◽  
J. Tantsura ◽  
A. Lindem ◽  
X. Liu

2014 ◽  
Vol 587-589 ◽  
pp. 1854-1857
Author(s):  
Yi Yong Pan

This paper addresses the adaptive reliable shortest path problem, which seeks adaptive en-route guidance that maximizes the probability of arriving on time in stochastic networks. Such a routing policy helps travelers plan their trips against the risk of running late in the face of stochastic travel times. To capture the stochastic nature of travel times, the traffic network is modeled as a discrete stochastic network, and the adaptive reliable shortest path problem is defined uniformly on it. Bellman's Principle, the core of dynamic programming, is shown to hold when the adaptive reliable shortest path is defined by the optimal-reliable routing policy. A successive approximations algorithm is developed to solve the problem, and numerical results on typical transportation networks confirm its validity.
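The successive-approximations idea the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: arc travel times are discrete distributions, and repeated Bellman updates raise the on-time-arrival probability at each node until a fixed point is reached. All data structures and names here are assumptions.

```python
# Sketch (hypothetical data layout): successive approximations for the
# adaptive reliable shortest path, maximizing on-time arrival probability.
def reliable_routing(arcs, nodes, dest, budget):
    """arcs: {(i, j): [(travel_time, prob), ...]} discrete distributions.
    Returns u[i][t] = max probability of reaching dest from node i with
    t time units remaining, and the corresponding next-hop policy."""
    u = {i: [0.0] * (budget + 1) for i in nodes}
    for t in range(budget + 1):
        u[dest][t] = 1.0  # already at the destination: arrival is certain
    policy = {i: [None] * (budget + 1) for i in nodes}
    # Repeat Bellman updates until the value function converges (fixed point).
    changed = True
    while changed:
        changed = False
        for (i, j), dist in arcs.items():
            for t in range(budget + 1):
                # Expected reliability of moving to j, over travel-time outcomes
                # that still fit in the remaining budget t.
                val = sum(p * u[j][t - c] for c, p in dist if c <= t)
                if val > u[i][t] + 1e-12:
                    u[i][t], policy[i][t] = val, j
                    changed = True
    return u, policy
```

Note that the best next hop can depend on the remaining time budget, which is exactly what makes the guidance adaptive rather than a fixed path.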


2020 ◽  
Vol 54 (4) ◽  
pp. 1016-1033 ◽  
Author(s):  
Marlin W. Ulmer

An increasing number of e-commerce retailers offer same-day delivery. To deliver the ordered goods, providers dynamically dispatch a fleet of vehicles transporting the goods from the warehouse to the customers. In many cases, retailers offer different delivery-deadline options, from four-hour delivery down to next-hour delivery. Because of the deadlines, vehicles often deliver only a few orders per trip; the overall number of orders served within the delivery horizon is small and the revenue is low. As a result, many companies currently struggle to conduct same-day delivery cost-efficiently. In this paper, we show how dynamic pricing can substantially increase both revenue and the number of customers served the same day. To this end, we present an anticipatory pricing and routing policy (APRP) that incentivizes customers to select delivery-deadline options that are efficient for the fleet to fulfill, preserving the fleet's flexibility to serve future orders. We model the joint pricing and routing problem as a Markov decision process (MDP). Applying APRP requires the state-dependent opportunity cost of each customer and option; we estimate it with a guided offline value function approximation (VFA) based on state-space aggregation, which approximates the opportunity cost of every state and delivery option with respect to the fleet's flexibility. As an offline method, APRP can determine suitable prices instantly when a customer orders. In an extensive computational study, we compare APRP with a fixed-price policy and with conventional temporal and geographical pricing policies. APRP significantly outperforms the benchmarks, yielding both higher revenue and more same-day customers served.
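The opportunity-cost pricing idea can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a pre-trained VFA stored as a lookup table keyed by aggregated state features, and all feature names, buckets, and numbers are hypothetical.

```python
# Illustrative sketch: price each delivery-deadline option as base price plus
# the marginal opportunity cost estimated from an offline VFA table.
def aggregate(state):
    # Coarsen the fine-grained state into buckets (hypothetical aggregation):
    # remaining hours in the delivery horizon, and fleet load in steps of 5.
    return (state["time_left"] // 60, state["open_orders"] // 5)

def price_options(state, vfa, base_price, options):
    """vfa maps (aggregated_state, option) -> expected future revenue;
    the key (aggregated_state, None) is the value without any commitment."""
    key = aggregate(state)
    value_now = vfa.get((key, None), 0.0)
    prices = {}
    for opt in options:
        value_after = vfa.get((key, opt), 0.0)
        # Opportunity cost: expected future revenue given up by committing
        # the fleet to serve this customer under deadline `opt`.
        opportunity_cost = max(0.0, value_now - value_after)
        prices[opt] = base_price + opportunity_cost
    return prices
```

Because the VFA is trained offline, pricing at order time reduces to a table lookup, which is what allows instant quotes: tight deadlines that erode fleet flexibility end up priced higher than loose ones.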


1998 ◽  
Author(s):  
C. Alaettinoglu ◽  
T. Bates ◽  
E. Gerich ◽  
D. Karrenberg ◽  
D. Meyer ◽  
...  
