Optimization and simulation of fixed-time traffic signal control in real-world applications

2019 ◽ Vol 151 ◽ pp. 826-833
Author(s): Theresa Thunig, Robert Scheffler, Martin Strehler, Kai Nagel


2021 ◽ Vol 22 (2) ◽ pp. 12-18
Author(s): Hua Wei, Guanjie Zheng, Vikash Gayah, Zhenhui Li

Traffic signal control is an important and challenging real-world problem that has recently received a large amount of interest from both the transportation and computer science communities. In this survey, we focus on recent advances in using reinforcement learning (RL) techniques to solve the traffic signal control problem. We classify the known approaches based on the RL techniques they use and review existing models, analysing their advantages and disadvantages. Moreover, we give an overview of the simulation environments and experimental settings that have been developed to evaluate traffic signal control methods. Finally, we explore future directions in the area of RL-based traffic signal control methods. We hope this survey can provide insights to researchers dealing with real-world applications in intelligent transportation systems.


2020 ◽ Vol 34 (01) ◽ pp. 1153-1160
Author(s): Xinshi Zang, Huaxiu Yao, Guanjie Zheng, Nan Xu, Kai Xu, ...

Using reinforcement learning for traffic signal control has attracted increasing interest recently. Various value-based reinforcement learning methods have been proposed to deal with this classical transportation problem and have achieved better performance than traditional transportation methods. However, current reinforcement learning models rely on tremendous amounts of training data and computational resources, which may have bad consequences (e.g., traffic jams or accidents) in the real world. In traffic signal control, some algorithms have been proposed to enable quick learning from scratch, but little attention has been paid to learning by transferring and reusing learned experience. In this paper, we propose a novel framework, named MetaLight, to speed up the learning process in new scenarios by leveraging the knowledge learned from existing scenarios. MetaLight is a value-based meta-reinforcement learning workflow based on the representative gradient-based meta-learning algorithm MAML, which alternates periodically between individual-level adaptation and global-level adaptation. Moreover, MetaLight improves the state-of-the-art reinforcement learning model FRAP for traffic signal control by optimizing its model structure and updating paradigm. Experiments on four real-world datasets show that the proposed MetaLight not only adapts more quickly and stably to new traffic scenarios, but also achieves better performance.
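The alternation between individual-level and global-level adaptation that the abstract describes can be sketched in a few lines. This is a deliberately simplified stand-in: each "scenario" is reduced to a one-parameter quadratic loss with a closed-form gradient, whereas MetaLight adapts the parameters of a FRAP network; the step sizes and task losses here are illustrative assumptions, not values from the paper.

```python
def maml_step(theta, task_optima, alpha=0.1, beta=0.05):
    """One meta-update in the MAML style sketched above.

    Hypothetical simplification: each 'task' (traffic scenario) i has a
    quadratic loss L_i(theta) = (theta - c_i)^2 whose gradient is known
    in closed form; real MetaLight adapts a deep network instead.
    """
    meta_grad = 0.0
    for c in task_optima:
        grad_i = 2.0 * (theta - c)          # scenario-specific gradient
        theta_i = theta - alpha * grad_i    # individual-level adaptation
        meta_grad += 2.0 * (theta_i - c)    # first-order meta-gradient
    # global-level adaptation: aggregate across scenarios
    return theta - beta * meta_grad / len(task_optima)

theta = 0.0
for _ in range(300):
    theta = maml_step(theta, task_optima=[1.0, 3.0])
# theta drifts toward an initialization that adapts quickly to every scenario
```

With quadratic tasks the meta-parameters converge to the mean of the task optima, which is exactly the "good starting point for fast adaptation" intuition behind gradient-based meta-learning.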


ORiON ◽ 2019 ◽ Vol 35 (1) ◽ pp. 57-87
Author(s): SJ Movius, JH Van Vuuren

Fixed-time control and vehicle-actuated control are two distinct types of traffic signal control. The latter switches traffic signals based on detected traffic flows and thus offers more flexibility (appropriate for lighter traffic conditions) than the former, which relies solely on cyclic, predetermined signal phases better suited to heavier traffic conditions. The notion of self-organisation has relatively recently been proposed as an alternative approach to improving traffic signal control, particularly under light traffic conditions, due to its flexible nature and its potential to produce emergent behaviour. The effectiveness of five existing self-organising traffic signal control strategies from the literature and a fixed-time control strategy are compared in this paper within a newly designed agent-based, microscopic traffic simulation model. Various shortcomings of three of these algorithms are identified and algorithmic improvements are suggested to remedy these deficiencies. The relative performance improvements resulting from these algorithmic modifications are then quantified by their implementation in the aforementioned traffic simulation model. Finally, a new self-organising algorithm is proposed that is particularly effective under lighter traffic conditions.
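As a toy illustration of the kind of demand-driven rule such self-organising strategies use, the sketch below gives the green to the approach with the largest queue, subject to a minimum green time and a switching threshold that prevents oscillation. The rule, parameter names, and thresholds are illustrative assumptions, not taken from any of the five strategies the paper compares.

```python
def self_organising_switch(queues, current_green, time_in_green,
                           min_green=5, threshold=1.5):
    """A minimal self-organising switching rule (illustrative only).

    Each approach 'bids' with its queue length; the signal switches only
    when another approach's demand exceeds the currently served one by a
    factor of `threshold`, and never before `min_green` seconds of green.
    Returns the index of the approach that should hold the green next.
    """
    if time_in_green < min_green:
        return current_green                      # respect minimum green
    best = max(range(len(queues)), key=lambda i: queues[i])
    if best != current_green and queues[best] > threshold * max(queues[current_green], 1):
        return best                               # demand justifies a switch
    return current_green

# light traffic: a lone long queue on approach 2 attracts the green
nxt = self_organising_switch([0, 1, 6, 0], current_green=0, time_in_green=10)
```

Because decisions are purely local and demand-driven, green waves can emerge without any central coordination, which is why such rules shine under the light traffic conditions the paper emphasises.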


2019 ◽ Vol 11 (3) ◽ pp. 168781401982590
Author(s): Xu Qu, Tangyi Guo, Jin Guo, Yi Lin, Bin Ran

A fixed-time traffic signal control strategy at an isolated pedestrian crossing tends to reduce traffic capacity and expose vulnerable road users to more danger. To mitigate the negative impact of such a control strategy, this study proposes an optimal real-time signal timing strategy (ORSTS) that protects pedestrians crossing while minimizing the system-wide traffic delay. Using wide-area radar data, the features of vehicles and pedestrians, as well as the passing times of non-motorized vehicles and pedestrians, were captured with conflicts and traffic delay taken into account. Support vector regression was trained to estimate traffic delay. Discrete values of the hypothetical passing time are tested, the minimum delay is identified, and the corresponding hypothetical passing time is recommended as the green time for the crossing. The proposed ORSTS outperformed the fixed-time traffic signal control strategy, reducing traffic delay by 22.3%.
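The "test discrete passing times, pick the minimum predicted delay" step reduces to a grid search over a trained regressor. The sketch below shows that step with a hypothetical delay curve standing in for the paper's support vector regressor; the function names and the shape of the curve are assumptions for illustration.

```python
def recommend_green_time(delay_model, candidates):
    """Grid search in the spirit of the strategy above: evaluate a trained
    delay regressor at each discrete hypothetical passing time and return
    the candidate with the minimum predicted system-wide delay.

    `delay_model` stands in for the trained regressor; here it is any
    callable mapping green time (s) -> predicted delay (veh*s).
    """
    best = min(candidates, key=delay_model)
    return best, delay_model(best)

# illustrative convex delay curve: too little green starves pedestrians,
# too much starves vehicles; the minimum sits at 20 s by construction
model = lambda g: (g - 20) ** 2 + 30
green, delay = recommend_green_time(model, candidates=range(5, 41, 5))
```

In practice the regressor would be fit on the radar-derived features first (e.g. with an SVR), and the candidate set would be bounded by pedestrian clearance-time minimums.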


2021
Author(s): Maxim Friesen, Tian Tan, Jürgen Jasperneite, Jie Wang

Increasing traffic congestion leads to significant costs associated with additional travel delays, with poorly configured signaled intersections a common bottleneck and root cause. Traditional traffic signal control (TSC) systems employ rule-based or heuristic methods to decide signal timings, while adaptive TSC solutions utilize traffic-actuated control logic to increase their adaptability to real-time traffic changes. However, such systems are expensive to deploy and are often not flexible enough to adequately adapt to the volatility of today's traffic dynamics. More recently, this problem has become a frontier topic in the domain of deep reinforcement learning (DRL) and has enabled the development of multi-agent DRL approaches that can operate in environments with several agents present, such as traffic systems with multiple signaled intersections. However, most of these proposed approaches were validated on artificial traffic grids. This paper therefore presents a case study in which real-world traffic data from the town of Lemgo in Germany is used to create a realistic road model in VISSIM. A multi-agent DRL setup, comprising multiple independent deep Q-networks, is applied to the simulated traffic network. Traditional rule-based signal controls, currently deployed at the studied intersections in the real world, are integrated into the traffic model with LISA+ and serve as a performance baseline. Our performance evaluation indicates a significant reduction of traffic congestion when using the RL-based signal control policy over the conventional TSC approach in LISA+. Consequently, this paper reinforces the applicability of RL concepts in the domain of TSC engineering by employing a highly realistic traffic model.
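The "multiple independent deep Q-networks" setup means each intersection learns from its own local observations and rewards, with no shared parameters. The tabular sketch below captures that independence; it is a stand-in for the paper's deep networks, and the state labels, action encoding, and learning rates are illustrative assumptions.

```python
from collections import defaultdict

def independent_q_update(q_tables, obs, actions, rewards, next_obs,
                         alpha=0.1, gamma=0.95):
    """Independent learners in the spirit of the multi-agent setup above:
    each intersection keeps its own Q-function and performs a standard
    Q-learning update from its local observation and reward only.
    Tabular stand-in for per-intersection deep Q-networks.
    """
    for q, s, a, r, s2 in zip(q_tables, obs, actions, rewards, next_obs):
        best_next = max(q[(s2, b)] for b in (0, 1))  # actions: 0=keep, 1=switch
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

# two intersections, one synchronous learning step each
q_tables = [defaultdict(float), defaultdict(float)]
independent_q_update(q_tables, obs=["ns_queue", "ew_queue"],
                     actions=[0, 1], rewards=[1.0, -1.0],
                     next_obs=["ns_clear", "ew_clear"])
```

The appeal of independent learners is that they scale linearly with the number of intersections; the known cost is that each agent sees the others only through a non-stationary environment.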


2018 ◽ Vol 45 (8) ◽ pp. 690-702
Author(s): Mohammad Aslani, Stefan Seipel, Marco Wiering

Traffic signal control can be naturally regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems owing to its large state space. A straightforward approach to addressing this challenge is to control traffic signals based on continuous reinforcement learning. Although continuous reinforcement learning methods have been successful in traffic signal control, they may become unstable and fail to converge to near-optimal solutions. We develop adaptive traffic signal controllers based on continuous residual reinforcement learning (CRL-TSC) that are more stable. The effect of three feature functions is empirically investigated in a microscopic traffic simulation. Furthermore, the effects of departing streets, more actions, and the use of the spatial distribution of vehicles on the performance of CRL-TSCs are assessed. The results show that the best CRL-TSC setup reduces average travel time by 15% compared to an optimized fixed-time controller.
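The stability advantage of residual reinforcement learning comes from descending the squared Bellman residual itself, differentiating through the next-state value as well as the current one. The sketch below shows one such update with linear value approximation; the feature vectors and step sizes are toy stand-ins, not the controller's actual configuration.

```python
def residual_update(w, phi_s, phi_s2, r, alpha=0.1, gamma=0.9):
    """Residual-gradient value update of the kind continuous residual RL
    builds on, for a linear value function V(s) = phi(s) . w.

    A direct TD update would move along +phi_s only; the residual update
    also includes the -gamma*phi_s2 term that TD ignores, which is what
    makes it a true gradient descent on the squared Bellman residual.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    delta = r + gamma * dot(phi_s2, w) - dot(phi_s, w)   # Bellman residual
    return [wi + alpha * delta * (ps - gamma * ps2)
            for wi, ps, ps2 in zip(w, phi_s, phi_s2)]

w = [0.0, 0.0]
w = residual_update(w, phi_s=[1.0, 0.0], phi_s2=[0.0, 1.0], r=1.0)
```

Because it is a genuine gradient method, each step provably shrinks the residual for small enough step sizes, at the cost of slower learning than direct TD, which is the stability/speed trade-off the paper navigates.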


Entropy ◽ 2019 ◽ Vol 21 (8) ◽ pp. 744
Author(s): Song Wang, Xu Xie, Kedi Huang, Junjie Zeng, Zimin Cai

Reinforcement learning (RL)-based traffic signal control has been proven to have great potential in alleviating traffic congestion. The state definition, a key element in RL-based traffic signal control, plays a vital role. However, the data used for state definition in the literature are either coarse or difficult to measure directly with the prevailing detection systems for signal control. This paper proposes a deep reinforcement learning-based traffic signal control method that uses high-resolution event-based data, aiming to achieve cost-effective and efficient adaptive traffic signal control. High-resolution event-based data, which record the time at which each vehicle-detector actuation/de-actuation event occurs, are informative and can be collected directly from vehicle-actuated detectors (e.g., inductive loops) with current technologies. Given the event-based data, deep learning techniques are employed to automatically extract useful features for traffic signal control. The proposed method is benchmarked against two commonly used traffic signal control strategies, i.e., fixed-time control and actuated control, and experimental results reveal that the proposed method significantly outperforms both.
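Before a deep network can extract features, the raw actuation/de-actuation timestamps must be encoded into a fixed-shape input. A common encoding, sketched below under assumptions of our own (the paper does not prescribe this exact scheme), is a binary occupancy grid per detector.

```python
def events_to_occupancy(events, horizon, dt=1.0):
    """Turn high-resolution detector events into a binary occupancy grid,
    the kind of raw state a deep network can consume directly.

    `events` is a list of (t_on, t_off) actuation/de-actuation time pairs
    for one inductive loop; the result marks which dt-sized time steps
    within `horizon` seconds the loop was occupied.
    """
    steps = int(horizon / dt)
    grid = [0] * steps
    for t_on, t_off in events:
        for k in range(steps):
            t = k * dt
            if t_on <= t < t_off:   # loop occupied during this step
                grid[k] = 1
    return grid

# a vehicle occupies the loop from t=2 s to t=4 s within a 6 s window
occ = events_to_occupancy([(2.0, 4.0)], horizon=6.0)
```

Stacking one such row per detector yields a 2-D array that convolutional layers can process, which is how event-level detail stays available to the controller without hand-crafted aggregate features.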

