Distributed constraint optimization, resource allocation and scheduling in large scale agent networks

2011 ◽  
Author(s):  
Panagiotis Karagiannis


Author(s):  
Yanchen Deng ◽  
Runsheng Yu ◽  
Xinrun Wang ◽  
Bo An

Distributed constraint optimization problems (DCOPs) are a powerful model for multi-agent coordination and optimization, where information and control are by nature distributed among multiple agents. Sampling-based algorithms are important incomplete techniques for solving medium-scale DCOPs. However, they use tables to exactly store all the information (e.g., costs, confidence bounds) needed to facilitate sampling, which limits their scalability. This paper tackles the limitation by incorporating deep neural networks into DCOP solving for the first time and presents a neural-based sampling scheme built upon regret-matching. In the algorithm, each agent trains a neural network to approximate the regret related to its local problem and performs sampling according to the estimated regret. Furthermore, to ensure exploration, we propose a regret rounding scheme that rounds small regret values up to positive numbers. We theoretically establish a regret bound for our algorithm, and extensive evaluations indicate that it can scale up to large-scale DCOPs and significantly outperform state-of-the-art methods.
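The following is a minimal sketch of the general idea described above (a learned regret estimator, regret-matching sampling, and a rounding floor for exploration), not the authors' implementation; names such as RegretNet, round_regret, and sample_value, and the particular floor value, are illustrative assumptions.

```python
# Sketch: regret-matching-style sampling with a learned regret estimator.
import torch
import torch.nn as nn

class RegretNet(nn.Module):
    """Tiny MLP mapping an encoding of the agent's local context to one
    estimated regret value per assignment in its domain."""
    def __init__(self, context_dim: int, domain_size: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, domain_size),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return self.net(context)

def round_regret(regret: torch.Tensor, floor: float = 1e-3) -> torch.Tensor:
    """Regret rounding: clip negative regrets to zero, then lift small
    regrets to a small positive floor so every value keeps some
    sampling probability (exploration)."""
    positive = torch.clamp(regret, min=0.0)
    return torch.where(positive < floor, torch.full_like(positive, floor), positive)

def sample_value(model: RegretNet, context: torch.Tensor) -> int:
    """Regret-matching step: sample an assignment with probability
    proportional to its (rounded) estimated regret."""
    with torch.no_grad():
        probs = round_regret(model(context))
        probs = probs / probs.sum()
    return int(torch.multinomial(probs, num_samples=1).item())
```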


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
Duan Peibo ◽  
Zhang Changsheng ◽  
Zhang Bin

This paper presents a new distributed constraint optimization algorithm called LSPA, which can be used to solve large-scale distributed constraint optimization problems (DCOPs). Unlike existing algorithms, which rely only on access to local information, a new criterion called local stability is defined and used to decide which agent should change its value next. The proposal of local stability opens a new research direction: refining an initial solution by finding the key agents whose assignment changes strongly affect the global solution. In addition, an initial solution can be constructed more quickly, without repeated assignments or conflicts. To execute a parallel search, LSPA reaches its final solution by repeatedly computing the local stability of compatible agents. Experimental evaluation shows that LSPA outperforms several state-of-the-art incomplete distributed constraint optimization algorithms, obtaining better solutions within reasonable time.
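Below is a minimal sketch of the "pick the least locally stable agent next" idea. The cost model and the stability definition (gap between current and best reachable local cost) are illustrative assumptions, not the exact criterion used by LSPA.

```python
# Sketch: choosing the next agent to re-assign by a local stability gap.
from typing import Dict, Callable, List, Tuple

Assignment = Dict[str, int]
# cost_fns[agent] maps (my_value, full_assignment) -> local cost
CostFn = Callable[[int, Assignment], float]

def stability_gap(agent: str, domain: List[int], assignment: Assignment,
                  cost_fn: CostFn) -> float:
    """Gap between the agent's current local cost and the best local cost it
    could reach by switching values while its neighbors stay fixed.
    A large gap marks an unstable ('key') agent worth re-assigning."""
    current = cost_fn(assignment[agent], assignment)
    best = min(cost_fn(v, assignment) for v in domain)
    return current - best

def pick_next_agent(domains: Dict[str, List[int]], assignment: Assignment,
                    cost_fns: Dict[str, CostFn]) -> Tuple[str, float]:
    """Return the agent with the largest stability gap and that gap."""
    gaps = {a: stability_gap(a, domains[a], assignment, cost_fns[a])
            for a in assignment}
    agent = max(gaps, key=gaps.get)
    return agent, gaps[agent]
```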


Author(s):  
Fukui Li ◽  
Jingyuan He ◽  
Mingliang Zhou ◽  
Bin Fang

Local search algorithms are widely applied to solving large-scale distributed constraint optimization problems (DCOPs). The distributed stochastic algorithm (DSA) is a typical local search algorithm for DCOPs. However, DSA has drawbacks, including a tendency to fall into local optima and unfairness in assignment choice. This paper presents a novel local search algorithm named VLSs to address these issues. In VLSs, each agent samples its assignment according to a probability distribution over values, enabling it to choose other promising values. In addition, each agent alternates between a greedy choice among multiple parallel solutions, which reduces the chance of falling into local optima, and a variance adjustment mechanism that periodically guides the search toward a relatively good initial solution. We prove the soundness of the variance adjustment mechanism and give a theoretical explanation of the impact of greedy selection among multiple parallel solutions. The experimental results show the superiority of VLSs over state-of-the-art DCOP algorithms.
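The sketch below illustrates two ingredients the abstract describes: sampling a value with probability tied to its local cost, and periodically keeping the best of several parallel solutions. The function names and the softmax-style weighting are illustrative assumptions, not the exact VLSs rules.

```python
# Sketch: probabilistic value selection and greedy choice over parallel solutions.
import math
import random
from typing import Dict, List, Callable

Assignment = Dict[str, int]
# (agent, value, full_assignment) -> local cost of taking that value
LocalCost = Callable[[str, int, Assignment], float]

def sample_value(agent: str, domain: List[int], assignment: Assignment,
                 local_cost: LocalCost, temperature: float = 1.0) -> int:
    """Choose a value at random, giving lower-cost values higher probability,
    so the agent is not locked into the strictly greedy choice."""
    costs = [local_cost(agent, v, assignment) for v in domain]
    weights = [math.exp(-c / temperature) for c in costs]
    total = sum(weights)
    return random.choices(domain, weights=[w / total for w in weights], k=1)[0]

def keep_best(solutions: List[Assignment],
              global_cost: Callable[[Assignment], float]) -> Assignment:
    """Greedy step over parallel solutions: keep the one with the lowest
    global cost and continue the search from it."""
    return min(solutions, key=global_cost)
```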


2020 ◽  
Vol 53 (2) ◽  
pp. 2634-2641
Author(s):  
Vinicius Lima ◽  
Mark Eisen ◽  
Konstantinos Gatsis ◽  
Alejandro Ribeiro
