Identifying key drivers of wildfires in the contiguous US using machine learning and game theory interpretation

2021 ◽  
Author(s):  
Sally S.‐C. Wang ◽  
Yun Qian ◽  
L. Ruby Leung ◽  
Yang Zhang

Author(s):  
Lorenzo Barberis Canonico ◽  
Christopher Flathmann ◽  
Nathan McNeese

There is an ever-growing literature on the power of prediction markets to harness "the wisdom of the crowd" from large groups of people. However, traditional prediction markets are not designed in a human-centered way, which often restricts their own potential. This creates an opportunity to apply a cognitive science perspective to enhancing the collective intelligence of the participants. We therefore propose a new model for prediction markets that integrates human factors, cognitive science, game theory, and machine learning to maximize collective intelligence. We do this by first identifying the connections between prediction markets and collective intelligence, then using human factors techniques to analyze our design, and finally showing the practical ways in which our design enables artificial intelligence to complement human intelligence.


2019 ◽  
Vol 71 (1) ◽  
pp. 7-34 ◽  
Author(s):  
Atsushi Kajii ◽  
Stephen Morris

This paper presents a simple framework that allows us to survey and relate some different strands of the game theory literature. We describe a “canonical” way of adding incomplete information to a complete information game. This framework allows us to give a simple “complete theory” interpretation (Kreps in Game theory and economic modelling. Clarendon Press, Oxford, 1990) of standard normal form refinements such as perfection, and to relate refinements both to the “higher-order beliefs literature” (Rubinstein in Am Econ Rev 79:385–391, 1989; Monderer and Samet in Games Econ Behav 1:170–190, 1989; Morris et al. in Econ J Econ Soc 63:145–157, 1995; Kajii and Morris in Econ J Econ Soc 65:1283–1309, 1997a) and the “payoff uncertainty approach” (Fudenberg et al. in J Econ Theory 44:354–380, 1988; Dekel and Fudenberg in J Econ Theory 52:243–267, 1990).
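For readers outside this literature, a brief sketch of the objects involved may help; the notation below is ours and follows standard usage in the robustness literature rather than the paper's exact definitions.

```latex
A complete information game is $G = \bigl(N, (A_i)_{i \in N}, (g_i)_{i \in N}\bigr)$,
with finite action sets $A_i$ and payoffs $g_i \colon A \to \mathbb{R}$,
where $A = \prod_{i \in N} A_i$. An incomplete information elaboration
$\mathcal{U} = \bigl(N, (A_i), (T_i), P, (u_i)\bigr)$ adds type spaces $T_i$,
a common prior $P$ on $T = \prod_{i \in N} T_i$, and state-dependent payoffs
$u_i \colon A \times T \to \mathbb{R}$. Roughly, $\mathcal{U}$ is an
$\varepsilon$-elaboration of $G$ when
\[
  P\bigl(\{\, t \in T : u_i(\cdot, t) = g_i \text{ for every player } i \,\}\bigr)
  \;\ge\; 1 - \varepsilon ,
\]
so payoffs can differ from those of $G$ only on an event of probability at most
$\varepsilon$; refinement and robustness questions then ask which equilibria of
$G$ survive in all such elaborations as $\varepsilon \to 0$.
```

(Kajii and Morris's precise definition additionally requires each player to know their own payoff function on that high-probability event; the sketch above omits that detail.)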


Spine ◽  
2020 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Michael L. Martini ◽  
Sean N. Neifert ◽  
Eric K. Oermann ◽  
Jeffrey T. Gilligan ◽  
Robert J. Rothrock ◽  
...  

2011 ◽  
Vol 45 (1) ◽  
pp. 41-56 ◽  
Author(s):  
Dimiter Ialnazov ◽  
Nikolay Nenovsky

2008 ◽  
Vol 33 ◽  
pp. 259-283 ◽  
Author(s):  
I. Rezek ◽  
D. S. Leslie ◽  
S. Reece ◽  
S. J. Roberts ◽  
A. Rogers ◽  
...  

In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and, as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
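As a concrete illustration of the moderation idea described in the abstract, the sketch below is our own illustration, not the authors' code: the stag-hunt payoffs, the Dirichlet(1, 1) prior, the horizon, and the sample counts are assumptions chosen for exposition. It contrasts standard fictitious play, which best-responds to the empirical average of the opponent's past play, with a moderated rule that plays each action with the posterior probability of it being a best response, integrating over a Dirichlet posterior on the opponent's mixed strategy.

```python
# Illustrative sketch only (not the authors' code): standard fictitious play
# versus a "moderated" variant that integrates over a Dirichlet posterior on
# the opponent's mixed strategy, in a symmetric 2x2 stag-hunt game where
# (Stag, Stag) is payoff-dominant and (Hare, Hare) is risk-dominant.
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoffs A[i, j]: i = own action, j = opponent action.
# Actions: 0 = Stag, 1 = Hare.
A = np.array([[4.0, 0.0],
              [3.0, 2.0]])

def standard_response(counts):
    # Best response to the empirical average (posterior mean) of opponent play.
    p = counts / counts.sum()
    return int(np.argmax(A @ p))

def moderated_response(counts, n_samples=100):
    # Play each action with the posterior probability that it is a best
    # response, integrating over Dirichlet(counts) instead of using its mean.
    samples = rng.dirichlet(counts, size=n_samples)   # opponent strategy draws
    best = np.argmax(samples @ A.T, axis=1)           # best response per draw
    probs = np.bincount(best, minlength=2) / n_samples
    return int(rng.choice(2, p=probs))

def final_profile(response_fn, rounds=100):
    counts = [np.ones(2), np.ones(2)]   # Dirichlet(1, 1) prior for each player
    acts = (1, 1)
    for _ in range(rounds):
        acts = (response_fn(counts[0]), response_fn(counts[1]))
        counts[0][acts[1]] += 1         # player 0 tracks player 1's actions
        counts[1][acts[0]] += 1
    return acts

for name, rule in [("standard", standard_response), ("moderated", moderated_response)]:
    hits = sum(final_profile(rule) == (0, 0) for _ in range(100))
    print(f"{name} fictitious play: {hits}/100 runs end at the payoff-dominant (Stag, Stag)")
```

From a symmetric start the mean-based rule locks onto the risk-dominant (Hare, Hare) profile, while the moderated rule retains some probability of settling on the payoff-dominant (Stag, Stag) profile. The exact fractions depend on the assumed payoffs, prior, and horizon, so they illustrate the mechanism rather than reproduce the paper's result.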

