On Similarities between Inference in Game Theory and Machine Learning

2008 ◽  
Vol 33 ◽  
pp. 259-283 ◽  
Author(s):  
I. Rezek ◽  
D. S. Leslie ◽  
S. Reece ◽  
S. J. Roberts ◽  
A. Rogers ◽  
...  

In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in doing so is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case and show how insights from game theory can be used to derive two improved mean-field variational learning algorithms. We first show that the standard update rule of mean-field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule and show that this results in fictitious variational play, an improved mean-field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative-action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
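The smooth best response at the heart of fictitious play can be sketched in a few lines. The toy example below is a minimal illustration, not the authors' moderated algorithm: two players repeatedly play a stag-hunt-style coordination game, each tracking an empirical belief about the other's actions and responding with a logit (smoothed) best response. The payoff matrix, temperature, and priors are all assumed for illustration.

```python
import math
import random

# Payoff matrix for a symmetric 2x2 coordination game (row player's payoffs).
# Action 0 is payoff-dominant, action 1 is risk-dominant (stag-hunt style).
PAYOFF = [[4.0, 0.0],
          [3.0, 2.0]]

def smooth_best_response(belief, tau=0.5):
    """Logit (smoothed) best response to a belief over the opponent's actions."""
    utils = [sum(PAYOFF[a][b] * belief[b] for b in range(2)) for a in range(2)]
    m = max(utils)
    exps = [math.exp((u - m) / tau) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

def fictitious_play(steps=2000, seed=0):
    rng = random.Random(seed)
    # Each player keeps counts of the *other* player's past actions
    # (initialized to a uniform pseudo-count prior).
    counts = [[1.0, 1.0], [1.0, 1.0]]
    for _ in range(steps):
        actions = []
        for p in range(2):
            total = sum(counts[p])
            belief = [c / total for c in counts[p]]       # empirical average
            probs = smooth_best_response(belief)
            actions.append(0 if rng.random() < probs[0] else 1)
        counts[0][actions[1]] += 1
        counts[1][actions[0]] += 1
    # Player 0's final empirical belief about player 1.
    return [c / sum(counts[0]) for c in counts[0]]

belief = fictitious_play()
```

The moderated variant described in the abstract would replace the point-estimate belief with an integral over a posterior (e.g. Dirichlet) on opponent strategies before smoothing.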

2019 ◽  
Author(s):  
Nalin Leelatian ◽  
Justine Sinnaeve ◽  
Akshitkumar M. Mistry ◽  
Sierra M. Barone ◽  
Kirsten E. Diggins ◽  
...  

Abstract Recent developments in machine learning have implemented dimensionality reduction and clustering tools to classify the cellular composition of patient-derived tissue in multi-dimensional, single-cell studies. Current approaches, however, require prior knowledge of either categorical clinical outcomes or cell type identities. These algorithms are not well suited for application in tumor biology, where clinical outcomes can be continuous and censored and cell identities may be novel and plastic. Risk Assessment Population IDentification (RAPID) is an unsupervised machine learning algorithm that identifies single-cell phenotypes and assesses clinical risk stratification as a continuous variable. Single-cell mass cytometry evaluated 34 different phospho-proteins, transcription factors, and cell identity proteins in tumor tissue resected from patients bearing IDH wild-type glioblastomas. RAPID identified and characterized multiple biologically distinct tumor cell subsets that independently and continuously stratified patient outcome. RAPID is broadly applicable for single cell studies where atypical cancer and immune cells may drive disease biology and treatment responses.
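The overall shape of such a pipeline, unsupervised phenotype discovery followed by continuous risk stratification, can be caricatured in a few lines. This is a minimal sketch with synthetic data: plain k-means stands in for RAPID's actual clustering, and the cells, patients, and outcomes are all fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for mass-cytometry data: 400 cells x 5 markers,
# drawn from two latent phenotypes that must be found without labels.
cells = np.vstack([rng.normal(0.0, 1.0, (200, 5)),
                   rng.normal(3.0, 1.0, (200, 5))])
patient = rng.integers(0, 10, size=len(cells))   # which patient each cell came from
outcome = rng.normal(size=10)                    # continuous outcome, not a class label

def kmeans(X, iters=50):
    """Plain 2-cluster k-means as a stand-in for unsupervised phenotype discovery."""
    centers = X[[0, len(X) - 1]].astype(float)   # deterministic, well-separated init
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels

labels = kmeans(cells)

# Abundance of phenotype 0 in each patient, then its correlation with the
# continuous outcome: risk stratification treated as a continuous variable.
abundance = np.array([(labels[patient == p] == 0).mean() for p in range(10)])
r = np.corrcoef(abundance, outcome)[0, 1]
```

A real analysis would use density-based clustering on high-dimensional cytometry data and censored-survival statistics rather than a simple correlation.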


2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Christophe Grojean ◽  
Ayan Paul ◽  
Zhuoni Qian

Abstract The associated production of a $b\overline{b}$ pair with a Higgs boson could provide an important probe of both the size and the phase of the bottom-quark Yukawa coupling, $y_b$. However, the signal is shrouded by several background processes, including the irreducible $Zh$, $Z \to b\overline{b}$ background. We show that the analysis of kinematic shapes provides a concrete prescription for separating the $y_b$-sensitive production modes from both the irreducible and the QCD-QED backgrounds using the $b\overline{b}\gamma\gamma$ final state. We draw a page from game theory and use Shapley values to make Boosted Decision Trees interpretable in terms of kinematic measurables, providing physics insights into the variances in the kinematic shapes of the different channels that help us complete this feat. Adding interpretability to the machine learning algorithm opens up the black box and allows us to cherry-pick only those kinematic variables that matter most in the analysis. We resurrect the hope of constraining the size and, possibly, the phase of $y_b$ using kinematic shape studies of $b\overline{b}h$ production with the full HL-LHC data and at the FCC-hh.
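Shapley values themselves are simple to compute exactly when the number of features is small. The toy model below is hypothetical and not the BDT from this analysis: it brute-forces the Shapley formula with a baseline-substitution value function (the convention popularized by the SHAP library) and illustrates the efficiency property, whereby attributions sum to the difference between the model output at the point and at the baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions. Features absent from a coalition are set
    to their baseline value (one common value-function choice)."""
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# A toy "classifier score" over three hypothetical kinematic variables:
# one purely additive feature plus an interacting pair.
model = lambda z: 3.0 * z[0] + 2.0 * z[1] * z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# Efficiency: sum(phi) equals f(x) - f(baseline) = 5.0, and the interaction
# term's credit is split evenly between features 1 and 2.
```

For real BDTs over many kinematic variables this exponential enumeration is replaced by polynomial-time tree-specific algorithms.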


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1138
Author(s):  
Mattia Miotto ◽  
Lorenzo Monacelli

We present ToloMEo (TOpoLogical netwOrk Maximum Entropy Optimization), a program implemented in C and Python that exploits a maximum entropy algorithm to evaluate network topological information. ToloMEo can study any system defined on a connected network whose nodes can assume N discrete values by approximating the system's probability distribution with a Potts Hamiltonian on a graph. The software computes entropy through a thermodynamic integration from the mean-field solution to the final distribution. The nature of the algorithm guarantees that the evaluated entropy is variational (i.e., it always provides an upper bound to the exact entropy). The program also performs machine learning, inferring the system's behavior and providing the probability of unknown states of the network. These features make our method very general and applicable to a broad class of problems. Here, we focus on three case studies: (i) an agent-based model of a minimal ecosystem defined on a square lattice, where we show how topological entropy captures a crossover between hunting behaviors; (ii) an example of image processing, where, starting from discretized pictures of cell populations, we extract information about the ordering and interactions between cell types and reconstruct the most likely positions of cells when data are missing; and (iii) an application to recurrent neural networks, in which we measure the information stored in different realizations of the Hopfield model, extending our method to describe dynamical out-of-equilibrium processes.
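The variational (upper-bound) property mentioned above can be seen from subadditivity of entropy: a factorized, mean-field-style distribution with the correct single-site marginals always has entropy at least that of the true joint. The snippet below illustrates this on a tiny Potts model small enough to enumerate exactly; the graph, coupling, and temperature are arbitrary choices, and this is not ToloMEo's thermodynamic-integration code.

```python
import itertools
import math

# Potts model on a small connected graph: q states per node, coupling J.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle with a pendant node
q, J, beta, nodes = 3, 1.0, 0.8, 4

def energy(s):
    """Potts Hamiltonian: -J for every edge whose endpoints agree."""
    return -J * sum(s[i] == s[j] for i, j in edges)

# Exact Boltzmann distribution by enumeration (feasible only for tiny graphs).
states = list(itertools.product(range(q), repeat=nodes))
weights = [math.exp(-beta * energy(s)) for s in states]
Z = sum(weights)
p = [w / Z for w in weights]

H_exact = -sum(pi * math.log(pi) for pi in p)

# Entropy of the product of single-site marginals: by subadditivity this is
# always an upper bound on the exact entropy -- the kind of bound a
# mean-field starting point provides before thermodynamic integration.
H_mf = 0.0
for i in range(nodes):
    for a in range(q):
        m = sum(pi for s, pi in zip(states, p) if s[i] == a)
        H_mf -= m * math.log(m)
```

ToloMEo avoids the exponential enumeration by integrating the entropy along a path from the tractable mean-field solution to the full distribution.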


2021 ◽  
pp. 165-275
Author(s):  
Kazuyuki Tanaka

Abstract We review sublinear modeling in probabilistic graphical models by statistical mechanical informatics and statistical machine learning theory. Our statistical mechanical informatics schemes are based on advanced mean-field methods, including loopy belief propagation. This chapter explores how phase transitions appear in loopy belief propagation for prior probabilistic graphical models. The frameworks are mainly explained for loopy belief propagation in the Ising model, which is one of the elementary versions of probabilistic graphical models. We also extend the schemes to quantum statistical machine learning theory. Our framework can provide sublinear modeling based on momentum-space renormalization group methods.
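Loopy belief propagation on the Ising model is compact enough to sketch directly. The following is a generic sum-product implementation on a 4-cycle, not the chapter's code; the coupling, field, and iteration count are arbitrary illustrative choices. Messages are updated in parallel and node beliefs (approximate marginals) are read off at the end.

```python
import math

# Ising model on a loopy graph (a 4-cycle), states {-1, +1}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, J, h, beta = 4, 0.4, 0.2, 1.0
neighbors = {i: [] for i in range(n)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

spin = [-1.0, +1.0]           # state index 0 -> -1, index 1 -> +1
# messages[(i, j)][s]: message from node i to node j about j taking state s
messages = {(i, j): [0.5, 0.5] for i in range(n) for j in neighbors[i]}

for _ in range(100):          # parallel message updates
    new = {}
    for (i, j) in messages:
        out = []
        for sj in range(2):
            total = 0.0
            for si in range(2):
                # pairwise coupling plus the sender's local field
                psi = math.exp(beta * (J * spin[si] * spin[sj] + h * spin[si]))
                prod = 1.0
                for k in neighbors[i]:
                    if k != j:
                        prod *= messages[(k, i)][si]
                total += psi * prod
            out.append(total)
        z = sum(out)
        new[(i, j)] = [x / z for x in out]
    messages = new

# Node beliefs from the local field and all incoming messages.
beliefs = []
for i in range(n):
    b = [math.exp(beta * h * spin[s]) for s in range(2)]
    for k in neighbors[i]:
        b = [b[s] * messages[(k, i)][s] for s in range(2)]
    z = sum(b)
    beliefs.append([x / z for x in b])
```

With a ferromagnetic coupling and a positive field, every belief should tilt toward the +1 state; near a phase transition (larger beta*J) these parallel updates can oscillate or converge to symmetry-broken fixed points, which is the phenomenon the chapter analyzes.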


2018 ◽  
Author(s):  
C.H.B. van Niftrik ◽  
F. van der Wouden ◽  
V. Staartjes ◽  
J. Fierstra ◽  
M. Stienen ◽  
...  

2020 ◽  
pp. 1-12
Author(s):  
Li Dongmei

English text-to-speech conversion is a key topic in modern computer technology research. Its main difficulty is that feature recognition during text-to-speech conversion introduces large errors, which makes the conversion algorithm hard to apply in a working system. To improve the efficiency of English text-to-speech conversion, this article builds on a machine learning approach: after the original speech waveform is labeled with pitch marks, the prosody is modified through PSOLA, and the C4.5 algorithm is used to train a decision tree for judging the pronunciation of polyphones. To evaluate the performance of pronunciation discrimination based on part-of-speech rules and of HMM-based prosody hierarchy prediction in speech synthesis systems, this study constructed a system model. In addition, the waveform stitching method and PSOLA are used to synthesize the sound. For words whose main stress cannot be discriminated from morphological structure, labeling can be learned by machine learning methods. Finally, this study evaluates and analyzes the performance of the algorithm through control experiments. The results show that the proposed algorithm performs well and has practical value.
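The C4.5 step, choosing a context attribute to disambiguate a polyphone, can be illustrated with a toy example. The training rows below are hypothetical (a word like "read" disambiguated by part-of-speech and tense tags), and only C4.5's attribute-selection criterion (gain ratio) is shown, not full tree induction.

```python
import math
from collections import Counter

# Hypothetical training rows for a polyphone:
# (part-of-speech of context, tense tag) -> pronunciation label.
rows = [
    ({"pos": "verb", "tense": "past"},    "/red/"),
    ({"pos": "verb", "tense": "present"}, "/ri:d/"),
    ({"pos": "noun", "tense": "none"},    "/ri:d/"),
    ({"pos": "verb", "tense": "past"},    "/red/"),
    ({"pos": "verb", "tense": "present"}, "/ri:d/"),
    ({"pos": "noun", "tense": "none"},    "/ri:d/"),
]

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def gain_ratio(rows, attr):
    """C4.5's split criterion: information gain normalized by split entropy."""
    base = entropy([y for _, y in rows])
    groups = {}
    for x, y in rows:
        groups.setdefault(x[attr], []).append(y)
    n = len(rows)
    gain = base - sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = entropy([x[attr] for x, _ in rows])
    return gain / split_info if split_info else 0.0

# The attribute with the highest gain ratio becomes the next tree node.
best = max(["pos", "tense"], key=lambda a: gain_ratio(rows, a))
```

Here the tense tag separates the pronunciations perfectly, so it wins the split; gain ratio (rather than raw information gain, as in ID3) is what keeps C4.5 from favoring attributes with many values.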


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system places certain requirements on intelligent scoring, and the most difficult stage of intelligent scoring in English testing is scoring English compositions with an intelligent model. To improve the intelligence of English composition scoring, this study combines machine learning algorithms with intelligent image recognition technology and proposes an improved MSER-based character candidate region extraction algorithm together with a convolutional neural network-based pseudo-character region filtering algorithm. In addition, to verify the feasibility of the proposed algorithm, the performance of the model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The research results show that the proposed algorithm has practical value and can be applied to English assessment and online homework evaluation systems.
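Between MSER region proposal and CNN-based pseudo-character filtering, candidate regions are commonly pre-filtered by simple geometry. The sketch below is an assumed heuristic stage, not the paper's algorithm: candidates are plain bounding-box tuples, and the thresholds are illustrative.

```python
# After a detector such as MSER proposes candidate regions (here represented
# as hypothetical (x, y, w, h) bounding boxes), obviously non-character
# regions can be discarded cheaply before a learned filter runs.

def plausible_character(box, img_w=800, img_h=600,
                        min_side=8, max_aspect=4.0, max_area_frac=0.2):
    """Keep boxes whose size and aspect ratio are plausible for a glyph."""
    x, y, w, h = box
    if w < min_side or h < min_side:
        return False                      # too small to be a character
    if max(w, h) / min(w, h) > max_aspect:
        return False                      # too elongated (a line or rule)
    if w * h > max_area_frac * img_w * img_h:
        return False                      # covers most of the image
    return True

candidates = [(10, 10, 20, 30), (0, 0, 790, 590), (5, 5, 2, 40), (100, 50, 18, 25)]
kept = [b for b in candidates if plausible_character(b)]
```

Only the survivors would then be passed to the CNN that separates true characters from pseudo-character regions.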


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and many other developing countries. Unfortunately, many people have lost their lives to it. Every year, approximately 390 million dengue infections occur around the world, of which about 500,000 are severe and 25,000 result in death. Many factors can contribute to dengue transmission, such as temperature, humidity, precipitation, and inadequate public health infrastructure. In this paper, we propose a method to perform predictive analytics on a dengue dataset using KNN, a machine-learning algorithm. This analysis would help predict future cases and could save many lives.
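A minimal KNN classifier of the kind described needs only a distance function and a majority vote. The weather features and labels below are fabricated for illustration (real features would be standardized so that rainfall does not dominate the distance), and k = 3 is an arbitrary choice.

```python
import math
from collections import Counter

# Hypothetical weekly records: (temperature C, humidity %, rainfall mm) -> outbreak?
train = [
    ((31.0, 85.0, 120.0), 1), ((30.0, 80.0, 100.0), 1), ((29.5, 82.0, 140.0), 1),
    ((24.0, 55.0, 10.0),  0), ((22.0, 50.0, 5.0),   0), ((25.0, 60.0, 20.0),  0),
]

def knn_predict(x, train, k=3):
    """Vote among the k nearest training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

pred = knn_predict((30.5, 83.0, 110.0), train)   # a hot, humid, rainy week
```

In a real study the choice of k, the distance metric, and per-feature scaling would all be validated against held-out data.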

