The Sample Complexity of Up-to-ε Multi-dimensional Revenue Maximization

2021 ◽  
Vol 68 (3) ◽  
pp. 1-28
Author(s):  
Yannai A. Gonczarowski ◽  
S. Matthew Weinberg

We consider the sample complexity of revenue maximization for multiple bidders in unrestricted multi-dimensional settings. Specifically, we study the standard model of additive bidders whose values for heterogeneous items are drawn independently. For any such instance and any ε > 0, we show that it is possible to learn an ε-Bayesian Incentive Compatible auction whose expected revenue is within ε of the optimal ε-BIC auction from only polynomially many samples. Our fully nonparametric approach is based on ideas that hold quite generally and completely sidestep the difficulty of characterizing optimal (or near-optimal) auctions for these settings. Therefore, our results easily extend to general multi-dimensional settings, including valuations that are not necessarily even subadditive, and arbitrary allocation constraints. For the cases of a single bidder and many goods, or a single parameter (good) and many bidders, our analysis yields exact incentive compatibility (and for the latter also computational efficiency). Although the single-parameter case is already well understood, our corollary for this case slightly extends the state of the art.

Author(s):  
Prabir Bhattacharya ◽  
Minzhe Guo

Content delivery is a key technology on the Internet for achieving large-scale, low-latency, reliable, and intelligent data delivery. Replica placement (RP) is a key mechanism in content delivery systems for achieving efficient and effective content delivery. This work proposes a novel decentralized algorithm for replica placement in peer-assisted content delivery networks that simultaneously accounts for peer incentives. By applying techniques from algorithmic mechanism design, the authors show the incentive compatibility of the proposed algorithm. Experiments were conducted to validate the properties of the proposed method, and comparisons were made with state-of-the-art RP algorithms.


2020 ◽  
Vol 17 (2) ◽  
pp. 62-70
Author(s):  
Chenghao Guo ◽  
Zhiyi Huang ◽  
Xinzhi Zhang

Author(s):  
Céline Hocquette ◽  
Stephen H. Muggleton

Predicate invention in Meta-Interpretive Learning (MIL) is generally based on a top-down approach, in which the search for a consistent hypothesis starts from the positive examples as goals. We consider augmenting top-down MIL systems with a bottom-up step during which the background knowledge is generalised with an extension of the immediate consequence operator for second-order logic programs. This new method provides a way to perform extensive predicate invention useful for feature discovery. We demonstrate that this method is complete with respect to a fragment of dyadic datalog. We prove theoretically that this method reduces the number of clauses to be learned by the top-down learner, which in turn can reduce the sample complexity. We formalise an equivalence relation for predicates, which is used to eliminate redundant predicates. Our experimental results suggest that pairing the state-of-the-art MIL system Metagol with an initial bottom-up step can significantly improve learning performance.
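The immediate consequence operator mentioned above can be illustrated on ordinary (first-order, ground) datalog. The sketch below is hypothetical and greatly simplified, not the authors' second-order extension: a program is a list of (head, body) pairs of ground atoms, and iterating the operator to a fixed point yields the least model, the basis of bottom-up derivation.

```python
def t_p(program, facts):
    """One application of the immediate consequence operator T_P:
    return facts plus every head whose body is satisfied by facts."""
    derived = set(facts)
    for head, body in program:
        if all(atom in facts for atom in body):
            derived.add(head)
    return derived

def least_fixed_point(program, facts=frozenset()):
    """Iterate T_P from the given facts until nothing new is derived."""
    current = set(facts)
    while True:
        nxt = t_p(program, current)
        if nxt == current:
            return current
        current = nxt

# Example: two parent facts (rules with empty bodies) and one
# grandparent rule over ground atoms.
program = [
    ("parent(a,b)", []),
    ("parent(b,c)", []),
    ("grandparent(a,c)", ["parent(a,b)", "parent(b,c)"]),
]
model = least_fixed_point(program)
# model contains both parent facts and the derived grandparent(a,c)
```

In the paper's setting the operator is lifted to second-order programs so that the bottom-up step can invent new predicates; this first-order sketch only shows the fixed-point mechanics.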


2014 ◽  
Vol 35 ◽  
pp. 1460390
Author(s):  
SIMEONE DUSSONI

The MEG experiment started taking data in 2009, searching for the Standard-Model-suppressed decay μ → e + γ, which, if observed, would reveal physics beyond the Standard Model. It uses state-of-the-art detectors optimized to operate under very high beam intensity while rejecting as much background as possible. Data taking ended in August 2013, and an upgrade R&D program has started to push the experimental sensitivity further. The present upper limit on the decay branching ratio (BR), obtained with the subset of data from the 2009–2011 runs, is presented, together with a description of the key features of the upgraded detector.


2020 ◽  
Vol 34 (10) ◽  
pp. 13905-13906
Author(s):  
Rohan Saphal ◽  
Balaraman Ravindran ◽  
Dheevatsa Mudigere ◽  
Sasikanth Avancha ◽  
Bharat Kaul

Reinforcement learning algorithms are sensitive to hyper-parameters and require tuning and tweaking for specific environments to improve performance. Ensembles of reinforcement learning models, on the other hand, are known to be much more robust and stable. However, training multiple models independently on an environment suffers from high sample complexity. We present a methodology to create multiple models from a single training instance, to be used in an ensemble, through directed perturbation of the model parameters at regular intervals. This allows a single model to converge to several local minima during the optimization process as a result of the perturbation. By saving the model parameters at each such instance, we obtain multiple policies during training that are ensembled during evaluation. We evaluate our approach on challenging discrete and continuous control tasks and also discuss various ensembling strategies. Our framework is substantially more sample-efficient, computationally inexpensive, and is seen to outperform state-of-the-art (SOTA) approaches.
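The snapshot-and-perturb pattern the abstract describes can be sketched on a toy problem. The example below is hypothetical and not the authors' RL implementation: a single parameter vector is trained, perturbed at regular intervals so it settles into different solutions, a snapshot is saved at the end of each phase, and the snapshots are ensembled at evaluation time by averaging their outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta, x, y):
    # Gradient of the squared error for a linear model y ~ theta @ x.
    return 2 * (theta @ x - y) * x

x, y = np.array([1.0, 2.0]), 3.0
theta = rng.normal(size=2)
snapshots = []

for step in range(300):
    theta -= 0.05 * loss_grad(theta, x, y)
    if (step + 1) % 100 == 0:               # end of a training phase
        snapshots.append(theta.copy())       # save the converged snapshot
        theta += rng.normal(scale=0.5, size=2)  # directed perturbation

def ensemble_predict(snapshots, x):
    # Average the predictions of all saved snapshots.
    return float(np.mean([t @ x for t in snapshots]))

print(ensemble_predict(snapshots, x))  # close to the target 3.0
```

The key property, mirrored here, is that only one training run is needed: the perturbation after each snapshot restarts exploration from the current parameters rather than from scratch, which is where the sample-efficiency gain over independently trained ensemble members comes from.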


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Hui Zeng ◽  
Xiuqing Wang ◽  
Yu Gu

This paper presents an effective local image region description method, called the CS-LMP (Center-Symmetric Local Multilevel Pattern) descriptor, and its application to image matching. The CS-LMP operator requires no exponential computations, so the CS-LMP descriptor can encode the differences of the local intensity values using multiple quantization levels without increasing the dimension of the descriptor. Compared with binary/ternary-pattern-based descriptors, the CS-LMP descriptor has better descriptive ability and computational efficiency. Extensive image-matching experiments verify the effectiveness of the proposed CS-LMP descriptor compared with other existing state-of-the-art descriptors.
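The multilevel idea can be illustrated with a simplified center-symmetric pattern. The sketch below is hypothetical and not the authors' exact CS-LMP definition: each of the 4 opposite-pixel differences in a 3×3 window is quantized into several signed levels (rather than a single binary bit) using fixed thresholds, so finer intensity differences are encoded without raising the number of comparisons.

```python
import numpy as np

def cs_multilevel_pattern(window, thresholds=(4, 16)):
    """Code for a 3x3 window: each center-symmetric neighbour pair
    contributes a signed magnitude level, packed into one integer."""
    # The 8 neighbours of the centre, ordered so that index i and
    # index i+4 are center-symmetric pairs.
    n = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
         window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    base = 2 * len(thresholds) + 1   # number of quantization levels
    code = 0
    for i in range(4):
        d = int(n[i]) - int(n[i + 4])
        level = sum(abs(d) > t for t in thresholds)   # magnitude level
        signed = level if d >= 0 else -level          # keep the sign
        code = code * base + (signed + len(thresholds))  # shift to >= 0
    return code

w = np.array([[10, 10, 50],
              [10, 30, 10],
              [90, 10, 10]], dtype=np.uint8)
print(cs_multilevel_pattern(w))
```

With two thresholds each pair takes one of 5 levels, so the window code fits in 5^4 values, mirroring how a multilevel quantization refines the binary CS-LBP code without adding comparisons.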


Synthese ◽  
2021 ◽  
Author(s):  
Philippe van Basshuysen

Against the orthodox view of the Nash equilibrium as "the embodiment of the idea that economic agents are rational" (Aumann, 1985, p 43), some theorists have proposed 'non-classical' concepts of rationality in games, arguing that rational agents should be capable of improving upon inefficient equilibrium outcomes. This paper considers some implications of these proposals for economic theory, by focusing on institutional design. I argue that revisionist concepts of rationality conflict with the constraint that institutions should be designed to be incentive-compatible, that is, that they should implement social goals in equilibrium. To resolve this conflict, proponents of revisionist concepts face a choice between three options: (1) reject incentive compatibility as a general constraint, (2) deny that individuals interacting through the designed institutions are rational, or (3) accept that their concepts do not cover institutional design. I critically discuss these options and I argue that a more inclusive concept of rationality, e.g. the one provided by Robert Sugden's version of team reasoning, holds the most promise for the non-classical project, yielding a novel argument for incentive compatibility as a general constraint.


2020 ◽  
Vol 35 (33) ◽  
pp. 2043005
Author(s):  
Fernanda Psihas ◽  
Micah Groh ◽  
Christopher Tunnell ◽  
Karl Warburton

Neutrino experiments study the least understood of the Standard Model particles by observing their direct interactions with matter or searching for ultra-rare signals. The study of neutrinos typically requires overcoming large backgrounds, elusive signals, and small statistics. The introduction of state-of-the-art machine learning tools to solve analysis tasks has had a major impact on these challenges in neutrino experiments across the board. Machine learning algorithms have become an integral tool of neutrino physics, and their development is of great importance to the capabilities of next-generation experiments. An understanding of the roadblocks, both human and computational, and the challenges that still exist in the application of these techniques is critical to their proper and beneficial utilization in physics applications. This review presents the current status of machine learning applications for neutrino physics in terms of the challenges and opportunities that lie at the intersection of these two fields.
