Changes in the Threshold Uncertainty in a Simultaneous Subscription Game

2014 ◽ Vol 04 (04) ◽ pp. 263-269
Author(s): Timothy J. Gronberg, Hui-Chun Peng

2010 ◽ Vol 94 (11-12) ◽ pp. 848-861
Author(s): Stefano Barbieri, David A. Malueg

2020 ◽ Vol 48 (6) ◽ pp. 751-777
Author(s): Abdul H. Kidwai, Angela C. M. de Oliveira

Threshold common-pool resources (TCPRs), such as fisheries or groundwater reserves, face irreversible damage if harvesting exceeds a sustainability threshold. Uncertainty about the threshold for sustainable use or the number of resource users can exacerbate the overharvesting problem. Policy makers may therefore seek to reduce threshold or group size uncertainty in TCPRs. Overall, we find that reducing threshold and group size uncertainty (moving from high to low uncertainty) increases expected earnings from the resource. However, complete elimination of group size uncertainty reduces expected earnings. Furthermore, the impact of group size uncertainty on earnings varies by the level of threshold uncertainty. Moving from high to low group size uncertainty increases earnings at low levels of threshold uncertainty but not at high levels of threshold uncertainty. Taken together, we find that reducing threshold uncertainty is beneficial while tackling group size uncertainty requires a more nuanced approach, highlighting the importance of a joint analysis.
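The core mechanism described above — earnings collapse when total harvest exceeds an uncertain threshold, so wider threshold uncertainty depresses expected earnings — can be illustrated with a minimal Monte Carlo sketch. All parameter values here are hypothetical and do not reproduce the experimental design of the cited study:

```python
import random

def expected_earnings(harvest_per_player, n_players,
                      threshold_low, threshold_high,
                      trials=10_000, seed=0):
    """Monte Carlo expected earnings when the sustainability threshold
    is uniform on [threshold_low, threshold_high]. Illustrative model:
    the resource collapses (zero earnings) whenever total harvest
    exceeds the realized threshold."""
    rng = random.Random(seed)
    total = harvest_per_player * n_players
    payoff_sum = 0.0
    for _ in range(trials):
        threshold = rng.uniform(threshold_low, threshold_high)
        payoff_sum += total if total <= threshold else 0.0
    return payoff_sum / trials

# Low vs. high threshold uncertainty around the same mean threshold (100),
# with total harvest fixed at 5 * 18 = 90:
low_unc  = expected_earnings(18, 5, threshold_low=95, threshold_high=105)
high_unc = expected_earnings(18, 5, threshold_low=60, threshold_high=140)
```

Under the narrow interval the threshold never falls below the total harvest, so earnings are safe; under the wide interval the same harvest profile risks collapse, lowering expected earnings — consistent with the finding that reducing threshold uncertainty is beneficial.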


Author(s): Alexander Rader, Ionela G Mocanu, Vaishak Belle, Brendan Juba

Robust learning in expressive languages with real-world data remains a challenging task. Many conventional methods appeal to heuristics without any assurances of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called “implicit” learning has shown tremendous promise in obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework preserves the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear-programming objective constraints significantly outperforms an explicit approach in practice.
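The "implicit" idea — answering queries directly from data rather than constructing an explicit learned formula — can be sketched in a toy form for interval-valued (noisy) examples and a linear threshold query. This is a hedged illustration of the general flavor only; the function names, the box-based worst-case check, and the acceptance rule are assumptions for exposition, not the paper's algorithm:

```python
def entails(weights, bound, box):
    """Does w·x >= bound hold for EVERY point x in the interval box?
    box is a list of (lo, hi) per dimension. The worst case of a linear
    function over a box puts each coordinate at lo when its weight is
    nonnegative and at hi when it is negative."""
    worst = sum(w * (lo if w >= 0 else hi)
                for w, (lo, hi) in zip(weights, box))
    return worst >= bound

def pac_decide(weights, bound, interval_examples, epsilon=0.1):
    """Accept the linear threshold query if at least a (1 - epsilon)
    fraction of the noisy interval examples entail it. No explicit
    representation of the data is ever learned (implicit, PAC-style
    decision)."""
    hits = sum(entails(weights, bound, box) for box in interval_examples)
    return hits >= (1 - epsilon) * len(interval_examples)
```

For example, the query x1 + x2 >= 4 is entailed by the box [1, 2] × [3, 4] (its worst case is 1 + 3 = 4) but not by [0, 1] × [0, 1], and `pac_decide` tolerates the occasional failing example according to `epsilon`.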


Author(s): Astrid Dannenberg, Andreas Löschel, Gabriele Paolacci, Christiane Reif, Alessandro Tavoni

Extremes ◽ 2006 ◽ Vol 9 (2) ◽ pp. 87-106
Author(s): Andrea Tancredi, Clive Anderson, Anthony O’Hagan
