Multiplicative Duality Theory in Production Economics

Author(s):
Walter Briec, Paola Ravelojaona

Author(s):
V.M. Bautin

The paper is dedicated to Grigory M. Loza, an outstanding scientist, agrarian economist and VASKhNIL academician who made a great contribution to the development of domestic agricultural economics. The author emphasizes his role and outlines the activities he carried out at the Russian State Agrarian University – Moscow Timiryazev Agricultural Academy and in the VASKhNIL Department of Farm Production Economics and Organization.


ABSTRACT The study was conducted in South Gujarat to examine the production economics of tomato. A multistage random sampling technique was employed to select 120 tomato farmers from the Kaparada, Mandvi and Vyara talukas of Valsad, Surat and Tapi districts, respectively. A net income of 1.57 lakh/ha, together with a high output-input ratio of 3.25, showed the economic viability of the crop in the study area. It was suggested that timely supply of credit and a crop insurance scheme could further encourage growers to take up tomato production.
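The two reported figures jointly imply the underlying cost and gross return, under the common convention that the output-input ratio is gross return divided by total cost (the article's exact definition is not given here, so that convention, and the derived figures, are assumptions):

```python
# Back-of-the-envelope check, assuming output-input ratio = gross / cost.
# The reported figures (lakh per hectare) come from the abstract; the
# implied cost and gross return are derived, not reported.
net_income = 1.57   # net income, lakh/ha (reported)
ratio = 3.25        # output-input ratio (reported)

# net = gross - cost and gross = ratio * cost  =>  cost = net / (ratio - 1)
cost = net_income / (ratio - 1.0)
gross = ratio * cost
print(f"implied cost ~ {cost:.2f} lakh/ha, gross ~ {gross:.2f} lakh/ha")
```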


1989
Vol 65 (3)
pp. 243-264
Author(s):
Robert G. Chambers

2021
Vol 36
Author(s):
Sergio Valcarcel Macua, Ian Davies, Aleksi Tukiainen, Enrique Munoz de Cote

Abstract We propose a fully distributed actor-critic architecture, named diffusion-distributed actor-critic (Diff-DAC), with application to multitask reinforcement learning (MRL). During the learning process, agents communicate their value and policy parameters to their neighbours, diffusing the information across a network of agents with no need for a central station. Each agent can only access data from its local task, but aims to learn a common policy that performs well for the whole set of tasks. The architecture is scalable, since the computational and communication cost per agent depends on the number of neighbours rather than on the overall number of agents. We derive Diff-DAC from duality theory and provide novel insights into the actor-critic framework, showing that it is actually an instance of the dual-ascent method. We prove almost sure convergence of Diff-DAC to a common policy under general assumptions that hold even for deep neural network approximations. Under more restrictive assumptions, we also prove that this common policy is a stationary point of an approximation of the original problem. Numerical results on multitask extensions of common continuous control benchmarks demonstrate that Diff-DAC stabilises learning and has a regularising effect that induces higher performance and better generalisation properties than previous architectures.
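The adapt-then-combine diffusion mechanism the abstract describes can be sketched minimally: each agent takes a gradient step on its own local objective, then averages parameters with its immediate neighbours. The quadratic surrogate tasks, ring topology, mixing weights and step size below are all illustrative assumptions standing in for the paper's actor-critic updates, not the Diff-DAC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents, each with a local quadratic surrogate
# task f_i(w) = ||w - c_i||^2 standing in for agent i's RL objective.
N, dim = 6, 3
centres = rng.normal(size=(N, dim))   # task-specific optima (made-up data)
params = rng.normal(size=(N, dim))    # each agent's policy parameters

# Ring topology: each agent communicates only with its two neighbours,
# so per-agent cost scales with the number of neighbours, not with N.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

step = 0.01
for _ in range(500):
    params = params - step * 2.0 * (params - centres)  # adapt: local gradient step
    params = W @ params                                # combine: diffuse to neighbours

# Agents approach a common policy near the average of the task optima.
consensus = params.mean(axis=0)
spread = np.linalg.norm(params - consensus, axis=1).max()
```

With a constant step size the agents settle close to, but not exactly at, consensus; the small residual spread is the usual steady-state bias of diffusion strategies, which a decaying step size would remove.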

