Performance Measure Properties and the Effects of Incentive Contracts

2006 ◽  
Author(s):  
Jan Bouwens ◽  
Laurence van Lent

2010 ◽  
Vol 85 (6) ◽  
pp. 1921-1949 ◽  
Author(s):  
Merle Ederhof

ABSTRACT: This study examines discretionary bonus payments by firms to senior-level executives. Interpreting discretionary bonuses as the result of implicit incentive contracts, I develop an analytical model that includes both a contractible and a non-contractible performance measure. The model yields the primary hypothesis that discretionary bonuses occur when the outcome of the contractible measure is either low or high, but not when the contractible outcome falls in the middle range. Based on a sample collected from public sources, I find empirical support for the notion that discretionary bonuses are paid based on non-contractible performance measures that are related to future financial performance. Moreover, discretionary bonus payments occur significantly more often when the contractible performance measure falls in the tails of its distribution. In contrast, I do not find support for the predictions that discretionary bonus payments are related to the manipulability of the contractible performance measures or to the power of the executives within their companies.


2006 ◽  
Vol 18 (1) ◽  
pp. 55-75 ◽  
Author(s):  
Jan Bouwens ◽  
Laurence van Lent

Using data from a third-party survey on compensation practices at 151 Dutch firms, we show that less noisy or distorted performance measures and higher cash bonuses are associated with improved employee selection and better-directed effort. Specifically, (1) an increase in the cash bonus increases the perceived selection effects of incentive contracts, but does not independently affect the perceived amount and direction of effort that employees deliver, and (2) performance measure properties directly impact both effort and the selection functioning of incentive contracts. These results hold after controlling for an array of incentive contract design characteristics and for differences in organizational context. Our estimation procedures address several known problems with using secondary datasets.


2011 ◽  
Author(s):  
Yih-teen Lee ◽  
Alfred Stettler ◽  
John Antonakis

2019 ◽  
Author(s):  
Erick Pusck Wilke ◽  
Benny Kramer Costa ◽  
Otávio Bandeira De Lamônica Freire ◽  
Manuel Portugal Ferreira

CFA Digest ◽  
2003 ◽  
Vol 33 (1) ◽  
pp. 51-52
Author(s):  
Frank T. Magiera

2019 ◽  
Author(s):  
Guanglei Cui ◽  
Alan P. Graves ◽  
Eric S. Manas

Relative binding affinity prediction is a critical component of computer-aided drug design, and a significant amount of effort has been dedicated to developing rapid and reliable in silico methods. However, robust assessment of their performance remains a complicated issue, as it requires a performance measure applicable in the prospective setting and, more importantly, a true null model that objectively defines the expected performance of random prediction. Although many performance metrics, such as the correlation coefficient (r2), mean unsigned error (MUE), and root mean square error (RMSE), are frequently used in the literature, a true and non-trivial null model has yet to be identified. To address this problem, here we introduce an interval estimate as an additional measure, namely the prediction interval (PI), which can be estimated from the error distribution of the predictions. The benefits of using an interval estimate are that (1) it provides the uncertainty range of the predicted activities, which is important in prospective applications, and (2) a true null model with a well-defined PI can be established. We provide one such example, termed the Gaussian Random Affinity Model (GRAM), which is based on the empirical observation that the affinity change in a typical lead optimization effort tends to be normally distributed, N(0, s). Having an analytically defined PI that depends only on the variation in the activities, GRAM should in principle allow us to compare the performance of relative binding affinity prediction methods in a standard way, which is ultimately critical to measuring the progress made in algorithm development.
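The abstract's core idea, estimating a prediction interval from the error distribution and comparing it against a Gaussian null, can be sketched in a few lines. The function names and the sample errors below are illustrative assumptions, not the authors' implementation; the sketch assumes roughly Gaussian errors and uses a standard normal-quantile interval.

```python
import statistics

def prediction_interval(errors, z=1.96):
    """Approximate 95% prediction interval from the empirical error
    distribution of a method's predictions (assumes ~Gaussian errors)."""
    mu = statistics.mean(errors)
    sd = statistics.stdev(errors)
    return (mu - z * sd, mu + z * sd)

def gram_interval(activity_sd, z=1.96):
    """PI under a GRAM-like null: an uninformative predictor's errors
    inherit the spread of the activities themselves, N(0, s), so the
    interval depends only on the variation in the activities."""
    return (-z * activity_sd, z * activity_sd)

# Hypothetical per-compound prediction errors (e.g., kcal/mol)
errors = [0.3, -0.5, 0.8, -0.2, 0.1, -0.6, 0.4, 0.0, -0.3, 0.5]
lo, hi = prediction_interval(errors)
null_lo, null_hi = gram_interval(activity_sd=1.5)

# A method adds value only if its PI is narrower than the null's
print((hi - lo) < (null_hi - null_lo))  # True for this toy data
```

The comparison at the end is the point of the GRAM construction: because the null interval is analytically defined, "narrower than the null PI" gives an objective bar that r2 or RMSE alone do not provide.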


2008 ◽  
Author(s):  
Aaron Lowen ◽  
M. Ryan Haley ◽  
Nancy J. Burnett
