Open Source Cross-Sectional Asset Pricing

Author(s):  
Andrew Y. Chen ◽  
Tom Zimmermann
2021 ◽  
Vol 2021 (037) ◽  
pp. 1-66

We provide data and code that successfully reproduce nearly all cross-sectional stock return predictors. Our 319 characteristics draw from previous meta-studies, but we differ by comparing our t-stats to the original papers' results. For the 161 characteristics that were clearly significant in the original papers, 98% of our long-short portfolios find t-stats above 1.96. For the 44 characteristics that had mixed evidence, our reproductions find t-stats of 2 on average. A regression of reproduced t-stats on original long-short t-stats finds a slope of 0.90 and an R2 of 83%. Mean returns are monotonic in predictive signals at the characteristic level. The remaining 114 characteristics were insignificant in the original papers or are modifications of the originals created by Hou, Xue, and Zhang (2020). These remaining characteristics are almost always significant if the original characteristic was also significant.
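The headline comparison in this abstract is a simple cross-sectional regression of reproduced long-short t-stats on the original papers' t-stats. A minimal sketch of that calculation, using synthetic t-stats in place of the 161 real characteristics (the data-generating numbers below are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical original long-short t-stats for "clearly significant" signals
t_orig = rng.uniform(2.0, 8.0, size=161)
# Hypothetical reproduced t-stats: slope near 0.9 plus noise, mimicking the fit
t_repro = 0.9 * t_orig + rng.normal(0.0, 0.5, size=161)

# OLS of reproduced t-stats on original t-stats
slope, intercept = np.polyfit(t_orig, t_repro, 1)
resid = t_repro - (slope * t_orig + intercept)
r2 = 1.0 - resid.var() / t_repro.var()

# Share of reproductions clearing the conventional 1.96 threshold
share_significant = float(np.mean(t_repro > 1.96))
```

With the paper's real data, the same regression yields the reported slope of 0.90 and R2 of 83%.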


Author(s):  
Ying Tay Lee ◽  
Devinaga Rasiah ◽  
Ming Ming Lai

Human rights and fundamental freedoms such as economic, political, and press freedoms vary widely from country to country. This variation creates both opportunity and risk in investment decisions. Thus, this study examines whether the explanatory power of the capital asset pricing model could be improved by including human rights movement indices in the model. The sample comprises 495 stocks listed on Bursa Malaysia, covering the sampling period from 2003 to 2013. The model is estimated by pooled ordinary least squares regression. In addition, the robustness of the model is tested by using firm size as a control variable. The findings show that market beta as well as the economic and press freedom indices can explain the cross-sectional stock returns of the Malaysian stock market. Controlling for firm size adds marginally to the explanatory power of the extended CAPM that incorporates the economic, political, and press freedom indices.
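The estimation described is a pooled OLS regression of stock returns on beta, the freedom indices, and a size control. A minimal sketch on synthetic panel data (all variable names, coefficients, and the data-generating process are illustrative, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 495 * 11  # e.g. 495 firms over 11 years, stacked as one pooled sample

beta = rng.normal(1.0, 0.3, n)        # market beta
econ_free = rng.uniform(0, 10, n)     # economic freedom index (hypothetical scale)
press_free = rng.uniform(0, 10, n)    # press freedom index (hypothetical scale)
log_size = rng.normal(12.0, 1.5, n)   # firm-size control (log market cap)

# Returns generated so that beta and the freedom indices matter
ret = (0.05 * beta + 0.01 * econ_free + 0.008 * press_free
       - 0.002 * log_size + rng.normal(0, 0.1, n))

# Pooled OLS: intercept, beta, economic freedom, press freedom, size
X = np.column_stack([np.ones(n), beta, econ_free, press_free, log_size])
coef, *_ = np.linalg.lstsq(X, ret, rcond=None)
```

The study's finding corresponds to the beta and freedom-index coefficients being jointly significant, with the size control adding marginal explanatory power.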


2021 ◽  
Vol 11 ◽  
Author(s):  
Lucas M. Ritschl ◽  
Paul Kilbertus ◽  
Florian D. Grill ◽  
Matthias Schwarz ◽  
Jochen Weitz ◽  
...  

Background: Mandibular reconstruction is conventionally performed freehand, CAD/CAM-assisted, or by using partially adjustable resection aids. CAD/CAM-assisted reconstructions are usually done in cooperation with osteosynthesis manufacturers, which entails additional costs and longer lead time. The purpose of this study is to analyze an in-house, open-source software-based solution for virtual planning.

Methods and Materials: All consecutive cases between January 2019 and April 2021 that underwent in-house, software-based (Blender) mandibular reconstruction with a free fibula flap (FFF) were included in this cross-sectional study. The pre- and postoperative Digital Imaging and Communications in Medicine (DICOM) data were converted to standard tessellation language (STL) files. In addition to documenting general information (sex, age, indication for surgery, extent of resection, number of segments, duration of surgery, and ischemia time), conventional measurements and three-dimensional analysis methods (root mean square error [RMSE], mean surface distance [MSD], and Hausdorff distance [HD]) were used.

Results: Twenty consecutive cases were enrolled. Three-dimensional analysis of preoperative and virtually planned neomandibula models was associated with a median RMSE of 1.4 (0.4 to 7.2), MSD of 0.3 (−0.1 to 2.9), and HD of 0.7 (0.1 to 3.1). Three-dimensional comparison of preoperative and postoperative models showed a median RMSE of 2.2 (1.5 to 11.1), MSD of 0.5 (−0.6 to 6.1), and HD of 1.5 (1.1 to 6.5); the differences were significant for RMSE (p < 0.001) and HD (p < 0.001) but not for MSD (p = 0.554). Three-dimensional analysis of virtual and postoperative models had a median RMSE of 2.3 (1.3 to 10.7), MSD of −0.1 (−1.0 to 5.6), and HD of 1.7 (0.1 to 5.9).

Conclusions: Open-source software-based in-house planning is a feasible, inexpensive, and fast method that enables accurate reconstructions. Additionally, it is excellent for teaching purposes.
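The three mesh-comparison metrics used in the study (RMSE, mean surface distance, Hausdorff distance) can be sketched on point clouds sampled from two surfaces. A minimal illustration on toy point sets (real STL models would first be sampled to points; the signed MSD in the paper also uses surface normals, whereas this sketch uses unsigned distances):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(2)
planned = rng.uniform(0, 10, size=(500, 3))           # e.g. virtually planned model
postop = planned + rng.normal(0, 0.3, size=(500, 3))  # e.g. postoperative model

# Nearest-neighbour distance from each postoperative point to the planned surface
d, _ = cKDTree(planned).query(postop)

rmse = float(np.sqrt(np.mean(d ** 2)))  # root mean square error
msd = float(np.mean(d))                 # (unsigned) mean surface distance
hd = max(directed_hausdorff(planned, postop)[0],
         directed_hausdorff(postop, planned)[0])  # symmetric Hausdorff distance
```

By construction msd ≤ rmse ≤ hd: the Hausdorff distance is driven by the single worst-matched point, which is why it is the most sensitive of the three to local planning deviations.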


2010 ◽  
Vol 2 (1) ◽  
pp. 49-74 ◽  
Author(s):  
Ravi Jagannathan ◽  
Ernst Schaumburg ◽  
Guofu Zhou

2019 ◽  
Vol 55 (3) ◽  
pp. 709-750 ◽  
Author(s):  
Andrew Ang ◽  
Jun Liu ◽  
Krista Schwarz

We examine the efficiency of using individual stocks or portfolios as base assets to test asset pricing models using cross-sectional data. The literature has argued that creating portfolios reduces idiosyncratic volatility and allows more precise estimates of factor loadings, and consequently risk premia. We show analytically and empirically that smaller standard errors of portfolio beta estimates do not lead to smaller standard errors of cross-sectional coefficient estimates. Factor risk premia standard errors are determined by the cross-sectional distributions of factor loadings and residual risk. Portfolios destroy information by shrinking the dispersion of betas, leading to larger standard errors.
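The paper's core point can be demonstrated in a short simulation: forming beta-sorted portfolios diversifies away residual risk, but it also shrinks the cross-sectional dispersion of betas, and under homoskedastic residuals the second-pass slope standard error scales inversely with that dispersion, so portfolios end up with the larger standard error. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stocks, n_ports = 1000, 25

beta = rng.normal(1.0, 0.5, n_stocks)   # true stock betas

# Beta-sorted portfolios (the usual construction): group and average
order = np.argsort(beta)
port_beta = beta[order].reshape(n_ports, -1).mean(axis=1)

sigma = 1.0                              # stock-level residual sd (illustrative)
n_per = n_stocks // n_ports
sigma_port = sigma / np.sqrt(n_per)      # diversification shrinks residual risk

# Slope SE in the second-pass regression scales as
# residual sd / (sqrt(number of assets) * cross-sectional sd of betas)
se_stocks = sigma / (np.sqrt(n_stocks) * beta.std())
se_ports = sigma_port / (np.sqrt(n_ports) * port_beta.std())
```

Because the diversification gain exactly offsets the smaller asset count here, the ratio se_ports / se_stocks reduces to the ratio of beta dispersions, which is greater than one: the information destroyed by grouping is precisely the within-portfolio beta spread.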


Entropy ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. 721
Author(s):  
Javier Rojo-Suárez ◽  
Ana Belén Alonso-Conde

Recent literature shows that many testing procedures used to evaluate asset pricing models result in spurious rejection probabilities. Model misspecification, the strong factor structure of test assets, or skewed test statistics largely explain this. In this paper we use the relative entropy of pricing kernels to provide an alternative framework for testing asset pricing models. Building on the fact that the law of one price guarantees the existence of a valid pricing kernel, we study the relationship between the mean-variance efficiency of a model’s factor-mimicking portfolio, as measured by the cross-sectional generalized least squares (GLS) R2 statistic, and the relative entropy of the pricing kernel, as determined by the Kullback–Leibler divergence. In this regard, we suggest an entropy-based decomposition that accurately captures the divergence between the factor-mimicking portfolio and the minimum-variance pricing kernel resulting from the Hansen–Jagannathan bound. Our results show that, although GLS R2 statistics and relative entropy are strongly correlated, the relative entropy approach allows us to explicitly decompose the explanatory power of the model into two components, namely, the relative entropy of the pricing kernel and that corresponding to its correlation with asset returns. This makes the relative entropy a versatile tool for designing robust tests in asset pricing.
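The building block of this framework is the Kullback–Leibler divergence between the risk-neutral probabilities implied by a pricing kernel and the physical probabilities. A minimal sketch on a discrete state space (the state probabilities and kernel values below are purely illustrative, not the paper's construction):

```python
import numpy as np

p = np.array([0.2, 0.3, 0.3, 0.2])  # physical state probabilities
m = np.array([1.4, 1.1, 0.9, 0.7])  # candidate pricing kernel values per state

# Risk-neutral probabilities implied by the kernel: q_i ∝ p_i * m_i
q = p * m / np.sum(p * m)

# Relative entropy D(q || p): zero iff the kernel is risk-neutral (constant)
kl = float(np.sum(q * np.log(q / p)))

# Sanity check: a constant kernel distorts nothing and gives zero divergence
m_flat = np.ones_like(m)
q_flat = p * m_flat / np.sum(p * m_flat)
kl_flat = float(np.sum(q_flat * np.log(q_flat / p)))
```

A more dispersed (more "informative") kernel pushes q further from p and raises the divergence, which is the sense in which relative entropy measures how hard the kernel works to price the assets.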


2019 ◽  
Vol 22 (02) ◽  
pp. 1950012
Author(s):  
Thomas Gramespacher ◽  
Armin Bänziger

In two-pass regression tests of asset-pricing models, cross-sectional correlations in the errors of the first-pass time-series regression lead to correlated measurement errors in the betas used as explanatory variables in the second-pass cross-sectional regression. The slope estimator of the second-pass regression is an estimate of the factor risk premium, and its significance is decisive for the validity of the pricing model. While it is well known that the slope estimator is biased downward in the presence of uncorrelated measurement errors, we show in this paper that the correlations seen in empirical return data substantially suppress this bias. For the case of a single-factor model, we calculate the bias of the OLS slope estimator in the presence of correlated measurement errors with a first-order Taylor approximation in the size of the errors. We show that the bias increases with the size of the errors, but decreases the more the errors are correlated. We illustrate and validate our result using a simulation approach based on empirical data commonly used in asset-pricing tests.
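Both effects described in this abstract can be reproduced in a small simulation: uncorrelated measurement error in the first-pass betas attenuates the second-pass slope toward zero, while in the limiting case of perfectly correlated errors (a common shift across all assets) the bias vanishes, because the shift is absorbed by the intercept. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, reps = 200, 1.0, 500  # assets, true risk premium, simulation draws

slopes, slopes_corr = [], []
for _ in range(reps):
    beta = rng.normal(1.0, 0.5, n)       # true betas
    mean_ret = lam * beta                # exact pricing, to isolate the bias

    # Case 1: uncorrelated measurement errors -> classical attenuation
    beta_hat = beta + rng.normal(0.0, 0.5, n)
    X = np.column_stack([np.ones(n), beta_hat])
    slopes.append(np.linalg.lstsq(X, mean_ret, rcond=None)[0][1])

    # Case 2: perfectly correlated errors (one common shift) -> no bias
    beta_hat_c = beta + rng.normal(0.0, 0.5)
    Xc = np.column_stack([np.ones(n), beta_hat_c])
    slopes_corr.append(np.linalg.lstsq(Xc, mean_ret, rcond=None)[0][1])

avg_slope = float(np.mean(slopes))        # near lam * var(beta)/(var(beta)+var(err)) = 0.5
avg_slope_corr = float(np.mean(slopes_corr))  # near lam = 1.0
```

The paper's intermediate cases, with partially correlated errors, fall between these two extremes, which is exactly the dampening of the attenuation bias that it quantifies.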

