WHATEVER YOU WANT: INCONSISTENT RESULTS ARE THE RULE, NOT THE EXCEPTION, IN THE STUDY OF PRIMATE BRAIN EVOLUTION

2018 ◽  
Author(s):  
Andreas Wartel ◽  
Patrik Lindenfors ◽  
Johan Lind

Abstract. Primate brains differ in size and architecture. Hypotheses to explain this variation are numerous, and many tests have been carried out. However, after body size has been accounted for, there is little left to explain. The proposed explanatory variables for the residual variation are many and covary, both with each other and with body size. Further, the data sets used in analyses have been small, especially in light of the many proposed predictors. Here we report the complete list of models that results from exhaustively combining six commonly used predictors of brain and neocortex size. This provides an overview of how the output from standard statistical analyses changes when the inclusion of different predictors is altered. By using both the most commonly tested brain data set and a new, larger data set, we show that the choice of included variables fundamentally changes the conclusions as to what drives primate brain evolution. Our analyses thus reveal why studies have had trouble replicating earlier results and have instead come to such different conclusions. Although our results are somewhat disheartening, they highlight the importance of scientific rigor when trying to answer difficult questions. It is our position that there is currently no empirical justification to highlight any particular hypothesis, among the adaptive hypotheses we have examined here, as the main determinant of primate brain evolution.
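
The exhaustive model-combination exercise described above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual pipeline; the predictor names and data are hypothetical placeholders, and ordinary least squares on a single response is an assumption.

```python
# Sketch of an exhaustive model-combination analysis: fit every
# possible subset of six candidate predictors and record which
# predictors come out "significant" in each specification.
# Predictor names and data are hypothetical placeholders.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 90  # roughly the size of commonly used primate data sets
predictors = ["body_mass", "group_size", "diet", "home_range",
              "activity_period", "terrestriality"]
X_all = {name: rng.normal(size=n) for name in predictors}
brain = rng.normal(size=n)  # stand-in for log brain size

results = []
for k in range(1, len(predictors) + 1):
    for combo in combinations(predictors, k):
        X = sm.add_constant(np.column_stack([X_all[p] for p in combo]))
        fit = sm.OLS(brain, X).fit()
        # record which predictors are "significant" in this model
        sig = [p for p, pv in zip(combo, fit.pvalues[1:]) if pv < 0.05]
        results.append((combo, fit.rsquared, sig))

# With 6 predictors there are 2**6 - 1 = 63 possible models; the point
# of the paper is that conclusions flip across these specifications.
print(len(results))
```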

2012 ◽  
Vol 12 (18) ◽  
pp. 8851-8864 ◽  
Author(s):  
F. Hendrick ◽  
E. Mahieu ◽  
G. E. Bodeker ◽  
K. F. Boersma ◽  
M. P. Chipperfield ◽  
...  

Abstract. The trend in stratospheric NO2 column at the NDACC (Network for the Detection of Atmospheric Composition Change) station of Jungfraujoch (46.5° N, 8.0° E) is assessed using ground-based FTIR and zenith-scattered visible sunlight SAOZ measurements over the period 1990 to 2009, as well as a composite satellite nadir data set constructed from ERS-2/GOME, ENVISAT/SCIAMACHY, and METOP-A/GOME-2 observations over the 1996–2009 period. To calculate the trends, a linear least squares regression model including explanatory variables for a linear trend, the mean annual cycle, the quasi-biennial oscillation (QBO), solar activity, and stratospheric aerosol loading is used. For the 1990–2009 period, statistically indistinguishable trends of −3.7 ± 1.1% decade⁻¹ and −3.6 ± 0.9% decade⁻¹ are derived for the SAOZ and FTIR NO2 column time series, respectively. SAOZ, FTIR, and satellite nadir data sets show a similar decrease over the 1996–2009 period, with trends of −2.4 ± 1.1% decade⁻¹, −4.3 ± 1.4% decade⁻¹, and −3.6 ± 2.2% decade⁻¹, respectively. The fact that these declines are opposite in sign to the globally observed +2.5% decade⁻¹ trend in N2O suggests that factors other than N2O are driving the evolution of stratospheric NO2 at northern mid-latitudes. Possible causes of the decrease in stratospheric NO2 columns have been investigated. The most likely cause is a change in the NO2/NO partitioning in favor of NO, due to a possible stratospheric cooling and a decrease in stratospheric chlorine content, the latter being further confirmed by the negative trend in the ClONO2 column derived from FTIR observations at Jungfraujoch. Decreasing ClO concentrations slow the NO + ClO → NO2 + Cl reaction and stratospheric cooling slows the NO + O3 → NO2 + O2 reaction, leaving more NOx in the form of NO. The slightly positive trends in ozone estimated from ground- and satellite-based data sets are also consistent with the decrease of NO2 through the NO2 + O3 → NO3 + O2 reaction. Finally, we cannot rule out the possibility that a strengthening of the Brewer-Dobson circulation, which reduces the time available for N2O photolysis in the stratosphere, could also contribute to the observed decline in stratospheric NO2 above Jungfraujoch.
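
The regression model described in the abstract can be sketched as below. All series here are synthetic placeholders; a real analysis would use monthly mean columns and measured proxy indices, and the two-term harmonic expansion of the annual cycle is an assumption consistent with common practice.

```python
# Minimal sketch of a multiple linear regression trend model:
# NO2 column ~ linear trend + annual cycle (harmonics) + QBO +
# solar activity + stratospheric aerosol loading.
import numpy as np

months = np.arange(240)            # 1990-2009, monthly
t = months / 120.0                 # time in decades
rng = np.random.default_rng(1)
no2 = 3.0 - 0.11 * t + rng.normal(scale=0.1, size=months.size)

qbo = rng.normal(size=months.size)      # placeholder proxy series
solar = rng.normal(size=months.size)
aerosol = rng.normal(size=months.size)

X = np.column_stack([
    np.ones_like(t), t,
    np.cos(2 * np.pi * months / 12), np.sin(2 * np.pi * months / 12),
    qbo, solar, aerosol,
])
coef, *_ = np.linalg.lstsq(X, no2, rcond=None)
# trend expressed in percent per decade relative to the fitted mean
print(f"fitted trend: {100 * coef[1] / coef[0]:.1f} % per decade")
```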


Forecasting ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 138-165
Author(s):  
Jennifer L. Castle ◽  
Jurgen A. Doornik ◽  
David F. Hendry

Economic forecasting is difficult, largely because of the many sources of nonstationarity influencing observational time series. Forecasting competitions aim to improve the practice of economic forecasting by providing very large data sets on which the efficacy of forecasting methods can be evaluated. We consider the general principles that seem to be the foundation for successful forecasting, and show how these are relevant for methods that did well in the M4 competition. We establish some general properties of the M4 data set, which we use to improve the basic benchmark methods, as well as the Card method that we created for our submission to that competition. A data generation process is proposed that captures the salient features of the annual data in M4.
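
As one concrete illustration of the "basic benchmark methods" in the M4 setting, the sketch below implements a seasonal-naive forecast scored with a symmetric MAPE of the kind used in the competition. This is a hedged example only; the Card method itself is not shown, and the toy series is invented.

```python
# Seasonal-naive benchmark and sMAPE score, in the spirit of the M4
# basic benchmarks (a sketch; not the authors' Card method).
import numpy as np

def seasonal_naive(history, horizon, m):
    """Repeat the last full seasonal cycle of length m."""
    last_cycle = history[-m:]
    reps = int(np.ceil(horizon / m))
    return np.tile(last_cycle, reps)[:horizon]

def smape(actual, forecast):
    """Symmetric MAPE (in percent)."""
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(actual - forecast) / denom)

# toy annual-frequency series (m = 1 reduces to the naive forecast)
y = np.array([10.0, 11.2, 12.1, 13.4, 14.2, 15.1])
fc = seasonal_naive(y[:-2], horizon=2, m=1)
print(smape(y[-2:], fc))
```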


2007 ◽  
Vol 362 (1480) ◽  
pp. 649-658 ◽  
Author(s):  
R.I.M. Dunbar ◽  
Susanne Shultz

We present a detailed reanalysis of the comparative brain data for primates, and develop a model using path analysis that seeks to present the coevolution of primate brain (neocortex) and sociality within a broader ecological and life-history framework. We show that body size, basal metabolic rate and life history act as constraints on brain evolution and through this influence the coevolution of neocortex size and group size. However, they do not determine either of these variables, which appear to be locked in a tight coevolutionary system. We show that, within primates, this relationship is specific to the neocortex. Nonetheless, there are important constraints on brain evolution; we use path analysis to show that, in order to evolve a large neocortex, a species must first evolve a large brain to support that neocortex and this in turn requires adjustments in diet (to provide the energy needed) and life history (to allow sufficient time both for brain growth and for ‘software’ programming). We review a wider literature demonstrating a tight coevolutionary relationship between brain size and sociality in a range of mammalian taxa, but emphasize that the social brain hypothesis is not about the relationship between brain/neocortex size and group size per se; rather, it is about social complexity and we adduce evidence to support this. Finally, we consider the wider issue of how mammalian (and primate) brains evolve in order to localize the social effects.
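
Path analysis of the kind invoked above can be sketched as a chain of regressions, one per arrow in the path diagram. The sketch below illustrates the general technique only, not the authors' fitted model; the variable names, effect sizes, and data are hypothetical placeholders.

```python
# Schematic path analysis as chained regressions: each arrow in the
# path diagram (body/diet -> brain -> neocortex -> group size)
# becomes one regression, and path coefficients are the slopes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60
body = rng.normal(size=n)          # stand-in for log body size
diet = rng.normal(size=n)          # stand-in for dietary quality
brain = 0.7 * body + 0.3 * diet + rng.normal(scale=0.3, size=n)
neocortex = 0.9 * brain + rng.normal(scale=0.2, size=n)
group = 0.6 * neocortex + rng.normal(scale=0.4, size=n)

m1 = sm.OLS(brain, sm.add_constant(np.column_stack([body, diet]))).fit()
m2 = sm.OLS(neocortex, sm.add_constant(brain)).fit()
m3 = sm.OLS(group, sm.add_constant(neocortex)).fit()
print(m1.params[1:], m2.params[1], m3.params[1])
```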


Author(s):  
Gidon Eshel

Chapter 11 discussed one of the many methods available for simultaneously analyzing more than one data set. While powerful and useful (especially for unveiling favored state evolution pathways), the extended empirical orthogonal function (EEOF) procedure has some important limitations. Notably, because the state dimensions rapidly expand as state vectors are appended end-to-end, EEOF analysis may not always be numerically tractable. For analyzing two data sets, taking note of their cross-covariance but not explicitly of individual sets’ covariance, the singular value decomposition (SVD) method is the most natural. This chapter discusses SVD analysis of two fields. SVD analysis can be thought of as a generalization of EOF analysis to two data sets that are believed to be related.
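
A minimal sketch of the SVD analysis of two fields follows: form the cross-covariance matrix of two anomaly data sets and take its SVD, so that the singular vectors give paired spatial patterns and the singular values give the covariance each mode explains. The fields here are synthetic placeholders.

```python
# SVD analysis of two fields: SVD of the cross-covariance matrix of
# two anomaly data sets. Rows are time, columns are spatial points.
import numpy as np

rng = np.random.default_rng(3)
nt, nx, ny = 100, 20, 30
F = rng.normal(size=(nt, nx))          # field 1 (e.g., SST anomalies)
G = rng.normal(size=(nt, ny))          # field 2 (e.g., pressure anomalies)

# Remove time means so that C is a covariance, not a raw product.
F -= F.mean(axis=0)
G -= G.mean(axis=0)

C = F.T @ G / (nt - 1)                 # cross-covariance, shape (nx, ny)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# U[:, k] and Vt[k, :] are the paired spatial patterns of mode k;
# s[k]**2 / sum(s**2) is its squared covariance fraction.
scf = s**2 / np.sum(s**2)
print(scf[:3])
```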


Author(s):  
DAVE WIGHTMAN ◽  
TONY BENDELL

In an industrial reliability setting, a number of modeling techniques are available that allow the incorporation of explanatory variables; for example, Proportional Hazards Modeling, Proportional Intensity Modeling, and Additive Hazards Modeling. However, in many applied settings it is unclear what the form of the underlying process is, and thus which of the above modeling structures, if any, is the most appropriate. In this paper we discuss the different modeling formulations with regard to such features as their appropriateness, flexibility, robustness, and ease of implementation, together with the authors' experience gained from applying the models to a wide selection of reliability data sets. In particular, a comparative study of the models when applied to a software reliability data set is provided.
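
For readers unfamiliar with the first of these formulations, the sketch below fits a proportional hazards model to a tiny reliability data set using the lifelines library. The column names and data are hypothetical, and this illustrates only the general technique, not the paper's comparative study.

```python
# Minimal proportional-hazards fit for a reliability data set
# (a sketch; column names and values are hypothetical toy data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_failure": [120.0, 340.0, 90.0, 410.0, 215.0, 500.0],
    "failed":          [1,     1,     1,    0,     1,     0],    # 0 = censored
    "load":            [0.8,   0.6,   0.9,  0.3,   0.4,   0.2],  # explanatory variable
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_failure", event_col="failed")
cph.print_summary()  # hazard ratio for "load" = exp(coef)
```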


2021 ◽  
pp. 364-378
Author(s):  
Sameer Sundrani ◽  
James Lu

PURPOSE The application of Cox proportional hazards (CoxPH) models to survival data and the derivation of hazard ratios (HRs) are well established. Although nonlinear, tree-based machine learning (ML) models have been developed and applied to survival analysis, no methodology exists for computing HRs associated with explanatory variables from such models. We describe a novel way to compute HRs from tree-based ML models using SHapley Additive exPlanation values, a locally accurate and consistent methodology for quantifying explanatory variables' contribution to predictions. METHODS We used three sets of publicly available survival data consisting of patients with colon, breast, or pan-cancer and compared the performance of CoxPH with the state-of-the-art ML model, XGBoost. To compute the HR for explanatory variables from the XGBoost model, the SHapley Additive exPlanation values were exponentiated and the ratio of the means over the two subgroups was calculated. The CI was computed via bootstrapping the training data and generating the ML model 1,000 times. Across the three data sets, we systematically compared HRs for all explanatory variables. Open-source libraries in Python and R were used in the analyses. RESULTS For the colon and breast cancer data sets, the performance of CoxPH and XGBoost was comparable, and we showed good consistency in the computed HRs. In the pan-cancer data set, we showed agreement for most variables but an opposite finding for two of the explanatory variables between the CoxPH and XGBoost results. Subsequent Kaplan-Meier plots supported the finding of the XGBoost model. CONCLUSION Enabling the derivation of HRs from ML models can help to improve the identification of risk factors from complex survival data sets and to enhance the prediction of clinical trial outcomes.
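
The HR-from-SHAP recipe in the METHODS section can be sketched as follows: fit an XGBoost Cox model, exponentiate the per-variable SHAP values (contributions to the log-hazard), and take the ratio of subgroup means. The data below are synthetic placeholders, and a real analysis would bootstrap the whole procedure 1,000 times for the CI, as the abstract describes.

```python
# Sketch of computing a hazard ratio from SHAP values of a tree-based
# survival model (synthetic data; bootstrap CI omitted for brevity).
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([
    rng.integers(0, 2, size=n),    # binary explanatory variable of interest
    rng.normal(size=n),            # a continuous covariate
])
times = rng.exponential(scale=np.exp(-0.7 * X[:, 0]), size=n)
event = rng.integers(0, 2, size=n)
# XGBoost's Cox objective encodes right-censoring via the label's sign.
y = np.where(event == 1, times, -times)

model = xgb.XGBRegressor(objective="survival:cox", n_estimators=100)
model.fit(X, y)

# Per-sample, per-variable contributions to the predicted log-hazard.
shap_values = shap.TreeExplainer(model).shap_values(X)

j = 0                               # variable of interest
mask = X[:, j] == 1
hr = np.exp(shap_values[mask, j]).mean() / np.exp(shap_values[~mask, j]).mean()
print(f"SHAP-derived HR for variable {j}: {hr:.2f}")
```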


Author(s):  
Aastha Gupta ◽  
Himanshu Sharma ◽  
Anas Akhtar

Clustering is the process of arranging comparable data elements into groups. Clustering analysis is one of the most frequently used data mining techniques, and the strategy of the clustering algorithm has a direct influence on the clustering results. This study examines several types of algorithms, such as the k-means clustering algorithm, and compares and contrasts their advantages and disadvantages. The paper also highlights concerns with clustering algorithms, such as time complexity and accuracy, with the aim of obtaining better outcomes in a variety of environments; the outcomes are described in terms of big data sets. The focus of this study is on clustering algorithms with the WEKA data mining tool. Clustering divides a big data set into small groups or clusters; it is an unsupervised approach that can be used to analyze big data sets with many characteristics, and a data-modeling technique that provides a clear picture of the data. Two clustering methods, k-means and hierarchical clustering, are explained in this survey, together with their analysis on different data sets using the WEKA tool. KEYWORDS: data clustering, WEKA, k-means, hierarchical clustering
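
The two algorithms surveyed above can be sketched as below. WEKA itself is a Java tool; this is a language-neutral illustration of the same two methods using scikit-learn on a toy data set, not a reproduction of the survey's WEKA experiments.

```python
# k-means and agglomerative (hierarchical) clustering on a toy
# data set, illustrating the two methods the survey compares.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print(kmeans_labels[:10], hier_labels[:10])
```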


2001 ◽  
Vol 24 (2) ◽  
pp. 295-296
Author(s):  
Dietrich Stout

Constraint has played a major role in brain evolution, but cannot tell the whole story. In primates, adaptive specialization is suggested by the existence of a covarying visual system, and may explain some residual variation in the constraint model. Adaptation may also appear at the microstructural level and in the globally integrated system of brain, body, life history and behavior.


2020 ◽  
Vol 15 (5) ◽  
pp. 1158-1177
Author(s):  
Jenna A. Harder

When analyzing data, researchers may have multiple reasonable options for the many decisions they must make about the data—for example, how to code a variable or which participants to exclude. Therefore, there exists a multiverse of possible data sets. A classic multiverse analysis involves performing a given analysis on every potential data set in this multiverse to examine how each data decision affects the results. However, a limitation of the multiverse analysis is that it addresses only data cleaning and analytic decisions, yet researcher decisions that affect results also happen at the data-collection stage. I propose an adaptation of the multiverse method in which the multiverse of data sets is composed of real data sets from studies varying in data-collection methods of interest. I walk through an example analysis applying the approach to 19 studies on shooting decisions to demonstrate the usefulness of this approach and conclude with a further discussion of the limitations and applications of this method.
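
A classic multiverse analysis of the kind described above can be sketched as follows: enumerate every combination of data-processing decisions, construct the corresponding data set, run the same test on each, and inspect how the result varies. The decision options, variable names, and data here are hypothetical placeholders.

```python
# Sketch of a classic multiverse analysis over data-cleaning decisions
# (exclusion cutoff x dependent-variable coding), with a t-test run
# on every resulting data set.
from itertools import product

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 200
rt = rng.lognormal(mean=6.0, sigma=0.4, size=n)      # reaction times (ms)
group = rng.integers(0, 2, size=n)                   # condition

exclusion_cutoffs = [None, 2000.0, 1500.0]           # drop slow responses?
transforms = {"raw": lambda x: x, "log": np.log}     # how to code the DV?

multiverse = []
for cutoff, (tname, tf) in product(exclusion_cutoffs, transforms.items()):
    keep = np.ones(n, dtype=bool) if cutoff is None else rt < cutoff
    x, g = tf(rt[keep]), group[keep]
    t, p = stats.ttest_ind(x[g == 0], x[g == 1])
    multiverse.append((cutoff, tname, p))

for cutoff, tname, p in multiverse:
    print(f"cutoff={cutoff}, dv={tname}: p = {p:.3f}")
```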

