Developing a Scalable Dynamic Norm Menu-Based Intervention to Reduce Meat Consumption

2020 ◽  
Vol 12 (6) ◽  
pp. 2453 ◽  
Author(s):  
Gregg Sparkman ◽  
Elizabeth Weitz ◽  
Thomas N. Robinson ◽  
Neil Malhotra ◽  
Gregory M. Walton

How can we curb the current norm of unsustainable levels of meat consumption? Research on dynamic norms finds that learning that others are starting to eat less meat can inspire people to follow suit. Across four field experiments, we test efforts to scale dynamic-norm messages by incorporating them into restaurant and web-based menus. Studies 1–3 find increases in vegetarian orders when dynamic norms are included in menus (1–2.5 percentage points), although this effect does not always reach statistical significance and varies across populations and analytic models. In Study 4, dynamic norms significantly reduced vegetarian orders. These results raise two critical questions. First, where and with whom should a dynamic norm message reduce meat consumption? Our field data and past theory point to non-high socioeconomic contexts, and contexts where the reference group of people who have changed is meaningful to consumers. Second, how can the treatment be strengthened? Over five online experiments, we find that the visibility of the messages can be greatly improved, and more relatable norm referents can be selected. Although impacts on food orders appear modest, the minimal costs of scaling menu-based dynamic norm messages and the possibility of improving effect sizes make this a promising approach.

2021 ◽  
pp. 1-6
Author(s):  
David M. Garner ◽  
Gláucia S. Barreto ◽  
Vitor E. Valenti ◽  
Franciele M. Vanderlei ◽  
Andrey A. Porto ◽  
...  

Abstract Introduction: Approximate Entropy is a widely applied metric for evaluating chaotic responses and irregularities in RR intervals derived from an electrocardiogram. It has, however, one major problem: the accurate determination of tolerance and embedding dimension. We therefore aimed to overcome this potential hazard by computing numerous alternatives to detect their optimal values in malnourished children. Materials and methods: We evaluated 70 subjects split equally into malnourished children and controls. To estimate autonomic modulation, heart rate was measured in the absence of any physical, sensory or pharmacological stimuli. From the time series obtained, Approximate Entropy was computed for tolerances (0.1→0.5 in intervals of 0.1) and embedding dimensions (1→5 in intervals of 1), and the statistical significance of between-group differences was assessed via Cohen’s ds and Hedges’s gs. Results: The greatest effect size achieved across all combinations was −0.2897 (Cohen’s ds) and −0.2865 (Hedges’s gs), obtained with embedding dimension = 5 and tolerance = 0.3. Conclusions: Approximate Entropy was able to identify a reduction in chaotic response in malnourished children. The best values of embedding dimension and tolerance for identifying malnourished children were, respectively, 5 and 0.3. Nevertheless, Approximate Entropy remains an unreliable mathematical marker for this purpose.
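The grid search over tolerance and embedding dimension described above can be sketched with a minimal NumPy implementation of Approximate Entropy. This is an illustrative sketch of the standard ApEn definition, not the authors' own code; the function name and the convention of expressing tolerance as a fraction of the series' standard deviation are our assumptions.

```python
import numpy as np

def approximate_entropy(x, m, r_frac):
    """Approximate Entropy ApEn(m, r) of a 1-D series.

    m       : embedding dimension (the abstract scans 1..5)
    r_frac  : tolerance as a fraction of the series' SD (the abstract scans 0.1..0.5)
    """
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    n = len(x)

    def phi(m):
        # All overlapping length-m template vectors
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-coordinate) distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A regular signal (e.g., a sine wave) yields a low ApEn, while white noise yields a high one, which is the sense in which a "reduction in chaotic response" is read off the statistic.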


2015 ◽  
Vol 18 (6) ◽  
pp. 539-559 ◽  
Author(s):  
Mattie Toma

Choking under pressure represents a phenomenon in which individuals faced with a high-pressure situation do not perform as well as would be expected were they performing under normal conditions. In this article, I identify determinants that predict a basketball player’s susceptibility to choking under pressure. Identification of these determinants adds to our understanding of players’ psychology at pivotal points in the game. My analysis draws on play-by-play data from ESPN.com that feature over 2 million free-throw attempts in women’s and men’s college and professional basketball games from the 2002-2013 seasons. Using regression analysis, I explore the impact of both gender and level of professionalism on performance in high-pressure situations. I find that in the final 30 seconds of a tight game, Women’s National Basketball Association and National Basketball Association players are 5.81 and 3.11 percentage points, respectively, less likely to make a free throw, while female and male college players are 2.25 and 2.09 percentage points, respectively, less likely to make a free throw, though statistical significance cannot be established among National Collegiate Athletic Association women. The discrepancy in choking between college and professional players is pronounced when comparing male college players who do and do not make it to the professional level; the free-throw performance of those destined to go pro falls 6 percentage points more in high-pressure situations. Finally, I find that women and men do not differ significantly in their propensity to choke.


Author(s):  
María Vicent ◽  
Cándido J. Inglés ◽  
Carolina Gonzálvez ◽  
Ricardo Sanmartín ◽  
José Manuel García-Fernández

The aim of this study was to examine the relationship between Socially Prescribed Perfectionism (SPP) and the Big Five personality traits in a sample of 804 Primary School students between 8 and 11 years old (M=9.57; SD=1.12). The SPP subscale of the Child and Adolescent Perfectionism Scale (CAPS) and the Big Five Questionnaire for Children (BFQ-N), which evaluates the traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, were used. The mean difference analysis showed that students with high levels of SPP scored significantly higher on Conscientiousness, Agreeableness, Extraversion and Openness, with small effect sizes in all cases. In contrast, no significant differences were observed in Neuroticism. Logistic regression analysis revealed that all personality traits, except Neuroticism, whose results did not reach statistical significance, significantly and positively predicted higher scores on SPP, with OR levels ranging from 1.01 (for Conscientiousness and Agreeableness) to 1.03 (for Openness and Extraversion).


2021 ◽  
pp. 1-11
Author(s):  
Valentina Escott-Price ◽  
Karl Michael Schmidt

<b><i>Background:</i></b> Genome-wide association studies (GWAS) were successful in identifying SNPs showing association with disease, but their individual effect sizes are small and require large sample sizes to achieve statistical significance. Methods of post-GWAS analysis, including gene-based, gene-set and polygenic risk scores, combine the SNP effect sizes in an attempt to boost the power of the analyses. To avoid giving undue weight to SNPs in linkage disequilibrium (LD), the LD needs to be taken into account in these analyses. <b><i>Objectives:</i></b> We review methods that attempt to adjust the effect sizes (β<i>-</i>coefficients) of summary statistics, instead of simple LD pruning. <b><i>Methods:</i></b> We subject LD adjustment approaches to a mathematical analysis, recognising Tikhonov regularisation as a framework for comparison. <b><i>Results:</i></b> Observing the similarity of the processes involved with the more straightforward Tikhonov-regularised ordinary least squares estimate for multivariate regression coefficients, we note that current methods based on a Bayesian model for the effect sizes effectively provide an implicit choice of the regularisation parameter, which is convenient, but at the price of reduced transparency and, especially in smaller LD blocks, a risk of incomplete LD correction. <b><i>Conclusions:</i></b> There is no simple answer to the question which method is best, but where interpretability of the LD adjustment is essential, as in research aiming at identifying the genomic aetiology of disorders, our study suggests that a more direct choice of mild regularisation in the correction of effect sizes may be preferable.
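The Tikhonov-regularised adjustment the review takes as its framework of comparison can be sketched in a few lines: marginal GWAS effect sizes β are mapped to LD-corrected effects via (R + λI)⁻¹β, where R is the LD (correlation) matrix and λ the regularisation parameter. This is an illustrative sketch of that framework under our own naming, not any specific published method's code.

```python
import numpy as np

def tikhonov_adjust(beta, R, lam):
    """LD-adjust marginal effect sizes via Tikhonov regularisation.

    beta : marginal (summary-statistic) effect sizes, shape (p,)
    R    : SNP-by-SNP LD correlation matrix, shape (p, p)
    lam  : regularisation parameter; lam -> 0 recovers the plain
           least-squares solution R^{-1} beta, larger lam shrinks harder.
    """
    p = len(beta)
    return np.linalg.solve(R + lam * np.eye(p), np.asarray(beta, float))
```

Making λ an explicit argument is exactly the "more direct choice of mild regularisation" the conclusions favour, as opposed to a Bayesian prior that fixes λ implicitly.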


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Manojkumar Choudhary ◽  
Roma Solomon ◽  
Jitendra Awale ◽  
Rina Dey ◽  
Jagajeet Prasad Singh ◽  
...  

Abstract Background: A social mobilization (SM) initiative, the CORE Group Polio Project (CGPP) India, contributed to India’s success in polio elimination. A partner of the Uttar Pradesh (UP) SM Network, CGPP India continued its SM activities even during the polio-free period through a network of multi-level social mobilizers. This paper assesses the effects of this community-level SM (CLSM) intervention on the extent of community engagement and the performance of polio Supplementary Immunization Activity campaigns (SIAs) during the post-polio-endemic period (i.e., from March 2012 to September 2017). Methods: This study followed a quasi-experimental design. We used secondary, cluster-level data from CGPP India’s Management Information System, covering 52 SIAs held from January 2008 to September 2017 across 56 blocks in 12 districts of UP. We computed various indicators and performed Generalized Estimating Equations-based analyses to assess the statistical significance of differences between the outcomes of intervention and non-intervention areas. We then estimated the effects of the SM intervention using interrupted time-series, difference-in-differences and synthetic control methods. Finally, we estimated the population influenced by the intervention. Results: The performance of polio SIAs changed over time, with the intervention areas having better outcomes than non-intervention areas. The absence of the CLSM intervention during the post-polio-endemic period would have negatively impacted the outcomes of polio SIAs.
The percentage of children vaccinated at polio SIA booths, percentage of ‘X’ houses (i.e., households with unvaccinated children or households with out-of-home/out-of-village children or locked households) converted to ‘P’ (i.e., households with all vaccinated children or households without children eligible for vaccination), and percentage of resistant houses converted to polio acceptors would have gone down by 14.1 (Range: 12.7 to 15.5), 6.3 (Range: 5.2 to 7.3) and 7.4 percentage points, respectively. Community engagement would have reduced by 7.2 (Range: 6.6 to 7.7) percentage points. Conclusions The absence of CLSM intervention would have significantly decreased the level of community engagement and negatively impacted the performance of polio SIAs of the post-polio-endemic period. The study provides evidence of an added value of deploying additional human resource dedicated to social mobilization to achieve desired vaccination outcomes in hard-to-reach or programmatically challenging areas.
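The difference-in-differences logic behind counterfactual estimates like those above can be sketched in its simplest two-group, two-period form: the treated group's pre-to-post change minus the control group's change. This is a generic textbook sketch with our own function and variable names, not the study's actual estimation code (which used cluster-level panel data).

```python
import numpy as np

def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Two-group, two-period difference-in-differences estimate.

    Returns the treated group's mean change minus the control group's
    mean change, i.e., the treatment effect under the parallel-trends
    assumption.
    """
    treated_change = np.mean(post_treat) - np.mean(pre_treat)
    control_change = np.mean(post_ctrl) - np.mean(pre_ctrl)
    return treated_change - control_change
```

For example, if booth coverage in intervention blocks rose from 10 to 20 points while non-intervention blocks rose from 10 to 14, the estimate attributes 6 points to the intervention.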


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Mayowa Owolabi ◽  
Fred S. Sarfo ◽  
Onoja Akpa ◽  
Joshua Akinyemi ◽  
Albert Akpalu ◽  
...  

Background: Age is a non-modifiable risk factor for stroke occurrence due to its influence on vascular risk factor acquisition. In sub-Saharan Africa, the effect sizes of vascular risk factors for stroke occurrence by age are unknown. Objective: To quantify the magnitude and direction of the effect sizes of key modifiable risk factors of stroke across three age groups in West Africa: <50 years (young), 50–65 years (middle-aged) and >65 years (elderly). Methods: The Stroke Investigative Research and Educational Network (SIREN) is a multicenter, case-control study involving 15 sites in Ghana and Nigeria. Cases included adults aged ≥18 years with evidence of an acute stroke; controls were age- and gender-matched stroke-free adults. Detailed evaluations of vascular risk factors, lifestyle, stroke severity and outcomes were performed. We used conditional logistic regression to estimate adjusted odds ratios (aOR) for vascular risk factors of stroke. Results: Among 3,553 stroke cases, 813 (22.9%) were young, 1,441 (40.6%) were middle-aged and 1,299 (36.6%) were elderly. Five modifiable risk factors were consistently associated with stroke occurrence regardless of age: hypertension, dyslipidemia, diabetes mellitus, regular meat consumption and non-consumption of green vegetables. Among these five co-shared risk factors, the effect size, aOR (95% CI), of dyslipidemia, 4.13 (2.64–6.46), was highest in the young age group; hypertension, 28.93 (15.10–55.44), and non-consumption of vegetables, 2.34 (1.70–3.23), were highest in the middle-aged group; while diabetes, 3.50 (2.48–4.95), and meat consumption, 2.40 (1.76–3.26), were highest in the elderly age group. Additionally, in the young age group cigarette smoking and cardiac disease were associated with stroke. Furthermore, physical inactivity and salt intake were associated with stroke in the middle-aged group, while cardiac disease was associated with stroke in the elderly age group. 
Conclusions: Age has a profound influence on the profile, magnitude and direction of effect sizes of vascular risk factors for stroke occurrence among West Africans. Population-level prevention of stroke must target both co-shared dominant risk factors as well as factors that are unique to specific age bands in Africa.


2018 ◽  
Vol 21 (10) ◽  
pp. 1835-1844 ◽  
Author(s):  
Roni A Neff ◽  
Danielle Edwards ◽  
Anne Palmer ◽  
Rebecca Ramsing ◽  
Allison Righter ◽  
...  

Abstract Objective: Excess meat consumption, particularly of red and processed meats, is associated with nutritional and environmental health harms. While only a small portion of the population is vegetarian, surveys suggest many Americans may be reducing their meat consumption. To inform education campaigns, more information is needed about attitudes, perceptions, behaviours and foods eaten in meatless meals. Design: A web-based survey administered in April 2015 assessed meat reduction behaviours, attitudes, what respondents ate in meatless meals and sociodemographic characteristics. Setting: Nationally representative, web-based survey in the USA. Subjects: US adults (n 1112) selected from GfK Knowledgeworks’ 50 000-member online panel. Survey weights were used to assure representativeness. Results: Two-thirds reported reducing meat consumption in at least one category over three years, with reductions of red and processed meat most frequent. The most common reasons for reduction were cost and health; environment and animal welfare lagged. Non-meat reducers commonly agreed with statements suggesting that meat was healthy and ‘belonged’ in the diet. Vegetables were most often consumed ‘always’ in meatless meals, but cheese/dairy was also common. Reported meat reduction was most common among those aged 45–59 years and among those with lower incomes. Conclusions: The public and environmental health benefits of reducing meat consumption create a need for campaigns to raise awareness and contribute to motivation for change. These findings provide rich information to guide intervention development, both for the USA and other high-income countries that consume meat in high quantities.


Author(s):  
H. S. Styn ◽  
S. M. Ellis

The determination of the significance of differences in means and of relationships between variables is important in many empirical studies. Usually only statistical significance is reported, which does not necessarily indicate an important (practically significant) difference or relationship. For studies based on probability samples, effect size indices should be reported in addition to statistical significance tests in order to comment on practical significance. Where complete populations or convenience samples are worked with, the determination of statistical significance is, strictly speaking, no longer relevant, while effect size indices can still be used as a basis for judging significance. In this article attention is paid to the use of effect size indices to establish practical significance. It is also shown how these indices are utilized in a few fields of statistical application and how they receive attention in the statistical literature and in computer packages. The use of effect sizes is illustrated with a few examples from the research literature.
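The two effect size indices recurring throughout these abstracts, Cohen's d and its small-sample-corrected variant Hedges' g, are easy to compute directly. A minimal sketch for two independent samples (our own function names; the d_s/g_s subscript follows the independent-samples convention):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d_s for two independent samples, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def hedges_g(a, b):
    """Hedges' g_s: Cohen's d_s with the small-sample bias correction."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))
```

Because the correction factor is below 1, |g| is always slightly smaller than |d|, with the difference vanishing as the sample sizes grow; this is why reports often quote both, as in the malnutrition abstract above.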


Author(s):  
Adam Thomas Biggs ◽  
Hugh M. Dainer ◽  
Lanny F Littlejohn

Hyperbaric oxygen therapy has been proposed as a method to treat traumatic brain injuries. The combination of pressure and increased oxygen concentration produces a higher content of dissolved oxygen in the bloodstream, which could generate a therapeutic benefit for brain injuries. This dissolved oxygen penetrates deeper into damaged brain tissue than otherwise possible and promotes healing. The result includes improved cognitive functioning and an alleviation of symptoms. However, randomized controlled trials have failed to produce consistent conclusions across multiple studies. There are numerous explanations that might account for the mixed evidence, although one possibility is that prior evidence focuses primarily on statistical significance. The current analyses explored existing evidence by calculating an effect size from each active treatment group and each control group among previous studies. An effect size measure offers several advantages when comparing across studies as it can be used to directly contrast evidence from different scales, and it provides a proximal measure of clinical significance. When exploring the therapeutic benefit through effect sizes, there was a robust and consistent benefit to individuals who underwent hyperbaric oxygen therapy. Placebo effects from the control condition could account for approximately one-third of the observed benefits, but there appeared to be a clinically significant benefit to using hyperbaric oxygen therapy as a treatment intervention for traumatic brain injuries. This evidence highlights the need for design improvements when exploring interventions for traumatic brain injury as well as the importance of focusing on clinical significance in addition to statistical significance.


Author(s):  
Valentin Amrhein ◽  
Fränzi Korner-Nievergelt ◽  
Tobias Roth

The widespread use of 'statistical significance' as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (American Statistical Association, Wasserstein & Lazar 2016). We review why degrading p-values into 'significant' and 'nonsignificant' contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values can tell little about the reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Significance itself (p ≤ 0.05) is hardly replicable: at a realistic statistical power of 40%, given that there is a true effect, only one in six studies will significantly replicate the significant result of another study. Even at a good power of 80%, results from two studies will conflict, in terms of significance, in one third of the cases if there is a true effect. This means that a replication cannot be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgement based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on the replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to publication bias against nonsignificant findings. Data dredging, p-hacking and publication bias should be addressed by removing fixed significance thresholds. 
Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Larger p-values also offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, i.e., as showing that 'there is no effect'. Information on possible true effect sizes that are compatible with the data must be obtained from the observed effect size, e.g., from a sample average, and from a measure of uncertainty, such as a confidence interval. We review how confusion about the interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, such as 'we need more stringent decision rules', 'sample sizes will decrease' or 'we need to get rid of p-values'.
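The "one in six" (power 40%) and "one third conflicting" (power 80%) figures can be checked with a short simulation. This is a sketch under the simplifying assumption of a two-sided z-test of a fixed true effect, with function and variable names of our own choosing:

```python
import numpy as np
from statistics import NormalDist

def replication_rates(power, n_pairs=200_000, alpha=0.05, seed=0):
    """Simulate pairs of independent studies of a true effect at the
    given power; return (P(both significant), P(significance conflicts))."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    # Shift of the test statistic that yields the requested power
    mu = z_crit + NormalDist().inv_cdf(power)
    rng = np.random.default_rng(seed)
    z = rng.normal(mu, 1.0, size=(n_pairs, 2))     # two studies per pair
    sig = np.abs(z) > z_crit
    both = np.mean(sig[:, 0] & sig[:, 1])          # ~power^2
    conflict = np.mean(sig[:, 0] != sig[:, 1])     # ~2*power*(1-power)
    return both, conflict
```

At 40% power, both studies are significant in about 0.4² = 0.16 of pairs (one in six); at 80% power, exactly one is significant in about 2·0.8·0.2 = 0.32 of pairs (one third), matching the figures in the abstract.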

