Response to “Negative Results Call for More Delicate Experimental Design in Cortical Excitability Change of taVNS Intervention”

2021 ◽  
Vol 24 (8) ◽  
pp. 1499-1500
Author(s):  
Ann Mertens ◽  
Kristl Vonck

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Wolfgang Strube ◽  
Tilmann Bunse ◽  
Berend Malchow ◽  
Alkomiet Hasan

Interindividual response variability to various motor-cortex stimulation protocols has recently been reported. Comparative data on stimulation protocols with different modes of action are lacking. We aimed to compare the efficacy and response variability of two LTP-inducing stimulation protocols in the human motor cortex: anodal transcranial direct current stimulation (a-tDCS) and paired-associative stimulation (PAS25). In two experiments, 30 subjects received 1 mA a-tDCS and PAS25. Data analysis focused on motor-cortex excitability change and on response, defined as an increase in motor evoked potentials (MEPs) applying different cut-offs. Furthermore, the predictive pattern of baseline characteristics was explored. Both protocols induced a significant increase in motor-cortical excitability. In the PAS25 experiments the likelihood of developing an MEP response was higher than with a-tDCS, whereas for intracortical facilitation (ICF) the likelihood of a response was higher in the a-tDCS experiments. Baseline ICF (12 ms) correlated positively with an increase in MEPs only following a-tDCS, and responders had significantly higher ICF baseline values. Contrary to recent studies, we showed significant group-level efficacy following both stimulation protocols, confirming older studies. However, we also observed a remarkable number of nonresponders. Our findings highlight the need to define sufficient physiological read-outs for a given plasticity protocol and to develop predictive markers for targeted stimulation.


2011 ◽  
Vol 97 (3) ◽  
pp. 273-277 ◽  
Author(s):  
Mark P. Richardson ◽  
Fernando H. Lopes da Silva

1963 ◽  
Vol 13 (2) ◽  
pp. 619-621 ◽  
Author(s):  
James A. Dyal ◽  
Judith Abright

The experimental design was a modification of Hull's reasoning paradigm. Ss were given one forced trial per day in each of four runway segments. The trials to food, routes A-C and B-C, always preceded those to water, routes A-B and A-D. After 20 days of training under 22-hr. food and water deprivation, Ss were placed under a “pure” hunger drive and given free choices between the A-B and A-D segments. The results required acceptance of the null hypothesis and thus failed to support the experimental hypothesis that rats are capable of the problem-solving assembly of separately acquired behavior segments.


Author(s):  
Brianna N Gaskill ◽  
Joseph P Garner

The practical application of statistical power is becoming an increasingly important part of experimental design, data analysis, and reporting. Power is essential to estimating sample size as part of planning studies and obtaining ethical approval for them. Furthermore, power is essential for publishing and interpreting negative results. In this manuscript, we review what power is, how it can be calculated, and what to report if a null result is found. Power can be thought of as reflecting the signal-to-noise ratio of an experiment. The conventional wisdom that statistical power is driven by sample size (which increases the signal in the data), while true, is a misleading oversimplification. Relatively little discussion covers the use of experimental designs that control and reduce noise. Even small improvements in experimental design can achieve high power at much lower sample sizes than (for instance) a simple t test. Failure to report the experimental design or the proposed statistical test on animal care and use protocols creates a dilemma for IACUCs, because it is unknown whether sample size has been correctly calculated. Traditional power calculations, which are primarily provided for animal-number justifications, are available only for simple yet low-powered experimental designs, such as paired t tests. Thus, in most controlled experimental studies, the only analyses for which power can be calculated are those that inherently have low statistical power; these analyses should not be used because they require more animals than necessary. We provide suggestions for more powerful experimental designs (such as randomized block and factorial designs), and we describe methods to easily calculate sample size for these designs that are suitable for IACUC number justifications. Finally, we provide recommendations for reporting negative results so that readers and reviewers can determine whether an experiment had sufficient power.
The use of more sophisticated designs in animal experiments will improve power and reproducibility while reducing animal use.
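As an illustrative sketch (not taken from the abstract above), the signal-to-noise framing can be made concrete with the standard normal-approximation formula for per-group sample size in a two-group comparison. The function below is a hypothetical helper, not the authors' method; it shows how halving the noise (doubling the standardized effect size d) roughly quarters the required number of animals per group:

```python
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample comparison,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # quantile for desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Reducing noise doubles d and cuts the required n roughly fourfold:
print(round(n_per_group(0.5)))  # d = 0.5 -> 63 per group
print(round(n_per_group(1.0)))  # d = 1.0 -> 16 per group
```

This is why design choices that shrink noise (blocking, within-subject comparisons) can matter more than simply adding animals to a noisy design.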


2020 ◽  
Author(s):  
Marco Esposito ◽  
Clarissa Ferrari ◽  
Claudia Fracassi ◽  
Carlo Miniussi ◽  
Debora Brignani

Abstract Over the past two decades, the postulated modulatory effects of transcranial direct current stimulation (tDCS) on the human brain have been extensively investigated, with attractive real-world applications. However, concerns about the reliability of tDCS effects have recently been raised, principally due to reduced replicability and to the great interindividual variability in response to tDCS. These inconsistencies are likely due to the interplay between the level of induced cortical excitability and unaccounted-for individual state-dependent factors. On these grounds, we aimed to verify whether the behavioural effects induced by a common prefrontal tDCS montage were dependent on the participants’ arousal levels. Pupillary dynamics were recorded during an auditory oddball task while applying either sham or real tDCS. The tDCS effects on reaction times and pupil dilation were evaluated as a function of subjective and physiological arousal predictors. Both predictors significantly explained performance during real tDCS: reaction times improved only at moderate arousal levels, and pupil dilation was likewise affected according to the ongoing level of arousal. These findings highlight the critical role of arousal in shaping the neuromodulatory outcome and thus encourage a more careful interpretation of null or negative results.


2018 ◽  
Vol 41 ◽  
Author(s):  
Wei Ji Ma

Abstract Given the many types of suboptimality in perception, I ask how one should test for multiple forms of suboptimality at the same time – or, more generally, how one should compare process models that can differ in any or all of the multiple components. In analogy to factorial experimental design, I advocate for factorial model comparison.


2019 ◽  
Vol 42 ◽  
Author(s):  
J. Alfredo Blakeley-Ruiz ◽  
Carlee S. McClintock ◽  
Ralph Lydic ◽  
Helen A. Baghdoyan ◽  
James J. Choo ◽  
...  

Abstract The Hooks et al. review of microbiota-gut-brain (MGB) literature provides a constructive criticism of the general approaches encompassing MGB research. This commentary extends their review by: (a) highlighting capabilities of advanced systems-biology “-omics” techniques for microbiome research and (b) recommending that combining these high-resolution techniques with intervention-based experimental design may be the path forward for future MGB research.


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate the systematic and random errors associated with trigonometric parallaxes. First, systematic errors that arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt’s point, that because the causes of these systematic differences between observatories are not known the computed corrections cannot be applied appropriately, is emphasized. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that a prior experimental design is required. Past experience has shown that accidental overlap of observing programs will not suffice to determine observatory corrections that are meaningful.


2011 ◽  
Vol 20 (4) ◽  
pp. 109-113
Author(s):  
Karen Copple ◽  
Rajinder Koul ◽  
Devender Banda ◽  
Ellen Frye

Abstract One of the instructional techniques reported in the literature to teach communication skills to persons with autism is video modeling (VM). VM is a form of observational learning that involves watching and imitating the desired target behavior(s) exhibited by the person on the videotape. VM has been used to teach a variety of social and communicative behaviors to persons with developmental disabilities such as autism. In this paper, we describe the VM technique and summarize the results of two single-subject experimental design studies that investigated the acquisition of spontaneous requesting skills using a speech-generating device (SGD) by persons with autism following a VM intervention. The results of these two studies indicate that a VM treatment package that includes an SGD as one of its components can be effective in facilitating communication in individuals with autism who have little or no functional speech.


1999 ◽  
Vol 4 (4) ◽  
pp. 4-4

Abstract Symptom validity testing, also known as forced-choice testing, is a way to assess the validity of sensory and memory deficits, including tactile anesthesias, paresthesias, blindness, color blindness, tunnel vision, blurry vision, and deafness, the common feature of which is a claimed inability to perceive or remember a sensory signal. Symptom validity testing comprises two elements: a specific ability is assessed by presenting a large number of items in a multiple-choice format, and the examinee's performance is then compared with the statistical likelihood of success based on chance alone. Scoring below a norm can be explained in many different ways (e.g., fatigue, evaluation anxiety, limited intelligence, and so on), but scoring below the probabilities of chance alone most likely indicates deliberate deception. The positive predictive value of the symptom validity technique is likely quite high because there is no alternative explanation to deliberate distortion when performance is below the probability of chance. The sensitivity of this technique is not likely to be good because, as with a thermometer, positive findings indicate that a problem is present, but negative results do not rule out a problem. Although a compelling conclusion is that the examinee who scores below chance probabilities is deliberately motivated to perform poorly, malingering must be concluded from the total clinical context.
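As a minimal sketch (not from the article itself), the "chance alone" comparison is a binomial tail probability: on a two-alternative forced-choice test, the probability of scoring at or below a given number correct by pure guessing can be computed directly. The function name and the 100-item test are illustrative assumptions:

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): the likelihood of getting
    k or fewer items correct by chance alone on n two-choice items."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# On 100 two-choice items, guessing centers on 50 correct;
# scoring 35 or fewer correct is very unlikely under chance alone.
print(binom_cdf(35, 100))  # roughly 0.002
```

Scores falling below such a tail threshold are what the technique treats as evidence of deliberate underperformance, since random guessing alone would almost never produce them.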

