Notes: Determining Sample Size in Throughfall Studies

1979 ◽  
Vol 25 (4) ◽  
pp. 582-584 ◽  
Author(s):  
D. L. Peterson ◽  
G. L. Rolfe

Abstract Variation in throughfall collection data is a major concern in nutrient cycling studies. In order to determine the magnitude of this variability in throughfall volume data collected in an oak-hickory stand in southern Illinois, regression equations were developed which indicate the sample size necessary for a specific level of variability. In addition to having predictive value, these equations indicate differences in variability on a seasonal basis. Forest Sci. 25:582-584.
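The sample-size logic this abstract describes can be sketched with the classic textbook formula n = (z · CV / E)², which gives the number of collectors needed to estimate a mean to within a relative error E given a coefficient of variation CV. This is a generic sketch, not the paper's fitted regression equations:

```python
# Minimal sketch: samples needed to estimate a mean to within a relative
# error E (%), given a coefficient of variation CV (%). Generic formula,
# not the regression equations fitted in the study.
import math

def samples_needed(cv_pct, rel_error_pct, z=1.96):
    """n = (z * CV / E)^2, rounded up to a whole number of collectors."""
    return math.ceil((z * cv_pct / rel_error_pct) ** 2)

# e.g. throughfall volume with CV = 30%, estimated to within 10% of the mean
n = samples_needed(30, 10)   # -> 35 collectors
```

Halving the allowable error roughly quadruples the required sample size, which is why seasonal differences in variability matter for study design.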

2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 218-219
Author(s):  
Andres Fernando T Russi ◽  
Mike D Tokach ◽  
Jason C Woodworth ◽  
Joel M DeRouchey ◽  
Robert D Goodband ◽  
...  

Abstract The swine industry has been constantly evolving to select animals with improved performance traits and to minimize variation in body weight (BW) in order to meet packer specifications. Understanding variation therefore presents an opportunity for producers to find strategies to reduce, manage, or accommodate variation among pigs in a barn. A systematic review and meta-analysis was conducted by collecting data from multiple studies and available data sets in order to develop prediction equations for the coefficient of variation (CV) and standard deviation (SD) as functions of BW. Information on BW variation from 16 papers was recorded, providing approximately 204 data points. Together, these data included 117,268 individually weighed pigs, with sample sizes ranging from 104 to 4,108 pigs. A random-effects model with study as a random effect was developed. Observations were weighted using sample size as an estimate of precision, so that larger data sets contributed more to the model. Regression equations were developed using the nlme package of R to determine the relationship between BW and its variation. Polynomial regression analysis was conducted separately for each variation measurement. When CV was reported in a data set, SD was calculated, and vice versa. The resulting prediction equations were: CV (%) = 20.04 - 0.135 × BW + 0.00043 × BW², R² = 0.79; SD = 0.41 + 0.150 × BW - 0.00041 × BW², R² = 0.95. These equations suggest a decreasing quadratic relationship between the mean CV of a population and pig BW, whereby the rate of decrease becomes smaller as mean BW increases from birth to market. Conversely, the rate of increase of the SD of a population of pigs becomes smaller as mean BW increases from birth to market.
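The two reported prediction equations are easy to evaluate directly. A minimal sketch, assuming BW is in kg (the abstract does not state the unit):

```python
# The prediction equations reported in the abstract, as plain functions.
# Assumption: BW is in kg; the abstract does not state the unit.

def cv_pct(bw):
    """Predicted coefficient of variation (%) for a population at mean BW."""
    return 20.04 - 0.135 * bw + 0.00043 * bw ** 2

def sd_of_bw(bw):
    """Predicted standard deviation of BW for a population at mean BW."""
    return 0.41 + 0.150 * bw - 0.00041 * bw ** 2

# e.g. a pen averaging 100 kg:
cv = cv_pct(100)     # ~10.8 %
sd = sd_of_bw(100)   # ~11.3
```

As the abstract notes, CV falls with BW while SD rises, with both curves flattening toward market weight.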


2019 ◽  
Author(s):  
Rob Cribbie ◽  
Nataly Beribisky ◽  
Udi Alter

Many bodies recommend that a sample planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required in order to detect a minimally meaningful effect size at a specific level of power and Type I error rate. However, several drawbacks render the procedure "a mess." Specifically, identifying the minimally meaningful effect size is often difficult but unavoidable if the procedure is to be conducted properly; the procedure is not precision oriented; and it does not guide the researcher to collect as many participants as feasibly possible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, this study reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. It was found that researchers rarely use the minimally meaningful effect size as a rationale for the effect size chosen in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning sample size, such as collecting the maximum sample size feasible.


2020 ◽  
Author(s):  
Kiyoshi Kubota ◽  
Masao Iwagami ◽  
Takuhiro Yamaguchi

Abstract Background: We propose and evaluate approximation formulae for the 95% confidence intervals (CIs) of sensitivity and specificity, and a formula to estimate sample size, in a validation study with stratified sampling, where positive samples satisfying the outcome definition and negative samples that do not are selected with different extraction fractions. Methods: We used the delta method to derive the approximation formulae for estimating the sensitivity and specificity and their CIs. From those formulae, we derived a formula to estimate the size of negative samples required to achieve the intended precision, and a formula to estimate the precision for a negative sample size arbitrarily selected by the investigator. We conducted simulation studies in a population where 4% were outcome-definition positive, the positive predictive value (PPV) = 0.8, and the negative predictive value (NPV) = 0.96, 0.98, or 0.99. The size of negative samples, n0, was either selected to make the 95% CI fall within ±0.1, 0.15, or 0.2, or set arbitrarily at 150, 300, or 600. We assumed a binomial distribution for the positive and negative samples. The coverage of the 95% CIs of the sensitivity and specificity was calculated as the proportion of CIs including the population sensitivity and specificity, respectively. For selected studies, the coverage was also estimated by the bootstrap method. The sample size formula was evaluated by examining whether the observed precision was within the pre-specified value. Results: For sensitivity, the coverage of the approximated 95% CIs was larger than 0.95 in most studies, but only in 9 of the 18 selected studies when estimated by the bootstrap method. For specificity, the coverage of the approximated 95% CIs was approximately 0.93 in most studies, but more than 0.95 in all 18 studies estimated by the bootstrap method. The calculated size of negative samples yielded precisions within the pre-specified values in most of the studies. Conclusion: The approximation formulae for the 95% CIs of the sensitivity and specificity in stratified validation studies are presented. These formulae will help in conducting and analysing validation studies with stratified sampling.
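The coverage evaluation the abstract describes can be illustrated with a simplified Monte-Carlo check: draw binomial samples, build a 95% CI, and count how often the CI contains the true value. The sketch below uses a plain Wald interval for a single proportion, not the authors' delta-method formulae for the stratified design, and the parameters mirror the abstract's NPV = 0.96, n0 = 300 scenario:

```python
# Simplified coverage check of a Wald 95% CI for a proportion, the kind of
# building block the delta-method CIs rest on. Not the authors' formulae.
import math, random

def wald_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def coverage(p_true=0.96, n=300, reps=2000, seed=1):
    """Fraction of simulated CIs that contain the true proportion."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        k = sum(rng.random() < p_true for _ in range(n))
        lo, hi = wald_ci(k, n)
        hits += lo <= p_true <= hi
    return hits / reps
```

For proportions this close to 1, the Wald interval tends to cover somewhat below the nominal 95%, which parallels the under-coverage the authors report for specificity.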


2003 ◽  
Vol 86 (6) ◽  
pp. 1187-1192 ◽  
Author(s):  
Thomas B Whitaker ◽  
John L Richard ◽  
Francis G Giesbrecht ◽  
Andrew B Slate ◽  
Nelson Ruiz

Abstract To determine whether deoxynivalenol (DON) is concentrated in small corn screenings, fourteen to twenty-three 1.1 kg test samples were taken from each of 10 barges of shelled corn. Each of the 181 test samples was divided into 2 components (fines and clean) using a 5 mm screen: the clean component was retained on the screen, and the fines component passed through it. The DON concentration in the fines component was about 3 times that in the clean component; across the 181 samples, DON in the fines and clean components averaged 689.0 and 206.1 ng/g, respectively. Regression equations were developed to predict the DON concentration in the barge from measurements of DON in the fines component. The ratio of DON in the lot to DON in the fines component was 0.359. The coefficient of variation (CV) associated with predicting the DON concentration in a lot at 359 ng/g using a 1.1 kg test sample was 47.0%. Increasing the sample size to 4.4 kg reduced the CV to 23%.
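The reported CV reduction is consistent with sampling variability scaling as the inverse square root of sample mass (which holds if kernels are drawn roughly independently, an assumption this sketch makes explicit):

```python
# Sketch: CV of a sampling estimate scales as 1/sqrt(sample mass) if the
# sampled material behaves like independent draws (an assumption here).
import math

def cv_at_mass(cv0, m0, m):
    """CV at sample mass m, given a known CV cv0 at reference mass m0."""
    return cv0 * math.sqrt(m0 / m)

cv_44 = cv_at_mass(47.0, 1.1, 4.4)   # 23.5%, close to the reported 23%
```

Quadrupling the test sample mass halves the CV under this model, matching the abstract's 47% to 23% figures almost exactly.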


2012 ◽  
Vol 166-169 ◽  
pp. 1958-1962
Author(s):  
Ping Jie Cheng

Many previous studies showed that it is difficult to accurately assess the amount of steel corrosion in cracked concrete with the artificial neural network [3] method when the study sample size is small. This paper introduces several different algorithms for assessing the amount of steel corrosion in concrete. The experimental results show that, compared with the other algorithms, the predictions of the support vector machine algorithm are the closest to the measured values.


2017 ◽  
Vol 56 (3) ◽  
Author(s):  
R. Bhatia ◽  
I. Serrano ◽  
H. Wennington ◽  
C. Graham ◽  
H. Cubie ◽  
...  

ABSTRACT The use of high-risk human papillomavirus (HPV) testing for surveillance and clinical applications is increasing globally, and it is important that tests are evaluated to ensure they are fit for this purpose. In this study, the performance of a new HPV genotyping test, the Papilloplex high-risk HPV (HR-HPV) test, was compared to two well-established genotyping tests, and preliminary clinical performance was ascertained for the detection of CIN2+ in a disease-enriched retrospective cohort. A panel of 500 cervical liquid-based cytology samples with known clinical outcomes was tested by the Papilloplex HR-HPV test. Analytical concordance was assessed against two assays, the Linear Array (LA) HPV genotyping test and the Optiplex HPV genotyping test. Initial clinical performance for the detection of CIN2+ was assessed and compared to that of two clinically validated HPV tests, the RealTime High-Risk HPV test (RealTime) and the Hybrid Capture 2 HPV test (HC2). High agreement for HR-HPV was observed between the Papilloplex and the LA and Optiplex HPV tests (97% and 95%, respectively), with kappa values for HPV16 and HPV18 of 0.90 and 0.81 compared to the LA test and 0.70 and 0.82 compared to the Optiplex test. The sensitivity, specificity, positive predictive value, and negative predictive value of the Papilloplex test for the detection of CIN2+ were 92, 54, 33, and 96%, respectively, very similar to the values observed with RealTime and HC2. The Papilloplex HR-HPV test demonstrated an analytical performance similar to those of the two HPV genotyping tests at both the HR-HPV and type-specific levels. The preliminary clinical performance data look encouraging, although further longitudinal studies within screening populations are required to confirm these findings.
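The kappa values reported above measure chance-corrected agreement between two tests' calls. A minimal sketch of Cohen's kappa for a 2×2 concordance table, with hypothetical counts (not the study's data):

```python
# Cohen's kappa for agreement between two tests on the same samples.
# The counts below are hypothetical, chosen only to illustrate the formula.

def cohens_kappa(a, b, c, d):
    """2x2 table: a = both positive, b = test1+/test2-,
    c = test1-/test2+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

k = cohens_kappa(80, 5, 5, 410)   # high agreement on 500 paired calls
```

Kappa discounts the agreement expected by chance alone, which is why it is preferred over raw percent agreement when one category (here, HPV-negative) dominates.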


Author(s):  
Pamela Reinagel

Abstract After an experiment has been completed and analyzed, a trend may be observed that is "not quite significant". Sometimes in this situation, researchers incrementally grow their sample size N in an effort to achieve statistical significance. This is especially tempting in situations when samples are very costly or time-consuming to collect, such that collecting an entirely new sample larger than N (the statistically sanctioned alternative) would be prohibitive. Such post-hoc sampling or "N-hacking" is condemned, however, because it leads to an excess of false positive results. Here Monte-Carlo simulations are used to show why and how incremental sampling causes false positives, but also to challenge the claim that it necessarily produces alarmingly high false positive rates. In a parameter regime that would be representative of practice in many research fields, simulations show that the inflation of the false positive rate is modest and easily bounded. But the effect on false positive rate is only half the story. What many researchers really want to know is the effect N-hacking would have on the likelihood that a positive result is a real effect that will be replicable: the positive predictive value (PPV). This question has not been considered in the reproducibility literature. The answer depends on the effect size and the prior probability of an effect. Although in practice these values are not known, simulations show that for a wide range of values, the PPV of results obtained by N-hacking is in fact higher than that of non-incremented experiments of the same sample size and statistical power. This is because the increase in false positives is more than offset by the increase in true positives. Therefore in many situations, adding a few samples to shore up a nearly-significant result is in fact statistically beneficial.
In conclusion, if samples are added after an initial hypothesis test this should be disclosed, and if a p value is reported it should be corrected. But, contrary to widespread belief, collecting additional samples to resolve a borderline p value is not invalid, and can confer previously unappreciated advantages for efficiency and positive predictive value.
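The kind of Monte-Carlo simulation the abstract describes can be sketched in a few lines. The sketch below uses a one-sample z-test (known variance) to stay dependency-free, and the parameters (initial n, increment size, "promising" p-value window, cap on N) are illustrative assumptions, not the paper's settings:

```python
# Monte-Carlo sketch of "N-hacking" under the null hypothesis: if the first
# test lands in a "promising" window (0.05 < p < 0.10), add a few samples
# and retest, up to a cap. Parameters are illustrative, not the paper's.
import math, random
from statistics import NormalDist

def p_value(xs):
    """Two-sided one-sample z-test of mean 0, known sigma = 1."""
    z = sum(xs) / math.sqrt(len(xs))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(n0=20, step=5, max_n=40, reps=5000, seed=7):
    rng = random.Random(seed)
    fp = 0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n0)]
        p = p_value(xs)
        while 0.05 < p < 0.10 and len(xs) < max_n:   # "not quite significant"
            xs += [rng.gauss(0, 1) for _ in range(step)]
            p = p_value(xs)
        fp += p < 0.05
    return fp / reps
```

Running this yields a false positive rate somewhat above the nominal 0.05, illustrating the abstract's point: incremental sampling does inflate the rate, but in this regime the inflation is modest rather than alarming.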


2019 ◽  
Author(s):  
Dirk Ostwald ◽  
Sebastian Schneider ◽  
Rasmus Bruckner ◽  
Lilla Horvath

Abstract Recent discussions on the reproducibility of task-related functional magnetic resonance imaging (fMRI) studies have emphasized the importance of power and sample size calculations in fMRI study planning. In general, statistical power and sample size calculations are dependent on the statistical inference framework that is used to test hypotheses. Bibliometric analyses suggest that random field theory (RFT)-based voxel- and cluster-level fMRI inference are the most commonly used approaches for the statistical evaluation of task-related fMRI data. However, general power and sample size calculations for these inference approaches remain elusive. Based on the mathematical theory of RFT-based inference, we here develop power and positive predictive value (PPV) functions for voxel- and cluster-level inference in both uncorrected single test and corrected multiple testing scenarios. Moreover, we apply the theoretical results to evaluate the sample size necessary to achieve desired power and PPV levels based on an fMRI pilot study.


Author(s):  
G. Ornguga, Ianngi ◽  
Nelson Jonah ◽  
V. Iornem, Tersoo ◽  
Ogojah, Teryila

This research, entitled "Gender Relations between Supervisor and Subordinate" (a study of First Bank of Nigeria Plc, Makurdi branch), deals with the important functions a supervisor performs in the bank and with the quality of gender relations in the organization. A sample size of 110 was used. Questionnaires and oral interviews were used for data collection. Data were presented in tables, and a descriptive approach was adopted in the analysis using the chi-square test. The findings revealed that the bank should ensure access to workplace reporting mechanisms. From the hypotheses, we concluded that challenges exist between supervisors and subordinates in the First Bank Makurdi branch, and that a relationship exists between supervisors and subordinates there: female subordinates demonstrate more negative attitudes toward evaluation fairness, and male subordinates with a female supervisor place more trust in the workplace than males with a male supervisor or females with a female supervisor.


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Archibong E. Ironbar ◽  
Pius U. Angioha ◽  
Ijim A. Uno ◽  
Julius A. Ada ◽  
Francis E. Ibioro

The study examines drivers of the adoption of e-government services in the delivery of healthcare services in federal health institutions, focusing on perceived usefulness and perceived ease of use and their influence on adoption. A survey research design was adopted to collect a sample of 400 administrative staff of the University of Calabar Teaching Hospital, Calabar, using purposive sampling; the sample size was determined with the Taro Yamane technique. The questionnaire was the instrument of data collection. Data collected were analyzed using simple regression analysis at the 0.05 significance level. Results revealed that perceived usefulness significantly influences the adoption of e-government services in the delivery of healthcare services in federal health institutions (R = 0.176). Results also revealed that perceived ease of use significantly influences adoption (R² = 0.018). Based on these results, the study recommends, among other things, that government efforts to improve basic infrastructure be strengthened in both coverage and quality.

