Small-sized reverberation chamber for the measurement of sound absorption

2017 ◽  
Vol 67 (328) ◽  
pp. 139 ◽  
Author(s):  
R. Del Rey ◽  
J. Alba ◽  
L. Bertó ◽  
A. Gregori

This paper presents the design, construction, calibration and automation of a reverberation chamber for small samples. A balance has been sought between reducing sample size, to reduce the manufacturing costs of materials, and finding an appropriate chamber volume, to obtain reliable values at mid and high frequencies. The small-sized reverberation chamber that was built has a volume of 1.12 m³ and allows for the testing of samples of 0.3 m². By using diffusers to improve the degree of diffusion, and by automating measurements, we were able to improve the reliability of the results, thus reducing test errors. Several studies comparing measurements from the small-sized reverberation chamber and the standardised reverberation chamber are shown, and good agreement can be seen between them within the range of valid frequencies. The result is a small laboratory facility for comparing samples and making decisions before manufacturing at larger sizes.
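The absorption values such a chamber yields can be illustrated with the standard Sabine-type reverberation-room calculation (ISO 354 style). The chamber volume and sample area below follow the abstract; the reverberation times are made-up illustrative values, not measurements from the paper.

```python
# Sabine-type absorption coefficient from reverberation times (ISO 354 style).
# V and S follow the abstract (1.12 m3 chamber, 0.3 m2 sample); T_empty and
# T_sample are made-up illustrative values, not measurements from the paper.

def absorption_coefficient(V, S, T_empty, T_sample, c=343.0):
    """V: chamber volume (m^3); S: sample area (m^2);
    T_empty, T_sample: reverberation times (s) without/with the sample;
    c: speed of sound (m/s)."""
    return 55.3 * V / (c * S) * (1.0 / T_sample - 1.0 / T_empty)

alpha = absorption_coefficient(V=1.12, S=0.3, T_empty=2.5, T_sample=1.0)
print(round(alpha, 3))  # roughly 0.36 for these illustrative times
```

The shorter the reverberation time with the sample in place, the more sound the sample absorbs; a small chamber makes the valid frequency range narrower, which is the trade-off the paper studies.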

Author(s):  
Les Beach

To test the efficacy of the Personal Orientation Inventory in assessing growth in self-actualization in relation to encounter groups, and to provide a more powerful measure of such changes, pre- and posttest data from 3 highly comparable encounter groups (N = 43) were combined for analysis. Results indicated that the Personal Orientation Inventory is a sensitive instrument for assessing personal growth in encounter groups and that a larger total sample size provides more significant results than those reported for small samples (e.g., fewer than 15 participants).


2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Student's exacting theory of errors, both random and real, marked a significant advance over the ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937), aka "Student" – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large-scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods – that is, Guinnessometrics – not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small-sample, economic approach to real error minimization, particularly in field and laboratory experiments he conducted on barley and malt from 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random ones and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science: should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)


PEDIATRICS ◽  
1989 ◽  
Vol 83 (3) ◽  
pp. A72-A72
Author(s):  
Student

The believer in the law of small numbers practices science as follows: 1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power. 2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance. 3. In evaluating replications, his or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals. 4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.
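The overestimation of power in item 1 is easy to verify numerically. The sketch below is not from the article; it is a Monte Carlo illustration with an assumed medium effect size and the small groups the passage warns about.

```python
# Monte Carlo check of "he overestimates power": with n = 15 per group and a
# medium true effect (Cohen's d = 0.5), a two-sample t-test at the 5 % level
# rejects far less often than researchers tend to expect. Illustrative only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, d, reps = 15, 0.5, 5000
hits = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)        # control group
    b = rng.normal(d, 1.0, n)          # treatment group with true effect d
    if ttest_ind(a, b).pvalue < 0.05:
        hits += 1
print(hits / reps)  # power comes out near 0.26, nowhere near a hoped-for 0.8
```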


Author(s):  

The study of the acoustic characteristics of building materials and structures is necessary for ensuring a comfortable life for people in urban conditions. Although the normative documents of developed countries contain a number of methods for determining these characteristics, there is still no common understanding of the characteristics themselves or of the ways to determine them adequately. In this work we have tried to systematize and give a critical review of the regulatory documents containing methods for determining the sound-absorbing properties of materials and the sound-insulating characteristics of structures. For the first time, the regularities inherent in the different methods of determining sound absorption and sound insulation characteristics have been revealed. The most characteristic indicators of sound absorption have been determined. The most commonly used building materials were ranked according to the noise reduction coefficient (NRC). The areas of application of the considered methods are presented, and their advantages and limitations are analyzed. Keywords: noise reduction coefficient, sound absorber, impedance tube, reverberation chamber
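As a sketch of the NRC ranking described above: NRC is conventionally computed (per ASTM C423) as the average of the absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. The materials and coefficients below are illustrative, not taken from the review.

```python
# NRC as conventionally defined (ASTM C423): average of the absorption
# coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05.
# The materials and coefficients below are illustrative, not from the review.

def nrc(a250, a500, a1000, a2000):
    mean = (a250 + a500 + a1000 + a2000) / 4.0
    return round(mean * 20) / 20          # snap to the nearest 0.05

materials = {
    "mineral wool": (0.55, 0.80, 0.90, 0.95),
    "acoustic foam": (0.25, 0.60, 0.85, 0.90),
    "bare concrete": (0.01, 0.02, 0.02, 0.02),
}
# Rank materials by NRC, highest (most absorbent) first.
for name, coeffs in sorted(materials.items(), key=lambda kv: -nrc(*kv[1])):
    print(name, nrc(*coeffs))
```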


Author(s):  
Ken Kobayashi ◽  
Naoki Hamada ◽  
Akiyoshi Sannai ◽  
Akinori Tanaka ◽  
Kenichi Bannai ◽  
...  

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets with small samples. In practice, such an approximation will be conducted in the postoptimization process and enable a better trade-off analysis.
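As a rough illustration of the model class (not the authors' fitting algorithm): a Bézier simplex is a weighted sum of control points with multinomial Bernstein weights over barycentric coordinates. The sketch below evaluates a degree-2 Bézier triangle, the M = 3 case, with made-up control points.

```python
# Evaluating a Bezier simplex: a weighted sum of control points with
# multinomial Bernstein weights. Here a degree-2 Bezier triangle (M = 3);
# the control points are made up and are not from the paper.
from math import factorial

def bezier_simplex(control, t):
    """control: dict mapping degree-d multi-indices to points (tuples);
    t: barycentric coordinates (non-negative, summing to 1)."""
    d = sum(next(iter(control)))               # degree of the simplex
    dim = len(next(iter(control.values())))    # dimension of output points
    out = [0.0] * dim
    for idx, p in control.items():
        coef = factorial(d)                    # multinomial coefficient d!/(i1!...iM!)
        weight = 1.0
        for i, ti in zip(idx, t):
            coef //= factorial(i)
            weight *= ti ** i
        for k in range(dim):
            out[k] += coef * weight * p[k]
    return out

ctrl = {  # degree-2 triangle in the plane
    (2, 0, 0): (0.0, 0.0), (0, 2, 0): (1.0, 0.0), (0, 0, 2): (0.0, 1.0),
    (1, 1, 0): (0.5, 0.1), (1, 0, 1): (0.1, 0.5), (0, 1, 1): (0.6, 0.6),
}
print(bezier_simplex(ctrl, (1.0, 0.0, 0.0)))  # a corner maps to its control point
```

Fitting then amounts to choosing control points so the surface passes near computed Pareto-optimal solutions, which is how a low-dimensional model can summarize a high-dimensional solution set from few samples.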


2017 ◽  
Vol 17 (9) ◽  
pp. 1623-1629 ◽  
Author(s):  
Berry Boessenkool ◽  
Gerd Bürger ◽  
Maik Heistermann

Abstract. High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
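The undersampling argument can be reproduced with a toy experiment: since an empirical quantile cannot exceed the sample maximum, a 99.9 % quantile estimated from a few dozen values is biased low, while large samples approach the true value. The sketch below uses a unit-exponential stand-in for rainfall intensities; a parametric GPD fit, as in the paper, would reduce the small-sample bias further.

```python
# Toy version of the undersampling effect: an empirical quantile cannot
# exceed the sample maximum, so with few wet hours the 99.9 % quantile is
# biased low. Unit-exponential "intensities" stand in for rainfall here.
import numpy as np

rng = np.random.default_rng(1)
true_q = -np.log(1 - 0.999)   # true 99.9 % quantile of Exp(1), about 6.9
small = [np.quantile(rng.exponential(size=50), 0.999) for _ in range(2000)]
large = [np.quantile(rng.exponential(size=5000), 0.999) for _ in range(200)]
print(true_q, np.mean(small), np.mean(large))
# the n = 50 estimates sit far below the truth; n = 5000 gets close
```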


2002 ◽  
Vol 112 (5) ◽  
pp. 2397-2397
Author(s):  
Ranny Nascimento ◽  
Moyses Zindeluk ◽  
Jose Flavio Feiteira

2020 ◽  
Vol 27 (1) ◽  
pp. 143-163
Author(s):  
Juan M. Villela-Suárez ◽  
Oscar A. Aguirre-Calderón ◽  
Eduardo J. Treviño-Garza ◽  
Marco A. González-Tagle ◽  
...  

Introduction: The choice of sample size is an important decision in the development of volume models and taper functions. Objective: To calculate the minimum sample size required for fitting compatible taper-volume functions for Pinus arizonica Engelm., P. durangensis Martínez and P. engelmannii Carr. in Chihuahua. Materials and methods: The methodology was divided into three phases: (i) fitting a linear regression model to the diameter-height data of 50 trees of each species in the three forest regions; (ii) calculating the minimum sample size required; and (iii) comparing the goodness of fit of the taper-volume function using both sample sizes. Results and discussion: The minimum number of trees calculated ranged from 53 (Pinus durangensis) to 88 (P. engelmannii), which falls within the interval reported in studies carried out to estimate the optimal sample size for developing taper functions. No significant differences (α = 0.05) were observed in the goodness of fit, in terms of R² and the root mean square error, between the full sample size and the calculated minimum sample size, and no significant effect was observed in the stem volume estimates. Conclusion: The use of small samples in fitting taper-volume models generates accurate estimates if adequate representation of the study population is ensured.
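The paper derives its minimum sizes from a diameter-height regression; as a generic analogue, the classical formula for estimating a mean to within a relative error E is n = t²CV²/E², iterated because the t value depends on n. The coefficient of variation below is made up for illustration and is not from the study.

```python
# Generic minimum-sample-size sketch for estimating a mean to within a
# relative error E at confidence 1 - alpha: n = t^2 * CV^2 / E^2, iterated
# because t depends on the degrees of freedom. The CV here is made up; the
# paper's own sizes come from a diameter-height regression, not this formula.
from math import ceil
from scipy.stats import t

def min_sample_size(cv, rel_error, alpha=0.05, n_start=50):
    n = n_start
    for _ in range(50):                       # iterate until n stabilizes
        t_val = t.ppf(1 - alpha / 2, df=n - 1)
        n_new = ceil((t_val * cv / rel_error) ** 2)
        if n_new == n:
            break
        n = n_new
    return n

n_min = min_sample_size(cv=0.40, rel_error=0.10)
print(n_min)  # in the mid-60s for a 40 % CV and a 10 % allowable error
```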


Methodology ◽  
2007 ◽  
Vol 3 (3) ◽  
pp. 89-99 ◽  
Author(s):  
A. Palmer ◽  
J.M. Losilla ◽  
J. Vives ◽  
R. Jiménez

Abstract. This simulation study compares different strategies to solve the problem of underestimating standard errors in the Poisson regression model when overdispersion is present. The study analyses the importance of sample size, Poisson distribution mean, and dispersion parameter in choosing the best index or estimate. Results show that standard error (SE) estimates obtained by resampling (nonparametric bootstrap and jackknife) are the least biased, followed by the direct index based on the χ², and the so-called robust indexes, in third place. Nevertheless, the inefficiency of resampling estimates is also evident, especially in small samples.
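The core comparison can be sketched on a simpler estimand (a mean rate rather than regression coefficients): with overdispersed counts, a model-based Poisson SE, which assumes the variance equals the mean, comes out visibly smaller than a nonparametric bootstrap SE. The data and settings below are illustrative, not from the study.

```python
# Model-based Poisson SE vs nonparametric bootstrap SE for a mean rate,
# on overdispersed (negative binomial) counts. Settings are illustrative;
# the study itself works with Poisson regression, not a bare mean.
import numpy as np

rng = np.random.default_rng(2)
y = rng.negative_binomial(2, 0.4, size=200)   # mean 3, variance 7.5
lam = y.mean()

se_poisson = np.sqrt(lam / len(y))            # assumes Var = mean (Poisson)
boot_means = [rng.choice(y, size=len(y), replace=True).mean()
              for _ in range(2000)]
se_boot = np.std(boot_means, ddof=1)          # resampling-based SE

print(se_poisson, se_boot)  # the bootstrap SE is clearly larger
```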

