Geometric Optimization of Concentrating Solar Collectors using Monte Carlo Simulation

2010 ◽ Vol 132 (4)
Author(s): A. J. Marston, K. J. Daun, M. R. Collins

This paper presents an optimization algorithm for designing linear concentrating solar collectors using stochastic programming. A Monte Carlo technique is used to quantify the performance of the collector design in terms of an objective function, which is then minimized using a modified Kiefer–Wolfowitz algorithm that uses sample size and step size controls. This process is more efficient than traditional “trial-and-error” methods and can be applied more generally than techniques based on geometric optics. The method is validated through application to the design of three different configurations of linear concentrating collector.

Author(s): A. J. Marston, K. J. Daun, M. R. Collins

This paper presents an optimization methodology for designing linear concentrating solar collectors. The proposed algorithm makes intelligent updates to the collector surface geometry according to specialized numerical procedures, making the process much more efficient than traditional “trial-and-error” methods and producing a final solution that is near-optimal. A Monte Carlo technique is used to quantify the performance of the collector design in terms of an objective function, which is then minimized using a modified Kiefer–Wolfowitz algorithm that uses sample size and step size controls. The methodology is applied to the design of a linear parabolic concentrating collector, successfully arriving at the known optimal solution.
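
To make the scheme concrete, here is a minimal sketch of a Kiefer–Wolfowitz iteration driven by a Monte Carlo objective estimate, with the sample-size and step-size controls the abstracts mention. The noisy quadratic test objective and all schedule constants are illustrative stand-ins, not the authors' ray-tracing implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_loss(x, n_samples):
    """Monte Carlo estimate of the objective for design vector x.
    Stand-in for a ray-tracing evaluation of collector performance:
    a noisy quadratic with minimum at x = 1 (illustrative only)."""
    noise = rng.standard_normal(n_samples).mean()  # averages n_samples draws
    return np.sum((x - 1.0) ** 2) + noise

def kiefer_wolfowitz(x0, iters=200, a=0.5, c=0.5, n0=100):
    """Modified KW iteration: decaying step size a_k and finite-difference
    width c_k, plus a growing Monte Carlo sample size n_k."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        a_k = a / k             # step-size control
        c_k = c / k ** 0.25     # finite-difference width control
        n_k = n0 + 10 * k       # sample-size control: estimator noise must shrink
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = c_k
            grad[i] = (estimate_loss(x + e, n_k) - estimate_loss(x - e, n_k)) / (2 * c_k)
        x -= a_k * grad
    return x

print(kiefer_wolfowitz(np.array([3.0, -2.0])))  # should approach [1, 1]
```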


2021 ◽ Vol 3 (1) ◽ pp. 61-89
Author(s): Stefan Geiß

Abstract: This study uses Monte Carlo simulation techniques to estimate the minimum levels of intercoder reliability required in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. The procedure is analogous to power calculations for experimental designs. In the most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥ .800 or ≥ .667 matches the simulation results, yielding acceptable α and β error rates. However, the simulation allows precise power calculations that take the specifics of each study’s context into account, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power. In studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help in both evaluating and designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). I supply equations, easy-to-use tables, and R functions to facilitate use of this framework, along with example code as an online appendix.
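
As an illustration of the simulation logic (not Geiß's published R functions), the sketch below estimates statistical power for a correlational hypothesis as a function of sample size, effect size, and coder agreement; mapping raw percent agreement onto coder behavior is a simplifying assumption, since the framework works with chance-adjusted coefficients.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

def power(n, effect, agreement, reps=2000, alpha=0.05):
    """Share of replications in which the coded variable still yields a
    significant correlation with the outcome, given imperfect coding.
    `agreement` is the probability a coder reproduces the true category."""
    hits = 0
    for _ in range(reps):
        truth = rng.integers(0, 2, n)                 # true binary construct
        y = effect * truth + rng.standard_normal(n)   # continuous outcome
        flip = rng.random(n) > agreement              # coder errors
        coded = np.where(flip, 1 - truth, truth)
        if pearsonr(coded, y)[1] < alpha:
            hits += 1
    return hits / reps

# Same design, near-perfect vs. borderline coding: power drops sharply
print(power(n=200, effect=0.5, agreement=0.95))
print(power(n=200, effect=0.5, agreement=0.70))
```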


Author(s): Zhigang Wei, Limin Luo, Burt Lin, Dmitri Konson, Kamran Nikbin

Good durability/reliability performance of products can be achieved by properly constructing and implementing design curves, which are usually obtained by analyzing test data such as fatigue S-N data. A good design curve construction approach should consider sample size, failure probability, and confidence level; these features are especially critical when the test sample size is small. The authors have developed a design S-N curve construction method based on the tolerance limit concept. However, recent studies have shown that the analytical solutions based on the tolerance limit approach may not be accurate for very small sample sizes because of the assumptions and approximations introduced in the analytical approach. In this paper a Monte Carlo simulation approach is used to construct design curves for test data with an assumed underlying normal (or lognormal) distribution. The factor K, which measures the confidence level of the test data, is compared between the analytical solution and the Monte Carlo simulation solutions. Finally, the design curves constructed with these methods are demonstrated and compared using fatigue S-N data with a small sample size.
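
For a normal model the factor K can be obtained both ways. The sketch below contrasts the classical noncentral-t solution for a one-sided tolerance limit with a direct Monte Carlo estimate; it is a generic illustration of the comparison, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def k_analytical(n, p=0.95, conf=0.95):
    """Classical one-sided tolerance-limit factor: the design curve
    x_bar - K*s lies below the lower p-th percentile with the given
    confidence. Standard noncentral-t result."""
    zp = stats.norm.ppf(p)
    return stats.nct.ppf(conf, df=n - 1, nc=zp * np.sqrt(n)) / np.sqrt(n)

def k_monte_carlo(n, p=0.95, conf=0.95, reps=200_000, seed=2):
    """Monte Carlo K: simulate samples from the assumed normal model and
    take the conf-quantile of the factor needed to cover the percentile."""
    rng = np.random.default_rng(seed)
    zp = stats.norm.ppf(p)
    x = rng.standard_normal((reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    # x_bar - K*s <= -z_p  <=>  K >= (x_bar + z_p)/s
    return np.quantile((xbar + zp) / s, conf)

for n in (4, 6, 10, 30):   # small sample sizes, where the gap matters most
    print(n, round(k_analytical(n), 3), round(k_monte_carlo(n), 3))
```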


2020 ◽  
Vol 4 (2) ◽  
pp. 350-364
Author(s): A. Shehu, N. S. Dauran

This paper assesses the performance of four multivariate treatment tests (Wilks’ lambda, Hotelling-Lawley trace, Roy’s largest root, and Pillai’s trace) on multivariate Sudoku square design models in terms of power. A Monte Carlo simulation was conducted to compare the power of the four tests for the four multivariate Sudoku square design models. The study used 0.062 as the interval value for the power difference between two tests at the same sample size: a test is considered more powerful, or to have an advantage, if the difference between the powers of the tests exceeds 0.062. The power results show that Hotelling-Lawley has an advantage over the three other tests at P = 2, while at P = 3 Wilks’ lambda has a power advantage over the other tests in all the multivariate Sudoku models.
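
A sketch of how such a power comparison can be run by simulation is shown below, using a plain one-way multivariate layout rather than the paper's Sudoku square design models; critical values for all four statistics are taken from a simulated null distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def manova_statistics(groups):
    """Wilks, Pillai, Hotelling-Lawley, and Roy statistics from the
    eigenvalues of E^{-1}H for a one-way multivariate layout."""
    all_x = np.vstack(groups)
    grand = all_x.mean(axis=0)
    H = sum(len(g) * np.outer(g.mean(0) - grand, g.mean(0) - grand) for g in groups)
    E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)
    lam = np.clip(np.linalg.eigvals(np.linalg.solve(E, H)).real, 0, None)
    return (np.prod(1 / (1 + lam)),      # Wilks' lambda (rejects when small)
            np.sum(lam / (1 + lam)),     # Pillai's trace
            np.sum(lam),                 # Hotelling-Lawley trace
            np.max(lam))                 # Roy's largest root

def simulate(shift, p=2, k=3, n=10, reps=2000):
    out = np.empty((reps, 4))
    for r in range(reps):
        groups = [rng.standard_normal((n, p)) for _ in range(k)]
        groups[0] += shift               # mean shift in one group (alternative)
        out[r] = manova_statistics(groups)
    return out

null, alt = simulate(0.0), simulate(0.8)
# Empirical critical values at alpha = .05; Wilks rejects in the lower tail
crit = [np.quantile(null[:, 0], 0.05)] + [np.quantile(null[:, j], 0.95) for j in (1, 2, 3)]
power = [np.mean(alt[:, 0] < crit[0])] + [np.mean(alt[:, j] > crit[j]) for j in (1, 2, 3)]
for name, pw in zip(["Wilks", "Pillai", "Hotelling-Lawley", "Roy"], power):
    print(name, round(pw, 3))
# Under the paper's rule, one test 'has an advantage' over another only
# when its power exceeds the other's by more than 0.062.
```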


2000 ◽ Vol 32 (2) ◽ pp. 480-498
Author(s): G. Yin

This work develops a class of stochastic global optimization algorithms that are Kiefer–Wolfowitz (KW) type procedures with added perturbing noise and partial step-size restarting. The motivation stems from the use of KW-type procedures and Monte Carlo versions of simulated annealing algorithms in a wide range of applications. Using weak convergence methods, we prove the convergence of the underlying algorithms under general noise processes.
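
A minimal one-dimensional sketch of the idea, under illustrative gain schedules: KW finite differences plus injected perturbing noise of slowly decreasing intensity, with a crude periodic reset standing in for the paper's partial step-size restarting.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_objective(x):
    """Multimodal test function observed with noise (illustrative)."""
    return x ** 2 + 3.0 * np.sin(3.0 * x) + 0.1 * rng.standard_normal()

def kw_global(x0, iters=4000, a=0.5, c=1.0, sigma=1.0, restart_every=1000):
    """KW finite-difference descent plus perturbing noise; the periodic
    reset of the gain index is a simplified stand-in for partial
    step-size restarting."""
    x, k0 = float(x0), 0
    for k in range(1, iters + 1):
        if k % restart_every == 0:
            k0 = k - 10                   # partially restart the step size
        j = k - k0
        a_k = a / j                       # step-size sequence
        c_k = c / j ** 0.25               # finite-difference width
        grad = (noisy_objective(x + c_k) - noisy_objective(x - c_k)) / (2 * c_k)
        # injected perturbing noise, decaying like sigma / sqrt(j log j)
        x += -a_k * grad + sigma / np.sqrt(j * np.log(j + 2)) * rng.standard_normal()
    return x

# The perturbing noise gives the iterate a chance to escape local minima;
# the noiseless objective's global minimum is near x = -0.47.
print(kw_global(x0=3.0))
```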


Gefahrstoffe ◽ 2019 ◽ Vol 79 (11-12) ◽ pp. 451-459
Author(s): W. Wosniok, S. Nickel, W. Schröder

Do we measure enough to compute statistically valid parameters from sample measurements, or do we measure too much, without any further gain in knowledge? This question actually lies at the heart of every empirical measurement design, yet it is rarely examined in environmental monitoring. This study further develops the methodology used in planning the sampling network of the German moss monitoring 2015 for determining statistically valid minimum sample sizes (MPZ, from German Mindestprobenzahlen). It serves to compute the arithmetic mean while meeting specified precision requirements for data that are neither normally nor lognormally distributed. The core element of the procedure for estimating the MPZ without assumptions about the distribution of the data is an iterative Monte Carlo simulation. The method uses reference data (earlier measured values) to determine, for a series of candidate MPZ values, the precision that each would achieve. A nonlinear regression between the MPZ candidates and their precision then yields the minimum MPZ that satisfies the stated precision requirement. To compute the MPZ, the program Sample Size for Arbitrary Distributions (SSAD) was developed in the open programming language R. The SSAD procedure closes a gap in the existing methodology for calculating statistically valid minimum sample sizes.
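
A distribution-free sketch of the iterative procedure described here (not the SSAD program itself): resample the reference data at candidate sample sizes, record the attained relative precision of the mean, fit a nonlinear regression, and solve for the minimum sample size that meets the requirement. The precision measure and the power-law regression form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def attained_precision(reference, n, reps=3000, conf=0.95):
    """Relative precision of the arithmetic mean at sample size n:
    half-width of the empirical conf-interval of resampled means,
    divided by the reference mean. Makes no normal/lognormal assumption."""
    means = np.array([rng.choice(reference, n, replace=True).mean() for _ in range(reps)])
    lo, hi = np.quantile(means, [(1 - conf) / 2, (1 + conf) / 2])
    return (hi - lo) / 2 / reference.mean()

# Skewed synthetic data standing in for earlier monitoring measurements
reference = rng.lognormal(mean=1.0, sigma=0.8, size=400) + rng.uniform(0, 2, 400)

candidates = np.array([10, 20, 40, 80, 160, 320])
prec = np.array([attained_precision(reference, n) for n in candidates])

# Nonlinear regression precision = a * n^(-b), fitted in log-log space
slope, log_a = np.polyfit(np.log(candidates), np.log(prec), 1)
a, b = np.exp(log_a), -slope

target = 0.10                        # required relative precision (10 %)
n_min = int(np.ceil((a / target) ** (1 / b)))
print(f"fit: precision ~ {a:.2f} * n^-{b:.2f}; minimum sample size = {n_min}")
```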

