Quantitative analyses of spectral measurement error based on Monte-Carlo simulation

2015
Author(s):  
Jingying Jiang ◽  
Congcong Ma ◽  
Qi Zhang ◽  
Junsheng Lu ◽  
Kexin Xu
2018
Author(s):  
Andre Kretzschmar ◽  
Gilles Gignac

We conducted a Monte-Carlo simulation within a latent variable framework by varying the following characteristics: population correlation (ρ = .10, .20, .30, .40, .50, .60, .70, .80, .90, and 1.00) and composite score reliability (coefficient omega: ω = .40, .50, .60, .70, .80, and .90). The sample sizes required to estimate stable measurement-error-free correlations were found to approach N = 490 for typical research scenarios (population correlation ρ = .20; composite score reliability ω = .70) and as high as N = 1,000+ for data associated with lower, but still sometimes observed, reliabilities (ω = .40 to .50). We encourage researchers to take reliability into consideration when evaluating the sample sizes required to produce stable measurement-error-free correlations.
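As an illustration of the design described above, the following minimal sketch (not the authors' code) simulates correlations between two composites whose reliability is fixed at a target ω, applies the classical disattenuation correction r/√(ω_x·ω_y), and tracks how often the corrected estimate lands near the population value. The ±.10 stability corridor and all parameter values are illustrative assumptions, not the study's exact criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_corrected_r(n, rho, omega, reps=2000):
    """Monte Carlo draws of the disattenuated correlation between two
    composites with population correlation rho and reliability omega."""
    # For unit-variance true scores, reliability = 1 / (1 + err_var),
    # so this error variance yields the target omega.
    err_var = (1 - omega) / omega
    cov = np.array([[1.0, rho], [rho, 1.0]])
    estimates = np.empty(reps)
    for i in range(reps):
        true = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        observed = true + rng.normal(0.0, np.sqrt(err_var), size=(n, 2))
        r_obs = np.corrcoef(observed.T)[0, 1]
        estimates[i] = r_obs / omega  # disattenuation: r / sqrt(wx * wy)
    return estimates

# One condition from the abstract: rho = .20, omega = .70
for n in (100, 250, 490, 1000):
    est = simulate_corrected_r(n, rho=0.20, omega=0.70)
    stable = np.mean(np.abs(est - 0.20) < 0.10)  # illustrative corridor
    print(f"N={n:4d}: mean r_c={est.mean():.3f}, P(|error|<.10)={stable:.2f}")
```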


2017
Author(s):  
Marko Bachl ◽  
Michael Scharkow

Conducting and reporting reliability tests has become a standard practice in content analytical research. However, the consequences of measurement error in coding data are rarely discussed or taken into consideration in subsequent analyses. In this article, we demonstrate how misclassification in content analysis leads to biased estimates and introduce matrix back-calculation as a simple remedy. Using Monte Carlo simulation, we investigate how different ways of collecting information about the misclassification process influence the effectiveness of error correction under varying conditions. The results show that error correction with an adequate set-up can often substantially reduce bias. We conclude with an illustrative example, extensions of the procedure, and some recommendations.
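The core of matrix back-calculation can be written in a few lines: if M holds the conditional probabilities that a unit of true category i is coded as category j (estimated, for example, from a validation subsample coded against a gold standard), the observed category distribution is Mᵀp_true, and the correction inverts that relation. The numbers in the sketch below are invented purely for illustration.

```python
import numpy as np

# Hypothetical 3-category coding scheme; M[i, j] = P(coded as j | true category i).
M = np.array([
    [0.90, 0.07, 0.03],
    [0.10, 0.85, 0.05],
    [0.05, 0.10, 0.85],
])

p_observed = np.array([0.40, 0.35, 0.25])  # category shares in the coded data

# Matrix back-calculation: p_obs = M.T @ p_true  =>  solve for p_true
p_true = np.linalg.solve(M.T, p_observed)
print(p_true)  # bias-corrected estimate of the true category distribution
```

In practice the misclassification matrix is itself estimated with error, so the back-calculated proportions can fall outside [0, 1]; how the misclassification information is collected therefore matters for the effectiveness of the correction, which is what the simulation in the article investigates.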


2013
Vol. 21 (2)
pp. 252-265
Author(s):  
Simon Hug

An increasing number of analyses in various subfields of political science employ Boolean algebra as proposed by Ragin's qualitative comparative analysis (QCA). This type of analysis is perfectly justifiable if the goal is to test deterministic hypotheses under the assumption of error-free measures of the employed variables. My contention is, however, that only in a very few research areas are our theories sufficiently advanced to yield deterministic hypotheses. Also, given the nature of our objects of study, error-free measures are largely an illusion. Hence, it is unsurprising that many studies employ QCA inductively and gloss over possible measurement errors. In this article, I address these issues and demonstrate the consequences of these problems with simple empirical examples. In an analysis similar to Monte Carlo simulation, I show that using Boolean algebra in an exploratory fashion without considering possible measurement errors may lead to dramatically misleading inferences. I then suggest remedies that help researchers to circumvent some of these pitfalls.
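The flavor of the argument can be reproduced with a toy simulation (a sketch under invented parameters, not Hug's actual analysis): generate crisp-set data from a deterministic rule Y = A·B, flip a small share of the dichotomous codings, and observe how the sufficiency consistency of the true configuration decays, so that a strict deterministic test would wrongly reject it.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
C = rng.integers(0, 2, n)  # causally irrelevant condition
Y = A & B                  # deterministic data-generating rule

def consistency(cond, outcome):
    """Share of cases satisfying `cond` that also show the outcome
    (crisp-set sufficiency consistency)."""
    sel = cond == 1
    return outcome[sel].mean() if sel.any() else np.nan

def flip(x, p):
    """Introduce measurement error: flip each coding with probability p."""
    return np.where(rng.random(x.size) < p, 1 - x, x)

for p in (0.00, 0.05, 0.10):
    Ae, Be, Ce, Ye = (flip(v, p) for v in (A, B, C, Y))
    print(f"error rate {p:.2f}: "
          f"cons(A*B)={consistency(Ae & Be, Ye):.2f}, "
          f"cons(A*B*C)={consistency(Ae & Be & Ce, Ye):.2f}")
```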


Author(s):  
Ryuichi Shimizu ◽  
Ze-Jun Ding

Monte Carlo simulation has become one of the most powerful tools for describing electron scattering in solids, leading to a more comprehensive understanding of the complicated mechanisms by which the various signals used in microbeam analysis are generated. The present paper proposes a practical model for the Monte Carlo simulation of the scattering processes of a penetrating electron and the generation of slow secondaries in solids. The model combines Gryzinski's inner-shell electron excitation function with the dielectric function, which accounts for the valence electron contribution to inelastic scattering processes, while cross-sections derived by the partial wave expansion method are used to describe elastic scattering processes. The benefit of this elastic scattering cross-section is its success in describing the anisotropy of the angular distribution of electrons elastically backscattered from Au in the low-energy region, shown in Fig. 1. Fig. 1(a) shows the elastic cross-section of 600 eV electrons for a single Au atom, clearly indicating that the angular distribution is no longer smooth, as the Rutherford scattering formula would predict, but exhibits so-called lobes at large scattering angles.
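The step that distinguishes this model from a Rutherford-based one is how the polar scattering angle is sampled from a tabulated partial-wave differential cross-section. The sketch below illustrates that step with inverse-CDF sampling over a toy cross-section (a screened-Rutherford shape plus an artificial large-angle lobe); it is not the actual partial-wave data for Au at 600 eV.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_scattering_angle(theta_grid, dcs, size=1):
    """Sample polar scattering angles from a tabulated differential
    cross-section dcs(theta) by inverse-CDF lookup. With partial-wave
    tables this reproduces the large-angle lobes that the smooth
    Rutherford formula misses."""
    # Weight by sin(theta) to turn d(sigma)/d(Omega) into a density in theta.
    pdf = dcs * np.sin(theta_grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.random(size), cdf, theta_grid)

# Toy stand-in for a partial-wave table (illustration only, not real Au data):
theta = np.linspace(1e-3, np.pi, 2000)
eta = 0.1                                           # arbitrary screening parameter
dcs = 1.0 / (1.0 - np.cos(theta) + 2.0 * eta) ** 2  # screened-Rutherford shape
dcs += 0.05 * np.exp(-((theta - 2.4) / 0.2) ** 2)   # artificial lobe near 140 deg

angles = sample_scattering_angle(theta, dcs, size=100_000)
print(f"fraction scattered beyond 90 deg: {(angles > np.pi / 2).mean():.3f}")
```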


Author(s):  
D. R. Liu ◽  
S. S. Shinozaki ◽  
R. J. Baird

The epitaxially grown (GaAs)Ge thin film has attracted much interest because it is a metastable alloy of a III-V compound semiconductor with germanium and a possible candidate for optoelectronic applications. It is important to be able to determine the composition of the film accurately, in particular whether or not the GaAs component is stoichiometric, but x-ray energy dispersive spectroscopy (EDS) cannot easily meet this need. The thickness of the film is usually about 0.5-1.5 μm. If the Kα peaks are used for quantification, the accelerating voltage must exceed 10 kV for these peaks to be excited. At such voltages the generation depth of the x-ray photons approaches 1 μm, as evidenced by a Monte Carlo simulation and actual x-ray intensity measurements discussed below. If a lower voltage is used to reduce the generation depth, the L peaks must be used instead. These L peaks, however, merge into one broad hump, simply because the atomic numbers of the three elements are relatively small and close together and the EDS energy resolution is limited.
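The quoted generation depth of roughly 1 μm can be sanity-checked without a full Monte Carlo run using the Anderson-Hasler x-ray range, a common analytical estimate; the critical excitation energy and density below are nominal values assumed for illustration.

```python
def anderson_hasler_range_um(e0_kev, ec_kev, density_g_cm3):
    """Anderson-Hasler estimate of the x-ray generation depth in micrometres:
    R_x = 0.064 * (E0**1.68 - Ec**1.68) / rho, with energies in keV."""
    return 0.064 * (e0_kev ** 1.68 - ec_kev ** 1.68) / density_g_cm3

# Ga K-shell critical excitation energy ~10.37 keV; film density taken
# as ~5.3 g/cm^3 (both nominal values, for illustration only).
for e0 in (12, 15, 20):
    r = anderson_hasler_range_um(e0, 10.37, 5.3)
    print(f"E0 = {e0} kV: x-ray generation depth ~ {r:.2f} um")
```

At 20 kV this estimate gives on the order of 1 μm, comparable to the film thickness, which is why Kα quantification samples the substrate as well as the film.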

