THE MEASUREMENT AND ANALYSIS OF TRIPLE CORRELATIONS

1964 ◽  
Vol 42 (6) ◽  
pp. 1101-1115 ◽  
Author(s):  
Philip B. Smith

The measurement and analysis of the intensity–direction correlation of gamma rays emitted in cascade following heavy-particle capture are treated. A procedure is discussed which is based upon the expansion of the triple-correlation intensity in terms of the set of angular functions orthogonal over the space of the emission (or absorption) directions. This is in contrast to the usual method, which expresses the correlation in terms of Legendre polynomials. In the analysis procedure proposed, the population parameters are found directly from the original data, with the gamma-radiation mixing ratios assigned. The least-squares equations representing the best fit to the data contain the population parameters linearly and are solved by a standard computer program which also gives the value of χ². The true solution is then found by varying the mixing ratios until a minimum in χ² is reached. In addition to the determination of the population parameters of the decaying state and the mixing ratios of the gamma rays in the cascade, the calculation of the error matrix of these quantities, and the calculation of the formation parameters in simple capture, are described.
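The two-stage fit described above (a linear least-squares solve for the population parameters at each fixed mixing ratio, then a scan of the mixing ratio for the χ² minimum) can be sketched in a few lines. The delta-dependent basis function and the synthetic data below are hypothetical stand-ins, not the actual angular functions of the analysis:

```python
import numpy as np

def p2(x): return (3 * x**2 - 1) / 2
def p4(x): return (35 * x**4 - 30 * x**2 + 3) / 8

def chi2_at_delta(delta, x, data, sigma):
    """For a fixed mixing ratio delta, build a design matrix that is
    linear in the population parameters, solve the weighted linear
    least-squares problem, and return chi^2.  The delta-dependent
    basis column is a hypothetical stand-in for the orthogonal
    angular functions of the actual analysis."""
    A = np.column_stack([
        np.ones_like(x),
        (p2(x) + delta * p4(x)) / (1 + delta**2),
    ])
    Aw, dw = A / sigma[:, None], data / sigma
    params, *_ = np.linalg.lstsq(Aw, dw, rcond=None)
    r = (A @ params - data) / sigma
    return float(r @ r)

# Synthetic, noiseless "measurements" generated with delta = 0.5.
x = np.cos(np.linspace(0.0, np.pi / 2, 6))
sigma = np.full(x.size, 0.02)
data = 1.0 + 0.3 * (p2(x) + 0.5 * p4(x)) / (1 + 0.5**2)

# Outer loop: vary the mixing ratio until the chi^2 minimum is found.
deltas = np.linspace(-3.0, 3.0, 121)
chi2s = np.array([chi2_at_delta(d, x, data, sigma) for d in deltas])
best = deltas[int(np.argmin(chi2s))]
```

Because the population parameters enter linearly, each inner solve is a standard weighted least-squares problem; only the mixing-ratio scan is nonlinear.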

1960 ◽  
Vol 38 (7) ◽  
pp. 927-940 ◽  
Author(s):  
A. E. Litherland ◽  
G. J. McCallum

The Mg26(He4, nγ)Si29 reaction has been used to illustrate the simplifications introduced in the interpretation of triple angular correlations by choosing a target and bombarding particles of zero spin and by observing the emitted particles, in this case neutrons, in a counter fixed at 0° to the beam. The angular correlations of the gamma rays with respect to the incident beam then depend only upon the properties of the final states in the residual nucleus. The angular correlation of the electric quadrupole 2.03-Mev gamma ray is predicted uniquely by theory and this prediction has been verified experimentally. The angular correlations of the 1.28-Mev and 2.43-Mev gamma rays have yielded for the E2/M1 amplitude mixing ratios +0.25 ± 0.05 or −3.4 ± 0.5 and −0.26 ± 0.08 or −1.10 ± 0.16, respectively. In addition, the experiment provides an illustration of the value of the recently discovered technique of neutron–gamma-ray discrimination in an organic scintillator.


1977 ◽  
Vol 55 (2) ◽  
pp. 175-179 ◽  
Author(s):  
H. E. Bosch ◽  
V. M. Silbergleit ◽  
M. Davidson ◽  
J. Davidson

An investigation of the gamma–gamma ray angular correlations following the decay of 109Pd was made using a Ge(Li) semiconductor counter and a NaI(Tl) gamma-ray detector. Coincidence measurements at six different angles were made between the 311 keV gamma ray (gated in the movable counter) and the 390, 413, 424, 551, and 558 keV gamma rays (displayed in a multichannel analyzer (MCA)). Chance coincidences as well as coincidence background were taken into account. The following spins and mixing ratios were determined: 701 keV level 3/2, δ(390) = 0.19 ± 0.06; 724 keV level 3/2, δ(413) = 0.18 ± 0.05; 735 keV level 5/2, δ(424) = −0.27 ± 0.03; 862 keV level 5/2, δ(551) = −0.28 ± 0.04; 869 keV level 5/2, δ(558) = −0.26 ± 0.05. The results indicate that the anisotropies are consistent with mixing ratios less than 28% in all cases.
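Mixing ratios such as those quoted above are conventionally extracted by fitting the measured correlation W(θ) = A0[1 + a2 P2(cos θ) + a4 P4(cos θ)] to coincidence counts at the measurement angles. A minimal sketch of the linear least-squares step follows; the noiseless synthetic counts stand in for chance- and background-corrected coincidence data:

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Six measurement angles (degrees), as in a movable-counter setup.
theta = np.radians([90, 105, 120, 135, 150, 180])
x = np.cos(theta)

# Synthetic coincidence counts for illustration, generated from
# assumed correlation coefficients a2 and a4.
a2_true, a4_true = -0.15, 0.02
counts = 1.0e4 * (1
                  + a2_true * legval(x, [0, 0, 1])         # P2(cos th)
                  + a4_true * legval(x, [0, 0, 0, 0, 1]))  # P4(cos th)

# Linear least squares for [A0, A0*a2, A0*a4].
A = np.column_stack([
    np.ones_like(x),
    legval(x, [0, 0, 1]),
    legval(x, [0, 0, 0, 0, 1]),
])
coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
a2, a4 = coeffs[1] / coeffs[0], coeffs[2] / coeffs[0]
```

The fitted a2 and a4 are then compared against the theoretical coefficients as functions of spin and δ to pick out the allowed mixing-ratio solutions.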


Geophysics ◽  
1973 ◽  
Vol 38 (2) ◽  
pp. 310-326 ◽  
Author(s):  
R. J. Wang ◽  
S. Treitel

The normal equations for the discrete Wiener filter are conventionally solved with Levinson’s algorithm. The resultant solutions are exact except for numerical roundoff. In many instances, approximate rather than exact solutions satisfy seismologists’ requirements. The so‐called “gradient” or “steepest descent” iteration techniques can be used to produce approximate filters at computing speeds significantly higher than those achievable with Levinson’s method. Moreover, gradient schemes are well suited for implementation on a digital computer provided with a floating‐point array processor (i.e., a high‐speed peripheral device designed to carry out a specific set of multiply‐and‐add operations). Levinson’s method (1947) cannot be programmed efficiently for such special‐purpose hardware, and this consideration renders the use of gradient schemes even more attractive. It is, of course, advisable to utilize a gradient algorithm which generally provides rapid convergence to the true solution. The “conjugate‐gradient” method of Hestenes (1956) is one of a family of algorithms having this property. Experimental calculations performed with real seismic data indicate that adequate filter approximations are obtainable at a fraction of the computer cost required for use of Levinson’s algorithm.
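A minimal sketch of the conjugate-gradient iteration applied to Toeplitz normal equations follows. The autocorrelation and desired-output crosscorrelation values are synthetic stand-ins for quantities estimated from seismic traces, and the "exact" comparison solution is obtained with a dense solver rather than Levinson's recursion:

```python
import numpy as np

def conjugate_gradient(R, g, iters=50, tol=1e-12):
    """Approximately solve R f = g by the conjugate-gradient method
    of Hestenes (R symmetric positive definite).  Each step consists
    only of matrix-vector multiply-and-add operations, the kind an
    array processor executes efficiently."""
    f = np.zeros_like(g)
    r = g - R @ f          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Rp = R @ p
        alpha = rs / (p @ Rp)
        f = f + alpha * p
        r = r - alpha * Rp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f

# Synthetic autocorrelation (first row/column of the Toeplitz normal
# equations) and desired crosscorrelation.
acf = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
n = acf.size
R = np.array([[acf[abs(i - j)] for j in range(n)] for i in range(n)])
g = np.array([1.0, 0.3, 0.1, 0.0, 0.0])

f_cg = conjugate_gradient(R, g)
f_exact = np.linalg.solve(R, g)
```

In exact arithmetic the iteration terminates in at most n steps; the practical attraction noted in the abstract is that an adequate approximate filter often emerges after far fewer iterations.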


2015 ◽  
Vol 15 (9) ◽  
pp. 5083-5097 ◽  
Author(s):  
M. D. Shaw ◽  
J. D. Lee ◽  
B. Davison ◽  
A. Vaughan ◽  
R. M. Purvis ◽  
...  

Abstract. Highly spatially resolved mixing ratios of benzene and toluene, nitrogen oxides (NOx) and ozone (O3) were measured in the atmospheric boundary layer above Greater London during the period 24 June to 9 July 2013 using a Dornier 228 aircraft. Toluene and benzene were determined in situ using a proton transfer reaction mass spectrometer (PTR-MS), NOx by dual-channel NOx chemiluminescence, and O3 mixing ratios by UV absorption. Average mixing ratios observed over inner London at 360 ± 10 m a.g.l. were 0.20 ± 0.05, 0.28 ± 0.07, 13.2 ± 8.6, 21.0 ± 7.3 and 34.3 ± 15.2 ppbv for benzene, toluene, NO, NO2 and NOx respectively. Linear regression analysis of the NO2, benzene and toluene mixing ratios reveals strong covariance, indicating that these compounds predominantly share the same or co-located sources within the city. Average mixing ratios measured at 360 ± 10 m a.g.l. over outer London were always lower than over inner London. Where traffic densities were highest, the toluene/benzene (T/B) concentration ratios were highest (average of 1.8 ± 0.5 ppbv ppbv⁻¹), indicative of strong local sources. Daytime maxima in NOx, benzene and toluene mixing ratios were observed in the morning (~40 ppbv NOx, ~350 pptv toluene and ~200 pptv benzene) and in the mid-afternoon for ozone (~40 ppbv O3), all at 360 ± 10 m a.g.l.
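The covariance analysis mentioned above amounts to an ordinary least-squares regression between tracer mixing ratios. A minimal sketch with synthetic data follows; the 1.8 ppbv ppbv⁻¹ slope is chosen only to mimic the reported T/B ratio, not derived from the flight data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic co-emitted tracers (values in ppbv): toluene roughly
# proportional to benzene, standing in for co-located traffic sources.
benzene = rng.uniform(0.1, 0.3, 50)
toluene = 1.8 * benzene + rng.normal(0.0, 0.01, 50)

# Ordinary least-squares slope/intercept and Pearson correlation.
slope, intercept = np.polyfit(benzene, toluene, 1)
r = np.corrcoef(benzene, toluene)[0, 1]

# Mean toluene/benzene concentration ratio (ppbv ppbv^-1).
tb_ratio = np.mean(toluene / benzene)
```

A correlation coefficient near 1 is what the abstract means by "strong covariance"; the fitted slope is the emission-ratio estimate that is compared between flight legs.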


Author(s):  
Paula Rangel Pestana Allegro ◽  
Márcia de Almeida Rizzutto ◽  
Nemitala Added ◽  
Vitor Ângelo Paulino de Aguiar ◽  
Dennis Lozano Toufen ◽  
...  

This study presents an alternative method to determine isotope ratios using a medium-energy accelerator while simultaneously measuring the charged particles and gamma rays produced in a nuclear reaction.


Author(s):  
Junxiao Wang ◽  
Shuqing Wang ◽  
Lei Zhang ◽  
Maogen Su ◽  
Duixiong Sun ◽  
...  

Abstract: We propose a theoretical spatio-temporal imaging method based on the thermal model of laser ablation and a two-dimensional axisymmetric multi-species hydrodynamics model. Using the intensity formula, the integral intensity of spectral lines can be calculated and the corresponding intensity-distribution images drawn. Further image processing, such as normalization, determination of the minimum intensity, combination, and color filtering, yields a relatively clear image of the species distribution in the plasma. Using this method, we simulated the plasma ablated from an Al-Mg alloy by different laser energies under 1 atm argon and obtained theoretical spatio-temporal distributions of the Mg I, Mg II, Al I, Al II and Ar I species, which agree closely with the experimental results from differential imaging. Compared with the experimental decay time constants, the agreement is better at low laser energy, indicating that the theoretical model is better suited to plasmas dominated by a laser-supported combustion wave.
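The image-processing steps mentioned (normalization, minimum-intensity determination, combination) can be sketched as follows. The Gaussian species maps are hypothetical placeholders for the simulated line-intensity distributions, and the per-pixel maximum is just one simple way to combine maps before color filtering:

```python
import numpy as np

def process_species_image(intensity, min_fraction=0.02):
    """Normalize a 2-D line-intensity map to [0, 1] and zero out
    pixels below a minimum-intensity threshold, mimicking the
    normalization and minimum-intensity steps described above."""
    img = intensity - intensity.min()
    peak = img.max()
    if peak > 0:
        img = img / peak
    img[img < min_fraction] = 0.0
    return img

# Two hypothetical species maps on a 64 x 64 grid, combined into one
# composite by taking the per-pixel maximum.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
mg_map = np.exp(-((x - 0.2) ** 2 + y ** 2) / 0.05)
al_map = np.exp(-((x + 0.2) ** 2 + y ** 2) / 0.05)
composite = np.maximum(process_species_image(mg_map),
                       process_species_image(al_map))
```

Color filtering would then assign each species' processed map its own channel before the maps are overlaid.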


Author(s):  
Ahmed Fahim

k-means is the best-known algorithm for data clustering in data mining. Its most important advantages are its simplicity, its speed of convergence to local minima, and its linear time complexity. Its most important open problems are the selection of the initial centers and the determination of the exact number of clusters in advance. This paper proposes a solution to both problems together by adding a preprocessing step that estimates the expected number of clusters in the data and provides better initial centers. Many studies address each of these problems separately, but none solves both together. The preprocessing step requires O(n log n) time, where n is the size of the dataset. It produces an initial partitioning of the data without the number of clusters being specified in advance, and then computes the means of the initial clusters. k-means is then applied to the original data, using the information from the preprocessing step, to obtain the final clusters. The proposed method was tested on many benchmark datasets, and the experimental results show its efficiency.
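The overall flow (an O(n log n) preprocess that estimates the number of clusters and initial centers, followed by standard k-means on the original data) can be sketched as follows. The gap-based preprocess here is a hypothetical stand-in, since the paper's own preprocessing step is not detailed in this abstract:

```python
import numpy as np

def kmeans(data, centers, iters=100):
    """Standard Lloyd iterations of k-means, starting from the
    initial centers supplied by the preprocessing step."""
    centers = centers.copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([data[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def estimate_k_and_centers(data):
    """Hypothetical O(n log n) preprocess: sort points by their first
    coordinate and split at unusually large gaps to guess k, then
    take evenly spaced points of the sorted data as initial centers."""
    order = np.argsort(data[:, 0])             # O(n log n)
    proj = data[order, 0]
    gaps = np.diff(proj)
    k = 1 + int((gaps > 10 * gaps.mean()).sum())
    idx = np.linspace(0, len(data) - 1, k).astype(int)
    return k, data[order][idx]

# Three well-separated synthetic blobs of 50 points each.
rng = np.random.default_rng(2)
blobs = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in (0.0, 3.0, 6.0)])
k, init = estimate_k_and_centers(blobs)
labels, centers = kmeans(blobs, init)
```

With initial centers already placed in distinct dense regions, the Lloyd iterations converge in a handful of passes rather than wandering between poor local minima.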

