Precision and consistency of astrocombs

2020 ◽  
Vol 493 (3) ◽  
pp. 3997-4011 ◽  
Author(s):  
Dinko Milaković ◽  
Luca Pasquini ◽  
John K Webb ◽  
Gaspare Lo Curto

ABSTRACT Astrocombs are ideal spectrograph calibrators whose limiting precision can be derived using a second, independent astrocomb system. We therefore analyse data from two astrocombs (one 18 GHz and one 25 GHz) used simultaneously on the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph at the European Southern Observatory. The first aim of this paper is to quantify the wavelength repeatability achieved by a particular astrocomb. The second aim is to measure the wavelength calibration consistency between independent astrocombs, that is, to place limits on, or measure, any zero-point offsets between them. We present three main findings, each with important implications for exoplanet detection, fundamental constant variation, and redshift drift measurements. First, wavelength calibration procedures matter: using multiple segmented polynomials within one echelle order yields significantly better wavelength calibration than a single higher-order polynomial. Segmented polynomials should be used in all applications aimed at precise spectral line position measurements. Secondly, we found that changing astrocombs causes significant zero-point offsets (${\approx}60\, {\rm cm\, s}^{-1}$ in our raw data), which we removed. Thirdly, astrocombs achieve a precision of ${\lesssim}4\, {\rm cm\, s}^{-1}$ in a single exposure (${\approx}10$ per cent above the measured photon-limited precision) and 1 cm s−1 when time-averaged over a few hours, confirming previous results. Astrocombs therefore meet the technological requirements for detecting Earth–Sun analogues and for measuring variations of fundamental constants and the redshift drift.
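The segmented-polynomial finding can be illustrated with a toy numerical sketch (all numbers below are hypothetical, not HARPS data): a synthetic pixel-to-wavelength map with a localized distortion is fitted once with a single high-order polynomial over the whole order, and once with independent low-order polynomials on contiguous segments, comparing the RMS residuals.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Toy "echelle order": comb-line pixel positions and wavelengths. The true
# map has a localized distortion (the Gaussian bump) that a single global
# polynomial struggles to follow. Values are illustrative only.
pix = np.linspace(0.0, 4096.0, 200)
wave = (5000.0 + 0.01 * pix + 2e-7 * pix**2
        + 0.005 * np.exp(-((pix - 1800.0) / 200.0) ** 2))
wave_obs = wave + rng.normal(0.0, 1e-5, pix.size)  # photon-noise-like scatter

def rms(r):
    return float(np.sqrt(np.mean(r**2)))

# One 7th-order polynomial across the whole order.
# Polynomial.fit rescales the domain internally, keeping the fit well-conditioned.
single = rms(wave_obs - Polynomial.fit(pix, wave_obs, 7)(pix))

# Sixteen independent cubic segments.
seg_resid = []
for chunk in np.array_split(np.arange(pix.size), 16):
    fit = Polynomial.fit(pix[chunk], wave_obs[chunk], 3)
    seg_resid.append(wave_obs[chunk] - fit(pix[chunk]))
segmented = rms(np.concatenate(seg_resid))

print(f"single polynomial RMS: {single:.2e}, segmented RMS: {segmented:.2e}")
```

In this sketch the segmented fit tracks the local distortion that the global polynomial smooths over, so its residuals are markedly smaller.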

2021 ◽  
Vol 21 (10) ◽  
pp. 265
Author(s):  
Jian-Ping Xiong ◽  
Bo Zhang ◽  
Chao Liu ◽  
Jiao Li ◽  
Yong-Heng Zhao ◽  
...  

Abstract The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) began a medium-resolution spectroscopic (MRS, R ∼ 7500) survey in October 2018. The main scientific goals of the MRS, including binary stars, pulsators and other variable stars, were addressed with a time-domain spectroscopic survey. However, systematic errors, including the bias induced by wavelength calibration and the systematic differences between spectrographs, have to be considered carefully during radial velocity measurement. In this work, we provide a technique to correct the systematics in the wavelength calibration based on relative radial velocity measurements from LAMOST MRS spectra. We show that, for stars with multi-epoch spectra, the systematic bias induced by exposures on different nights can be corrected well for each LAMOST MRS spectrograph. In addition, the precision of the radial velocity zero-point of multi-epoch time-domain observations reaches below 0.5 km s−1. As a by-product, we also provide constant star candidates, which can serve as secondary radial velocity standard stars for LAMOST MRS time-domain surveys.
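The relative-correction idea can be sketched as follows (a minimal toy, not the LAMOST pipeline; star counts, offsets and noise levels are assumptions): for stars with multi-epoch spectra, remove each star's mean radial velocity, estimate the per-epoch zero-point offset as the median residual over stars, and subtract it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-epoch data: 50 constant stars observed at 6 epochs, each epoch
# carrying a zero-point offset from wavelength-calibration systematics.
n_stars, n_epochs = 50, 6
true_offsets = np.array([0.0, 0.8, -0.5, 0.3, -1.1, 0.6])   # km/s, hypothetical
rv_true = rng.normal(0.0, 30.0, n_stars)                     # systemic RVs
rv_obs = (rv_true[:, None] + true_offsets[None, :]
          + rng.normal(0.0, 0.2, (n_stars, n_epochs)))       # 0.2 km/s noise

# Remove each star's mean, then take the median residual per epoch;
# the median is robust to a few genuine variables or binaries in the sample.
resid = rv_obs - rv_obs.mean(axis=1, keepdims=True)
est_offsets = np.median(resid, axis=0)
rv_corr = rv_obs - est_offsets[None, :]

scatter_before = float(np.std(rv_obs - rv_true[:, None]))
scatter_after = float(np.std(rv_corr - rv_true[:, None]))
print(f"scatter before: {scatter_before:.2f} km/s, after: {scatter_after:.2f} km/s")
```

Because only the per-star means are subtracted, this recovers the epoch offsets up to a common constant, which is exactly what a relative zero-point correction needs.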


1994 ◽  
Vol 38 ◽  
pp. 47-57 ◽  
Author(s):  
D. L. Bish ◽  
Steve. J. Chipera

Abstract Accuracy, or how well a measurement conforms to the true value of a parameter, is important in XRD analyses in three primary areas: 1) 2θ position or d-spacing; 2) peak shape; and 3) intensity. Instrumental factors affecting accuracy include zero-point, axial-divergence, and specimen-displacement errors, step size, and even uncertainty in X-ray wavelength values. Sample factors affecting accuracy include specimen transparency, structural strain, crystallite size, and preferred-orientation effects. In addition, a variety of other sample-related factors influence the accuracy of quantitative analyses, including variations in sample composition and order/disorder. The conventional method of assessing accuracy during experimental diffractometry measurements is through the use of certified internal standards. However, it is possible to obtain highly accurate d-spacings without an internal standard using a well-aligned powder diffractometer coupled with data analysis routines that allow analysis of, and correction for, important systematic errors. The first consideration in such measurements is the use of methods yielding precise peak positions, such as profile fitting. High accuracy can be achieved if specimen-displacement, specimen-transparency, axial-divergence, and possibly zero-point corrections are included in data analysis. It is also important to consider that most common X-ray wavelengths (other than Cu Kα1) have not been measured with high accuracy. Accuracy in peak-shape measurements is important in the separation of instrumental and sample contributions to profile shape, e.g., in crystallite size and strain measurements. The instrumental contribution must be determined accurately using a standard material free from significant sample-related effects, such as NIST SRM 660 (LaB6).
Although full-pattern fitting methods for quantitative analysis are available, the presence of numerous systematic errors makes the use of an internal standard, such as α-alumina, mandatory to ensure accuracy; accuracy is always suspect when using external-standard, constrained-total quantitative analysis methods. One of the most significant problems in quantitative analysis remains the choice of representative standards. Variations in sample chemistry, order/disorder, and preferred orientation can be accommodated only with a thorough understanding of the coupled effects of all three on intensities. It is important to recognize that sample preparation methods that optimize accuracy for one type of measurement may not be appropriate for another. For example, the very fine crystallite size that is optimum for quantitative analysis is unnecessary, and can even be detrimental, in d-spacing and peak-shape measurements.
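The specimen-displacement and zero-point corrections mentioned above can be sketched numerically. The snippet below is an illustrative toy (the goniometer radius, displacement, zero-point error, and the silicon d-spacings are assumptions, not values from this paper): it applies the standard Bragg–Brentano displacement shift, Δ(2θ) = −2s·cosθ/R, plus a zero-point term to synthetic peak positions, then shows that correcting both recovers accurate d-spacings through Bragg's law.

```python
import numpy as np

# Bragg-Brentano geometry: a specimen displacement s shifts observed peaks by
# delta(2theta) = -2*s*cos(theta)/R radians. Toy values, for illustration.
wavelength = 1.5405929   # Cu K-alpha1, angstroms
R = 240.0                # goniometer radius, mm (assumed)
s = 0.05                 # specimen displacement, mm (assumed)
zero = 0.01              # zero-point error, degrees 2theta (assumed)

d_true = np.array([3.1355, 1.9201, 1.6375, 1.3577])  # Si 111/220/311/400, angstroms
theta = np.arcsin(wavelength / (2.0 * d_true))       # Bragg angles, radians

# Simulated observed peak positions with both systematic errors applied.
two_theta_obs = (np.degrees(2.0 * theta) + zero
                 - np.degrees(2.0 * s * np.cos(theta) / R))

# Naive d-spacings, ignoring the systematics.
d_raw = wavelength / (2.0 * np.sin(np.radians(two_theta_obs) / 2.0))

# Correct zero-point, then undo the displacement shift (one fixed-point step
# is enough because the shift is tiny), and re-apply Bragg's law.
theta_corr = np.radians(two_theta_obs - zero) / 2.0
theta_corr = theta_corr + s * np.cos(theta_corr) / R
d_corr = wavelength / (2.0 * np.sin(theta_corr))

print("max |d error| raw:", np.max(np.abs(d_raw - d_true)),
      "corrected:", np.max(np.abs(d_corr - d_true)))
```

Even this small displacement moves d by ~10⁻³ Å at low angles, well above the precision profile fitting can deliver, which is why the corrections matter.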


2012 ◽  
Vol 27 (11) ◽  
pp. 1250041 ◽  
Author(s):  
MU-LIN YAN ◽  
SEN HU ◽  
WEI HUANG ◽  
NENG-CHAO XIAO

The recent OPERA experiment on superluminal neutrinos has deep consequences in cosmology. A fundamental constant in cosmology is the cosmological constant. From observations one can estimate the effective cosmological constant $\Lambda_{\rm eff}$, which is the sum of the quantum zero-point energy $\Lambda_{\rm dark\ energy}$ and the geometric cosmological constant $\Lambda$. The OPERA experiment can be applied to determine the geometric cosmological constant $\Lambda$. It is the first study to distinguish the contributions of $\Lambda$ and $\Lambda_{\rm dark\ energy}$ from each other by experiment. The determination is based on an explanation of the OPERA experiment in the framework of Special Relativity with de Sitter spacetime symmetry.


2020 ◽  
Vol 28 (4) ◽  
pp. 5768
Author(s):  
Yixuan Xu ◽  
Jianxin Li ◽  
Caixun Bai ◽  
Ming Wei ◽  
Jie Liu ◽  
...  

2021 ◽  
Vol 30 (3) ◽  
pp. 2-7
Author(s):  
Myoung-Sun HEO ◽  
Dai-Hyuk YU ◽  
Won-Kyu LEE

Frequencies have been the most accurately measured physical quantity since the second was defined in 1967 based on the microwave atomic transition of a Cs atom. Recently, atomic clocks using optical frequency transitions have shown an order of magnitude better accuracy than microwave clocks. Thanks to their high accuracy and resolution, atomic clocks have become a new tool for investigations involving fundamental science and technology, such as the search for dark matter, gravitational wave detection, the temporal variation of fundamental constants, relativistic geodesy, quantum metrology, and the advanced Global Navigation Satellite System (GNSS). In addition, a redefinition of the second based on the optical frequency is expected. In this paper, we review the principles and applications of optical clocks.


2009 ◽  
Vol 5 (H15) ◽  
pp. 307-307 ◽  
Author(s):  
Claudia G. Scóccola ◽  
Susana J. Landau ◽  
Héctor Vucetich

Abstract We have studied the role of fundamental constants in an updated recombination scenario. We focus on the time variation of the fine structure constant α and the electron mass $m_e$ in the early Universe. In recent years, helium recombination has been studied in great detail, revealing the importance of taking new physical processes into account in the calculation of the recombination history. The equations to solve the detailed recombination scenario can be found, for example, in Wong et al. 2008. In the equation for helium recombination, a term which accounts for the semi-forbidden transition $2^3{\rm P}_1$–$1^1{\rm S}_0$ is added. Furthermore, the continuum opacity of HI is taken into account by a modification of the escape probability of the photons that excite helium atoms, using the fitting formulae proposed by Kholupenko et al. 2007. We have analysed the dependences of the quantities involved in the detailed recombination scenario on α and $m_e$. We have performed a statistical analysis with COSMOMC to constrain the variation of α and $m_e$ at the time of neutral hydrogen formation. The observational set used for the analysis comprised the WMAP 5-year temperature and temperature–polarization power spectra, other CMB experiments such as CBI, ACBAR and BOOMERANG, and the power spectrum of the 2dFGRS. Considering the joint variation of α and $m_e$ we obtain the following bounds: $-0.011 < \frac{\Delta\alpha}{\alpha_0} < 0.019$ and $-0.068 < \frac{\Delta m_e}{(m_e)_0} < 0.030$ (68% c.l.). When considering the variation of only one fundamental constant we obtain: $-0.010 < \frac{\Delta\alpha}{\alpha_0} < 0.008$ and $-0.04 < \frac{\Delta m_e}{(m_e)_0} < 0.02$ (68% c.l.). We compare these results with the ones presented in Landau et al. 2008, which were obtained in the standard recombination scenario and using the WMAP 3-year release data. The constraints are tighter in the current analysis, as expected, since we are working with more accurate WMAP data.
The bounds obtained are consistent with null variation for both α and $m_e$, but in the present analysis the 68% confidence limits on the variation of both constants have changed. In the case of α, the present limit is more consistent with null variation than the previous one, while in the case of $m_e$ the single-parameter limits have moved toward lower values. To study the origin of this difference, we have performed another statistical analysis, namely the analysis of the standard recombination scenario together with WMAP5 data, the other CMB data sets and the 2dFGRS power spectrum. We see that the change in the obtained results is due to the new WMAP data set, and not to the new recombination scenario. The obtained results for the cosmological parameters agree within 1σ with the ones obtained by the WMAP collaboration without considering variation of fundamental constants.


Author(s):  
James E. Faller

Determinations of the Newtonian constant of gravitation (big G ) fit into the oftentimes-unappreciated area of physics called precision measurement—an area which includes precision measurements, null experiments and determinations of the fundamental constants. The determination of big G —a measurement which on the surface appears deceptively simple—continues to be one of Nature's greatest challenges to the skills and cunning of experimental physicists. In spite of the fact that, on the scale of the Universe, big G 's effects are so large as to single-handedly hold everything together, on the scale of an individual research laboratory, big G 's effects are so small that they go unnoticed…hidden in a background of much larger forces and noise sources. It is this ‘smallness’ that makes determining the precise value of this (seemingly unrelated to the rest of physics) fundamental constant so difficult.


2010 ◽  
Vol 2010 ◽  
pp. 1-5 ◽  
Author(s):  
A. Ferrero ◽  
L. Hanlon ◽  
R. Felletti ◽  
J. French ◽  
G. Melady ◽  
...  

The Watcher robotic telescope was developed primarily to perform rapid optical follow-up observations of Gamma-Ray Bursts (GRBs). Secondary scientific goals include blazar monitoring and variable star studies. An automated photometry pipeline to rapidly analyse data from Watcher has been implemented. Details of the procedures to determine the image zero-point, instrumental source measurements, and the limiting magnitude are presented. Sources of uncertainty are assessed and the performance of the pipeline is tested by comparison with a number of catalogue sources.
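The core zero-point step of such a pipeline can be sketched simply (a toy illustration, not the Watcher pipeline; the star counts, zero-point and noise level are assumptions): match detected stars to a catalogue, difference catalogue and instrumental magnitudes, and take a sigma-clipped median as the image zero-point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy field: 80 matched stars with known catalogue magnitudes and
# instrumental magnitudes measured from the image (hypothetical values).
cat_mag = rng.uniform(12.0, 17.0, 80)
true_zp = 24.3                                   # assumed image zero-point
inst_mag = cat_mag - true_zp + rng.normal(0.0, 0.03, cat_mag.size)

# Zero-point = sigma-clipped median of (catalogue - instrumental) magnitudes;
# clipping guards against mismatches, variables, and blended sources.
diff = cat_mag - inst_mag
for _ in range(3):
    med, sig = np.median(diff), np.std(diff)
    diff = diff[np.abs(diff - med) < 3.0 * sig]
zp = float(np.median(diff))

# Calibrated magnitudes of all sources in the frame.
calibrated = inst_mag + zp
print(f"estimated zero-point = {zp:.3f}")
```

The scatter of the clipped differences also gives a per-image uncertainty on the zero-point, which propagates directly into the calibrated magnitudes.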


Author(s):  
Boris Menin

This paper proposes a new framework for calculating the discrepancy between a model and the observed technological process or physical phenomenon it describes. It offers powerful tools for all measurement methods applied in technology, engineering and experimental physics. Since the studies that validate and verify models of a phenomenon remain complex, they need to be combined into one overall measure. Existing methods, used in almost all literature to date, implicitly suggest that supercomputers and the latest mathematical-statistical methods allow accuracy very close to the bound set by the Heisenberg uncertainty principle. To compare methodologies for improving models, we propose a new metric called comparative uncertainty. This allows us to prove that there is a limit to the achievable discrepancy between the model and the object under study.

