Bayesian inference for radio observations

2015 · Vol. 450 (2) · pp. 1308-1319
Author(s): Michelle Lochner, Iniyan Natarajan, Jonathan T. L. Zwart, Oleg Smirnov, Bruce A. Bassett, ...

2015 · Vol. 12 · pp. 73-85
Author(s): S. J. Perkins, P. C. Marais, J. T. L. Zwart, I. Natarajan, C. Tasse, ...

2014 · Vol. 10 (S306) · pp. 185-188
Author(s): Michelle Lochner, Bruce Bassett, Martin Kunz, Iniyan Natarajan, Nadeem Oozeer, ...

Abstract

Radio interferometers suffer from missing information in their data due to the gaps between antennas. This produces artifacts in the resulting images, such as bright rings around sources. Multiple deconvolution algorithms have been proposed to mitigate this problem and produce cleaner radio images. However, these algorithms cannot correctly estimate uncertainties in derived scientific parameters, nor do they always include the effects of instrumental errors. We propose an alternative technique called Bayesian Inference for Radio Observations (BIRO), which uses a Bayesian statistical framework to determine the scientific parameters and instrumental errors simultaneously and directly from the raw data, without making an image. We demonstrate BIRO on a simple simulation of Westerbork Synthesis Radio Telescope data that includes pointing errors and beam parameters as instrumental effects.
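The core BIRO idea, fitting source and instrumental parameters directly to the visibilities via a posterior rather than deconvolving an image, can be sketched very loosely as follows. The toy model below (a single point source with an unknown flux and a multiplicative gain error, fitted with a minimal Metropolis-Hastings sampler) is an illustrative assumption, not the paper's actual model, instrument simulation, or sampler.

```python
import numpy as np

# Illustrative sketch only: a point source with unknown flux and a
# multiplicative gain error, fitted directly to noisy complex visibilities.
# All model choices here are assumptions for demonstration, not BIRO's.

rng = np.random.default_rng(0)

# Toy (u, v) coverage: gaps between antennas mean sparse Fourier sampling.
uv = rng.uniform(-100.0, 100.0, size=(50, 2))

def model_vis(flux, gain, l0=0.01, m0=0.0):
    """Visibilities of a point source at (l0, m0), scaled by a gain term."""
    phase = -2j * np.pi * (uv[:, 0] * l0 + uv[:, 1] * m0)
    return gain * flux * np.exp(phase)

# Simulated "raw data": true flux 1.5, true gain 0.9, plus Gaussian noise.
sigma = 0.05
noise = sigma * (rng.standard_normal(50) + 1j * rng.standard_normal(50))
data = model_vis(1.5, 0.9) + noise

def log_post(theta):
    """Gaussian log-likelihood with flat priors on positive flux and gain."""
    flux, gain = theta
    if flux <= 0 or gain <= 0:
        return -np.inf
    resid = data - model_vis(flux, gain)
    return -np.sum(np.abs(resid) ** 2) / (2 * sigma**2)

# Minimal Metropolis-Hastings: sample flux and gain jointly, so the
# posterior captures their degeneracy (only the product is well constrained).
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
chain = np.array(samples[5000:])  # discard burn-in

flux_gain = chain[:, 0] * chain[:, 1]
print("posterior mean of flux*gain:", np.mean(flux_gain))  # near 1.5 * 0.9
```

The point of sampling jointly is that flux and gain are individually degenerate here (only their product is pinned down by the data), so the posterior reports an honest joint uncertainty that a deconvolve-then-measure pipeline would miss.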


Author(s): Poonam Chandra, A. J. Nayana, Claes-Ingvar Björnsson, Peter Lundqvist, Alak K. Ray

2015
Author(s): Qing Dou, Ashish Vaswani, Kevin Knight, Chris Dyer

2018
Author(s): Olmo Van den Akker, Linda Dominguez Alvarez, Marjan Bakker, Jelte M. Wicherts, Marcel A. L. M. van Assen

We studied how academics assess the results of a set of four experiments that all test a given theory. We found that participants' belief in the theory increases with the number of significant results and that direct replications were considered more important than conceptual replications. We found no difference between authors and reviewers in their propensity to submit or to recommend publishing sets of results, but we did find that authors are generally more likely to desire an additional experiment. In a preregistered secondary analysis of individual participant data, we examined the heuristics academics use to assess the results of four experiments. Only 6 of the 312 participants we analyzed (1.9%) used the normative method of Bayesian inference, whereas the majority used vote-counting approaches that tend to undervalue the evidence for the underlying theory when two or more results are statistically significant.
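As a hedged illustration of the normative benchmark the abstract refers to, the sketch below contrasts Bayesian inference with majority vote counting for k significant results out of four experiments. The statistical power, false-positive rate, and 50/50 prior are assumed values for illustration, not figures from the study.

```python
from math import comb

# Assumed values for illustration, not taken from the study.
ALPHA = 0.05   # chance of a significant result if the theory is false
POWER = 0.80   # chance of a significant result if the theory is true

def binom_pmf(k, n, p):
    """Probability of exactly k significant results out of n experiments."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior_true(k, n=4, prior=0.5):
    """Bayesian update: P(theory true | k of n experiments significant)."""
    like_true = binom_pmf(k, n, POWER)
    like_false = binom_pmf(k, n, ALPHA)
    num = like_true * prior
    return num / (num + like_false * (1 - prior))

def vote_count(k, n=4):
    """Naive heuristic: believe the theory iff a majority is significant."""
    return k > n / 2

for k in range(5):
    print(k, round(posterior_true(k), 3), vote_count(k))
```

Under these assumptions, two significant results out of four already push the posterior above 0.9, yet the majority-vote heuristic still withholds belief, which is exactly the undervaluation pattern the abstract describes.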

