Extended fast action minimization method: application to SDSS-DR12 combined sample

2021 ◽  
Vol 503 (1) ◽  
pp. 540-556
Author(s):  
E Sarpa ◽  
A Veropalumbo ◽  
C Schimd ◽  
E Branchini ◽  
S Matarrese

ABSTRACT We present the first application of the extended Fast Action Minimization method (eFAM) to a real data set, the SDSS-DR12 Combined Sample, to reconstruct galaxy orbits back in time, recover the real-space two-point correlation function (2PCF), and enhance the baryon acoustic oscillation (BAO) peak. For this purpose, we introduce a new implementation of eFAM that accounts for selection effects, the survey footprint, and galaxy bias. We use the reconstructed BAO peak to measure the angular diameter distance, $D_\mathrm{A}(z)r^\mathrm{fid}_\mathrm{s}/r_\mathrm{s}$, and the Hubble parameter, $H(z)r_\mathrm{s}/r^\mathrm{fid}_\mathrm{s}$, normalized to the sound horizon scale for a fiducial cosmology, $r^\mathrm{fid}_\mathrm{s}$, at the mean redshift of the sample, z = 0.38, obtaining $D_\mathrm{A}(z=0.38)r^\mathrm{fid}_\mathrm{s}/r_\mathrm{s}=1090\pm 29$ Mpc and $H(z=0.38)r_\mathrm{s}/r^\mathrm{fid}_\mathrm{s}=83\pm 3$ km s−1 Mpc−1, in agreement with previous measurements on the same data set. Validation tests, performed using 400 publicly available SDSS-DR12 mock catalogues, reveal that eFAM performs well in reconstructing the 2PCF down to separations of ∼25 h−1 Mpc, i.e. well into the non-linear regime. Moreover, eFAM successfully removes the anisotropies due to redshift-space distortions (RSD) at all redshifts, including that of the survey, allowing us to decrease the number of free parameters in the model and to fit the full shape of the back-in-time reconstructed 2PCF well beyond the BAO peak. By recovering the real-space 2PCF, eFAM improves the precision of the estimates of the fitting parameters. Compared with the no-reconstruction case, eFAM reduces the uncertainty on the Alcock-Paczynski distortion parameters α⊥ and α∥ by about 40 per cent, and that on the non-linear damping scale Σ∥ by about 70 per cent. These results show that eFAM can be successfully applied to existing galaxy redshift catalogues and should be considered as a reconstruction tool for next-generation surveys, as an alternative to popular methods based on the Zel'dovich approximation.
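For readers unfamiliar with the statistic being reconstructed here, the sketch below shows the standard Landy-Szalay estimator of the 2PCF on mock comoving positions. It is a generic Python illustration, not the authors' eFAM pipeline; the catalogue sizes, box, and binning are made-up values.

```python
# Minimal Landy-Szalay 2PCF sketch: xi(s) = (DD - 2DR + RR) / RR,
# with pair counts normalized by the number of (ordered) pairs.
import numpy as np
from scipy.spatial import cKDTree

def two_point_cf(data, randoms, edges):
    nd, nr = len(data), len(randoms)
    td, tr = cKDTree(data), cKDTree(randoms)
    # cumulative pair counts within each radius, differenced into bins
    dd = np.diff(td.count_neighbors(td, edges)) / (nd * (nd - 1))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (nr * (nr - 1))
    dr = np.diff(td.count_neighbors(tr, edges)) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(42)
data = rng.uniform(0, 500, size=(2000, 3))      # mock galaxy positions [Mpc/h]
randoms = rng.uniform(0, 500, size=(20000, 3))  # unclustered random catalogue
edges = np.linspace(10, 150, 15)                # separation bin edges [Mpc/h]
xi = two_point_cf(data, randoms, edges)
```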

2020 ◽  
Vol 15 (3) ◽  
pp. 2387-2393
Author(s):  
Mojtaba Ganjali ◽  
Taban Baghfalaki ◽  
Adeniyi Francis Fagbamigbe

Growth curve data consist of repeated measurements of a continuous growth process over time in a population of individuals, whether of human, animal, plant, microbial or bacterial origin. A classical approach for analysing such data is to use non-linear mixed-effects models under a normality assumption for the responses. Sometimes, however, the underlying population from which the sample is drawn is non-normal or contains homogeneous sub-populations, so detecting the true properties of the population is an important scientific question. In this paper, a sensitivity analysis of the results of non-linear mixed models to different parametric and non-parametric distributions for the random effects is proposed, in order to highlight possible heterogeneity in the population. A Bayesian MCMC procedure is developed for parameter estimation, and inference is performed in a hierarchical Bayesian framework. The methodology is illustrated using a real data set from a study of the influence of menarche on changes in body fat accretion.
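As a toy illustration of the hierarchical Bayesian machinery (not the authors' model), the sketch below fits a logistic growth curve with a subject-specific random asymptote by a bare-bones random-walk Metropolis sampler; all priors, proposal scales, and data shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_time = 15, 10
t = np.linspace(0, 10, n_time)
b_true = rng.normal(30.0, 3.0, n_subj)                     # subject asymptotes
curve = lambda b: b[:, None] / (1.0 + np.exp(-(t - 5.0)))  # logistic in time
y = curve(b_true) + rng.normal(0.0, 1.0, (n_subj, n_time))

def log_post(th):
    # theta = (population mean, log sd of random effects, log residual sd, b_1..b_n)
    mu, log_sb, log_se, b = th[0], th[1], th[2], th[3:]
    sb, se = np.exp(log_sb), np.exp(log_se)
    ll = -0.5 * np.sum((y - curve(b)) ** 2) / se**2 - y.size * np.log(se)
    lp_b = -0.5 * np.sum((b - mu) ** 2) / sb**2 - n_subj * np.log(sb)
    lp_hyper = -0.5 * (mu - 25.0) ** 2 / 100.0 - 0.5 * (log_sb**2 + log_se**2)
    return ll + lp_b + lp_hyper

theta = np.concatenate([[25.0, 1.0, 0.0], y.max(axis=1)])  # crude start
samples = []
for step in range(20000):
    prop = theta + rng.normal(0, 0.02, theta.size)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    if step % 20 == 0:
        samples.append(theta.copy())
post = np.array(samples)
print("posterior mean of population asymptote:", post[:, 0].mean())
```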


Author(s):  
Zhongxu Zhai ◽  
Chia-Hsun Chuang ◽  
Yun Wang ◽  
Andrew Benson ◽  
Gustavo Yepes

Abstract We present a realistic 2000 deg² Hα galaxy mock catalog with 1 < z < 2 for the Nancy Grace Roman Space Telescope galaxy redshift survey, the High Latitude Spectroscopic Survey (HLSS), created using Galacticus, a semi-analytical galaxy formation model, and high-resolution cosmological N-body simulations. Galaxy clustering can probe dark energy and test gravity via baryon acoustic oscillation (BAO) and redshift-space distortion (RSD) measurements. Taking our realistic mock as the simulated Roman HLSS data, with a covariance matrix computed from a large set of approximate mocks created with EZmock, we investigate the expected precision and accuracy of the BAO and RSD measurements, applying the same analysis techniques used on real data. We find that the Roman Hα galaxy survey alone can measure the angular diameter distance with 2% uncertainty, the Hubble parameter with 3–6% uncertainty, and the linear growth parameter with 7% uncertainty in each of four redshift bins. Our realistic forecast illustrates the power of the Roman galaxy survey in probing the nature of dark energy and testing gravity.
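The forecast leans on a covariance matrix estimated from many approximate mocks. A minimal sketch of that standard recipe follows, with a made-up mock count and a stand-in data vector; the Hartlap factor that debiases the inverted sample covariance is the usual convention, not a detail taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mocks, n_bins = 1000, 40
mocks = rng.normal(size=(n_mocks, n_bins))    # stand-in for xi(s) per mock

mean = mocks.mean(axis=0)
cov = (mocks - mean).T @ (mocks - mean) / (n_mocks - 1)   # sample covariance

# Hartlap et al. (2007) debiasing of the inverse covariance estimate
hartlap = (n_mocks - n_bins - 2) / (n_mocks - 1)
inv_cov = hartlap * np.linalg.inv(cov)

# chi^2 of a "data" vector (here: the first mock) against the mean model
resid = mocks[0] - mean
chi2 = resid @ inv_cov @ resid
```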


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model, called the exponentiated half-logistic Lomax distribution, is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability-weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by maximum likelihood, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
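A hedged sketch of the maximum-likelihood step is given below. It assumes the common exponentiated half-logistic-G construction, F(x) = [(1 − S(x))/(1 + S(x))]^λ with Lomax survival S(x) = (1 + x/β)^(−α), which gives the density f(x) = 2λ g(x)(1 − S)^{λ−1}/(1 + S)^{λ+1}; the paper's exact parametrization may differ, and the data are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    lam, alpha, beta = np.exp(params)             # log-parametrized: keep > 0
    s = (1.0 + x / beta) ** (-alpha)              # Lomax survival function
    g = (alpha / beta) * (1.0 + x / beta) ** (-(alpha + 1.0))  # Lomax pdf
    logf = (np.log(2 * lam) + np.log(g)
            + (lam - 1) * np.log1p(-s) - (lam + 1) * np.log1p(s))
    return -np.sum(logf)

rng = np.random.default_rng(7)
x = rng.pareto(3.0, 500) * 2.0                    # stand-in positive data
fit = minimize(neg_loglik, x0=np.zeros(3), args=(x,), method="Nelder-Mead")
lam_hat, alpha_hat, beta_hat = np.exp(fit.x)
print(lam_hat, alpha_hat, beta_hat)
```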


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five different methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others; when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS and PC estimators. Finally, a real data set is analysed to illustrate the results.
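A compact sketch of the simulation logic follows, for the ML estimator only: it plugs ML estimates of the generalized inverted exponential parameters, F(x) = 1 − (1 − e^{−λ/x})^α, into the density at a fixed point and tracks the MSE over replications. The UMVU, LS, WLS and PC competitors, sample sizes, and true parameters are omitted or made up.

```python
import numpy as np
from scipy.optimize import minimize

a_true, lam_true, x0 = 2.0, 1.5, 1.0

def pdf(x, a, lam):
    return (a * lam / x**2) * np.exp(-lam / x) * (1 - np.exp(-lam / x)) ** (a - 1)

def mle(x):
    nll = lambda p: -np.sum(np.log(pdf(x, np.exp(p[0]), np.exp(p[1]))))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(3)
true_f = pdf(x0, a_true, lam_true)
errs = []
for _ in range(200):
    u = rng.uniform(size=50)
    x = -lam_true / np.log(1 - (1 - u) ** (1 / a_true))   # inverse-CDF sampling
    a_hat, lam_hat = mle(x)
    errs.append((pdf(x0, a_hat, lam_hat) - true_f) ** 2)
print("MSE of ML density estimate at x0:", np.mean(errs))
```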


2020 ◽  
Vol 16 (8) ◽  
pp. 1088-1105
Author(s):  
Nafiseh Vahedi ◽  
Majid Mohammadhosseini ◽  
Mehdi Nekoei

Background: Poly(ADP-ribose) polymerases (PARPs) form a nuclear enzyme superfamily present in eukaryotes. Methods: In the present report, several efficient linear and non-linear methods, including multiple linear regression (MLR), support vector machines (SVM) and artificial neural networks (ANN), were used to develop and establish quantitative structure-activity relationship (QSAR) models capable of predicting the pEC50 values of tetrahydropyridopyridazinone derivatives as effective PARP inhibitors. Principal component analysis (PCA) was used for a rational division of the whole data set into training and test sets. A genetic algorithm (GA) variable-selection method was employed to select, from the large pool of calculated descriptors, the optimal subset of descriptors with the most significant contributions to the overall inhibitory activity. Results: The accuracy and predictability of the proposed models were further confirmed using cross-validation, validation through an external test set, and Y-randomization (chance correlation) approaches. Moreover, an exhaustive statistical comparison was performed on the outputs of the proposed models. The results revealed that the non-linear modeling approaches, SVM and ANN, provide much better predictive capability. Conclusion: Among the constructed models, and in terms of the root mean square error of prediction (RMSEP), the cross-validation coefficients (Q²LOO and Q²LGO), and the R² and F-statistic values for the training set, the predictive power of the GA-SVM approach was better. However, compared with MLR and SVM, the statistical parameters for the test set were better for the GA-ANN model.
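A schematic sketch of a GA-SVM pipeline of this kind is shown below: a binary genome marks which descriptors enter a support-vector regression, and cross-validated R² serves as the fitness. The population size, rates, and toy data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 30))                    # 30 candidate descriptors
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.3, 120)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf                            # empty descriptor set is invalid
    return cross_val_score(SVR(C=10.0), X[:, mask.astype(bool)], y,
                           cv=5, scoring="r2").mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))   # random initial genomes
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 10][c:]]
                     for i, c in enumerate(cut)])  # one-point crossover
    flips = rng.random(kids.shape) < 0.02          # mutation
    pop = np.vstack([parents, np.where(flips, 1 - kids, kids)])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected descriptors:", np.flatnonzero(best))
```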


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of histone modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. HM combinations have been used in various studies to divide the genome into different regions, classified as chromatin states; mostly, Hidden Markov Model (HMM)-based techniques have been utilized for this purpose, using data from Next Generation Sequencing (NGS) platforms. Chromatin states based on histone-modification combinatorics are annotated by mapping them to functional regions of the genome, but the number of states predicted by the HMM tools has so far been justified only biologically. Objective: The present study aimed at providing a computational scheme to identify the number of underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on hierarchical clustering and a visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results correlate well with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for a particular data set but also justifies the results biologically, thereby connecting the computational and biological aspects.
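A hedged sketch of the hierarchical-clustering idea (not the authors' HCVS code): genomic bins described by binary histone-mark vectors are clustered, and candidate state numbers are scanned by cutting the tree at different heights. The bin count, mark count, and distance metric are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(9)
bins = rng.integers(0, 2, size=(500, 9))     # 500 genomic bins x 9 chromatin marks

d = pdist(bins, metric="jaccard")            # distance between binary mark patterns
tree = linkage(d, method="average")          # agglomerative clustering
for k in range(2, 11):                       # scan candidate numbers of states
    states = fcluster(tree, t=k, criterion="maxclust")
    sizes = np.bincount(states)[1:]          # how many bins land in each state
    print(f"k={k}: cluster sizes {sizes}")
```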


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate detection is based on the assumption of an echo statistical model; against a background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Therefore, computer vision techniques are widely discussed as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for defense radar, with its low resolution, the detection performance is not satisfactory. To this end, we herein propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high detection accuracy; (2) the Faster R-CNN is modified for the sparsity and small target sizes that characterize the data set; and (3) soft non-maximum suppression is exploited to eliminate possibly overlapping detection boxes. Furthermore, detailed comparative experiments based on a real coastal defense radar data set are performed. The mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
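Step (3) is well defined independently of the radar pipeline, so a minimal numpy sketch of Gaussian soft non-maximum suppression is given below; the box format, sigma, and threshold are the usual conventions, not values taken from the paper.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay, rather than delete, overlapping boxes."""
    scores = scores.astype(float).copy()
    idx, keep = list(range(len(boxes))), []
    while idx:
        best = max(idx, key=lambda i: scores[i])   # highest remaining score
        keep.append(best)
        idx.remove(best)
        for i in idx:
            scores[i] *= np.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        idx = [i for i in idx if scores[i] >= score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # the heavily overlapped box is decayed, not dropped
```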


Author(s):  
Naonori S Sugiyama ◽  
Shun Saito ◽  
Florian Beutler ◽  
Hee-Jong Seo

Abstract We establish a practical method for the joint analysis of the anisotropic galaxy two- and three-point correlation functions (2PCF and 3PCF) on the basis of the decomposition formalism of the 3PCF using tri-polar spherical harmonics. We perform such an analysis with MultiDark Patchy mock catalogues to demonstrate and understand the benefit of the anisotropic 3PCF. We focus on scales above 80 h−1 Mpc, and use information from the shape and the baryon acoustic oscillation (BAO) signals of the 2PCF and 3PCF. We also apply density-field reconstruction to increase the signal-to-noise ratio of the BAO in the 2PCF measurement, but not in the 3PCF measurement. In particular, we study in detail the constraints on the angular diameter distance and the Hubble parameter. We build a model of the bispectrum, or 3PCF, that includes the non-linear damping of the BAO signal in redshift space, and we carefully account for various uncertainties in our analysis, including theoretical models of the 3PCF, window function corrections, biases of the estimated parameters from their fiducial values, the number of mock realizations used to estimate the covariance matrix, and bin size. The joint analysis of the 2PCF and 3PCF monopole and quadrupole components shows a $30\%$ and $20\%$ improvement in Hubble parameter constraints before and after reconstruction of the 2PCF measurements, respectively, compared to the 2PCF analysis alone. This study clearly shows that the anisotropic 3PCF increases the cosmological information extracted from galaxy surveys and encourages further development of 3PCF modeling on scales smaller than those considered here.
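The "monopole and quadrupole components" are the Legendre multipoles ξ_ℓ(s) = (2ℓ+1)/2 ∫ ξ(s, μ) P_ℓ(μ) dμ. The short sketch below evaluates them on a toy Kaiser-like ξ(s, μ); the grid, power-law amplitude, and distortion parameter are illustrative, not from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

s = np.linspace(20, 150, 66)                  # separations [Mpc/h]
mu = np.linspace(-1, 1, 201)                  # cosine of the line-of-sight angle
xi_iso = (s / 100.0) ** -1.8                  # toy isotropic correlation function
beta = 0.4                                    # toy RSD distortion parameter
xi_smu = xi_iso[:, None] * (1 + beta * mu[None, :] ** 2)

def multipole(xi_smu, mu, ell):
    P_ell = Legendre.basis(ell)(mu)           # Legendre polynomial P_l(mu)
    return (2 * ell + 1) / 2.0 * np.trapz(xi_smu * P_ell, mu, axis=1)

xi0, xi2 = multipole(xi_smu, mu, 0), multipole(xi_smu, mu, 2)
```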


2021 ◽  
Vol 1978 (1) ◽  
pp. 012047
Author(s):  
Xiaona Sheng ◽  
Yuqiu Ma ◽  
Jiabin Zhou ◽  
Jingjing Zhou

2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA), developed by Fisher, is an important method in data analysis. There are situations in which the data are imprecise. In order to analyze such data, the aim of this paper is to introduce, for the first time, an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of the classical ANOVA and of the one-way IFANOVA, for the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over a year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying both the classical two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained by applying ANOVA and 2-D IFANOVA to the real data set is also presented.
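For reference, the classical two-factor ANOVA without replication that 2-D IFANOVA extends can be written in a few lines: rows and columns of a single observation table (here, a made-up season-by-price table, not the cinema data) are tested via F ratios against the residual mean square.

```python
import numpy as np
from scipy import stats

y = np.array([[12.0, 15.0, 14.0],
              [18.0, 21.0, 19.0],
              [10.0, 12.0, 11.0],
              [16.0, 20.0, 17.0]])                 # r rows (factor A) x c cols (factor B)
r, c = y.shape
grand = y.mean()
ss_a = c * np.sum((y.mean(axis=1) - grand) ** 2)   # factor A (rows) sum of squares
ss_b = r * np.sum((y.mean(axis=0) - grand) ** 2)   # factor B (columns) sum of squares
ss_tot = np.sum((y - grand) ** 2)
ss_err = ss_tot - ss_a - ss_b                      # residual (no replication)
df_a, df_b, df_err = r - 1, c - 1, (r - 1) * (c - 1)
F_a = (ss_a / df_a) / (ss_err / df_err)
F_b = (ss_b / df_b) / (ss_err / df_err)
print("p(rows):", 1 - stats.f.cdf(F_a, df_a, df_err))
print("p(cols):", 1 - stats.f.cdf(F_b, df_b, df_err))
```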

