The observed velocity distribution of young pulsars – II. Analysis of complete PSRπ

2020 ◽  
Vol 494 (3) ◽  
pp. 3663-3674 ◽  
Author(s):  
Andrei P Igoshev

ABSTRACT Understanding the natal kicks, or birth velocities, of neutron stars is essential for understanding the evolution of massive binaries and double neutron star formation. We use maximum likelihood methods, as published in Verbunt et al., to analyse a new large data set of parallaxes and proper motions measured by Deller et al. This sample is roughly three times larger than the number of measurements previously available. For both the complete sample and its younger part (spin-down ages τ < 3 Myr), we find that a bimodal Maxwellian distribution describes the measured parallaxes and proper motions better than a single Maxwellian, with probabilities of 99.3 and 95.0 per cent, respectively. The bimodal Maxwellian distribution has three parameters: the fraction of low-velocity pulsars and the distribution parameters σ1 and σ2 of the low- and high-velocity modes. For the complete sample, these parameters are $42_{-15}^{+17}$ per cent, $\sigma _1=128_{-18}^{+22}$ km s−1, and σ2 = 298 ± 28 km s−1. For younger pulsars, which are assumed to represent the natal kick, they are $20_{-10}^{+11}$ per cent, $\sigma _1=56_{-15}^{+25}$ km s−1, and σ2 = 336 ± 45 km s−1. In the young population, 5 ± 3 per cent of pulsars have velocities less than 60 km s−1. We perform multiple Monte Carlo tests of the method, taking realistic observational selection into account, and find that it reliably estimates all parameters of the natal kick distribution. The results of the velocity analysis are only weakly sensitive to the exact scale lengths of the Galactic pulsar distribution.
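The bimodal Maxwellian described in the abstract has a simple closed form. The sketch below writes down the mixture density and its log-likelihood, plugging in the young-pulsar parameters quoted above; the sample speeds are illustrative values, not the PSRπ measurements:

```python
import math

def maxwellian_pdf(v, sigma):
    # Maxwell speed distribution with scale parameter sigma (km/s)
    return math.sqrt(2.0 / math.pi) * v**2 / sigma**3 * math.exp(-v**2 / (2 * sigma**2))

def bimodal_pdf(v, w, sigma1, sigma2):
    # Two-component mixture: w is the fraction of low-velocity pulsars
    return w * maxwellian_pdf(v, sigma1) + (1 - w) * maxwellian_pdf(v, sigma2)

def log_likelihood(speeds, w, sigma1, sigma2):
    # Log-likelihood of a set of speeds under the bimodal model
    return sum(math.log(bimodal_pdf(v, w, sigma1, sigma2)) for v in speeds)

# Young-pulsar parameters from the abstract: w = 0.20, sigma1 = 56, sigma2 = 336 km/s
speeds = [45.0, 120.0, 310.0, 550.0, 820.0]  # illustrative speeds only
print(log_likelihood(speeds, 0.20, 56.0, 336.0))
```

In the paper's analysis the likelihood is evaluated against parallaxes and proper motions rather than true speeds; this sketch only shows the velocity-space model being compared against a single Maxwellian.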

2019 ◽  
Vol 8 (2S11) ◽  
pp. 3523-3526

This paper describes an efficient algorithm for classification in large data sets. While many classification algorithms exist, most do not scale well to large or heterogeneous data sets. Various extreme learning machine (ELM) algorithms for large data sets are available in the literature. However, the existing algorithms use a fixed activation function, which can be a deficiency when working with large data. In this paper, we propose a novel ELM that employs a sigmoid activation function. Experimental evaluations demonstrate that our ELM-S algorithm performs better than ELM, SVM, and other state-of-the-art algorithms on large data sets.
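The core ELM idea the paper builds on can be sketched briefly: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form by least squares. This is a generic sigmoid ELM, not the paper's ELM-S algorithm, and the toy data are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SigmoidELM:
    """Minimal single-hidden-layer ELM with sigmoid activation (a sketch only)."""
    def __init__(self, n_hidden, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Hidden weights and biases are random and stay fixed
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = sigmoid(X @ self.W + self.b)
        # Output weights via the Moore-Penrose pseudoinverse
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return sigmoid(X @ self.W + self.b) @ self.beta

# Toy binary problem: label is 1 when the first feature is positive
X = np.array([[0.5, 1.0], [1.0, 0.2], [-1.0, 0.5],
              [-0.8, -0.3], [0.9, 0.9], [-0.6, 0.1]])
y = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
model = SigmoidELM(n_hidden=20).fit(X, y)
```

Because training reduces to one linear solve, ELMs scale to large data sets far more cheaply than iteratively trained networks, which is the property the paper exploits.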


Author(s):  
Brian Hoeschen ◽  
Darcy Bullock ◽  
Mark Schlappi

Historically, stopped delay was used to characterize the operation of intersection movements because it was relatively easy to measure. During the past decade, the traffic engineering community has moved away from using stopped delay and now uses control delay. That measurement is more precise but quite difficult to extract from large data sets if strict definitions are used to derive the data. This paper evaluates two procedures for estimating control delay. The first is based on a historical approximation that control delay is 30% larger than stopped delay. The second is new and based on segment delay. The procedures are applied to a diverse data set collected in Phoenix, Arizona, and compared with control delay calculated by using the formal definition. The new approximation was observed to be better than the historical stopped delay procedure; it provided an accurate prediction of control delay. Because it is an approximation, this methodology would be most appropriately applied to large data sets collected from travel time studies for ranking and prioritizing intersections for further analysis.
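The historical approximation evaluated first in the paper is a one-line rule; a sketch (treating delay in seconds per vehicle, which is an assumption, and not reproducing the paper's new segment-delay procedure):

```python
def control_delay_from_stopped(stopped_delay_s):
    """Historical approximation: control delay is ~30% larger than stopped delay."""
    return 1.3 * stopped_delay_s

# e.g. a movement with 20 s of measured stopped delay per vehicle
print(control_delay_from_stopped(20.0))
```

The paper finds the segment-delay-based estimate more accurate than this 30% rule, but the rule remains a quick screening tool for ranking intersections.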


Author(s):  
V. Jinubala ◽  
P. Jeyakumar

Data mining is an emerging research field in the analysis of agricultural data. One of the most important problems in extracting knowledge from agricultural data is missing values among the attributes of the selected data set. Such deficiencies must be cleaned during preprocessing in order to obtain a functional data set. The main objective of this paper is to analyse the effectiveness of various imputation methods in producing a complete data set that is more useful for applying data mining techniques, and to present a comparative analysis of these methods for handling missing values. The pest data set of rice crop, collected throughout Maharashtra state under the Crop Pest Surveillance and Advisory Project (CROPSAP) during 2009-2013, was used for the analysis. Four methodologies, deletion of rows, mean and median imputation, linear regression, and predictive mean matching, were analysed for imputation of missing values. The comparative analysis shows that predictive mean matching performed better than the other methods and is effective for imputation of missing values in a large data set.
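Predictive mean matching, the method found most effective here, can be sketched for a single predictor: regress the target on the observed cases, predict for every row, and impute each missing value with the observed value of the donor whose prediction is closest. The toy series below is invented for illustration, not the CROPSAP data:

```python
def simple_ols(xs, ys):
    # Slope and intercept of y ~ x by ordinary least squares
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def pmm_impute(x, y):
    """Predictive mean matching with one predictor: each missing y borrows
    the observed y of the donor with the closest regression prediction."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    slope, intercept = simple_ols([p[0] for p in obs], [p[1] for p in obs])
    filled = []
    for xi, yi in zip(x, y):
        if yi is not None:
            filled.append(yi)
        else:
            pred = slope * xi + intercept
            donor = min(obs, key=lambda p: abs(slope * p[0] + intercept - pred))
            filled.append(donor[1])  # impute the donor's observed value
    return filled

# Toy pest-count series with one gap (None marks the missing value)
x = [1, 2, 3, 5, 8, 9]
y = [2.0, 4.1, 6.0, 9.9, None, 18.1]
print(pmm_impute(x, y))
```

Because imputed values are always real observed values, PMM avoids the implausible out-of-range imputations that plain regression imputation can produce, which is one reason it tends to do well on skewed field data.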


2020 ◽  
Vol 8 (10) ◽  
pp. 743
Author(s):  
Björn Almström ◽  
Magnus Larson

Primary ship waves generated by conventional marine vessels were investigated in the Furusund fairway located in the Stockholm archipelago, Sweden. Continuous water level measurements at two locations in the fairway were analyzed, and in total 466 ship-generated drawdown events were extracted during two months of measurements. The collected data were used to evaluate 13 existing predictive equations for drawdown height or squat. However, none of the equations were able to satisfactorily predict the drawdown height. Instead, a new equation for drawdown height and period was derived based on simplified descriptions of the main physical processes together with the field measurements, employing multiple regression analysis to derive the coefficients in the equation. The proposed equation for drawdown height performed better than the existing equations, with an R2 value of 0.65, whereas the equation for the drawdown period achieved R2 = 0.64. The main conclusion from this study is that an empirical equation can satisfactorily predict primary ship waves for a large data set.
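The multiple-regression step used to derive the coefficients can be illustrated on synthetic data. The predictors, true coefficients, and log-linear form below are hypothetical stand-ins, not the paper's derived equation:

```python
import numpy as np

# Hypothetical predictors for drawdown height: log ship speed and log blockage
# ratio. The actual variables and functional form in the paper differ.
rng = np.random.default_rng(1)
log_speed = rng.uniform(1.5, 3.0, size=50)
log_blockage = rng.uniform(-4.0, -2.0, size=50)
log_drawdown = 2.0 * log_speed + 0.5 * log_blockage + rng.normal(0, 0.1, size=50)

# Multiple regression: solve for the coefficients by least squares
A = np.column_stack([log_speed, log_blockage, np.ones_like(log_speed)])
coef, *_ = np.linalg.lstsq(A, log_drawdown, rcond=None)

# Coefficient of determination R^2, the goodness-of-fit measure reported above
pred = A @ coef
ss_res = np.sum((log_drawdown - pred) ** 2)
ss_tot = np.sum((log_drawdown - log_drawdown.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

Fitting in log space turns a power-law relationship into a linear regression, a common choice when deriving empirical hydraulic equations of this kind.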


Author(s):  
Doris Aschenbrenner ◽  
Nicolas Maltry ◽  
Klaus Schilling ◽  
Jouke Verlinden

This work investigates which visualization method best supports remote teleanalysis of industrial plants in terms of comprehension, usability, and situation awareness. The application goal is the remote optimization of an industrial plant, and the examined scenario was generated from a large data set of a real production entity. The plant consists of an industrial manipulator, a molding machine, and an assembly system. Prior studies of the same plant, in which remote experts explored a video-based visualization, showed large potential for optimization but indicated a need for greater situation awareness. To test the influence of the visualization method, a user study was carried out in which 60 student participants used six different visualization methods, including various VR and AR implementations. Overall, the AR environment performed significantly better than the VR and video implementations, although the VR implementation surpassed AR regarding situation awareness.


2002 ◽  
Vol 10 (3) ◽  
pp. 244-260 ◽  
Author(s):  
Michael D. Ward ◽  
Kristian Skrede Gleditsch

This article demonstrates how spatially dependent data with a categorical response variable can be addressed in a statistical model. We introduce the idea of an autologistic model, in which the response for one observation depends on the value of the response among adjacent observations. The autologistic model has a likelihood function that is mathematically intractable, since the observations are conditionally dependent upon one another. We review alternative techniques for estimating this model, with special emphasis on recent advances using Markov chain Monte Carlo (MCMC) techniques. We evaluate a highly simplified autologistic model of conflict in which the likelihood of war involvement for each nation is conditional on the war involvement of proximate states. We estimate this autologistic model for a single year (1988) via maximum pseudolikelihood and MCMC maximum likelihood methods. Our results indicate that the autologistic model fits the data much better than an unconditional model and that the MCMC estimates generally dominate the pseudolikelihood estimates. The autologistic model generates predicted probabilities greater than 0.5 and has relatively good predictive abilities in an out-of-sample forecast for the subsequent decade (1989 to 1998), correctly identifying not only ongoing conflicts but also new ones.
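The pseudolikelihood approach mentioned above sidesteps the intractable joint likelihood by multiplying the conditional logistic probabilities of each observation given its neighbours. A toy sketch on a four-node chain (the covariate values and adjacency structure are invented; the article's model uses nation-level data and MCMC as well as pseudolikelihood):

```python
import math

def cond_prob(y_i, x_i, neighbor_sum, beta0, beta1, eta):
    """P(y_i | neighbours): logistic in a covariate x_i and the number of
    adjacent units with y = 1 (the autologistic dependence term eta)."""
    z = beta0 + beta1 * x_i + eta * neighbor_sum
    p1 = 1.0 / (1.0 + math.exp(-z))
    return p1 if y_i == 1 else 1.0 - p1

def neg_log_pseudolikelihood(y, x, adj, beta0, beta1, eta):
    # Product of conditional probabilities, on the negative log scale
    total = 0.0
    for i in range(len(y)):
        nsum = sum(y[j] for j in adj[i])
        total -= math.log(cond_prob(y[i], x[i], nsum, beta0, beta1, eta))
    return total

# Toy "war involvement" data on a chain: conflict clusters among neighbours
y = [1, 1, 0, 0]
x = [0.2, 0.1, -0.3, -0.5]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# Crude grid search over the spatial-dependence parameter eta,
# holding the regression coefficients fixed for illustration
best_eta = min((e / 10 for e in range(-20, 21)),
               key=lambda e: neg_log_pseudolikelihood(y, x, adj, 0.0, 1.0, e))
print(best_eta)
```

A positive fitted eta indicates that war involvement among neighbours raises a nation's own probability of involvement, which is the spatial dependence the unconditional model ignores.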


1999 ◽  
Vol 55 (2) ◽  
pp. 464-468 ◽  
Author(s):  
Zhi Chen ◽  
Eric Blanc ◽  
Michael S. Chapman

Real-space targets and molecular-dynamics search protocols have been combined to improve the convergence of macromolecular atomic refinement. This was accomplished by providing a local real-space target function for the molecular-dynamics program X-PLOR. With poor isomorphous replacement experimental phases, molecular dynamics does not improve real-space refinement. However, with high-quality anomalous diffraction phases convergence is improved at the start of refinement, and torsion-angle real-space molecular dynamics performs better than other available least-squares or maximum-likelihood methods in real or reciprocal space. It is shown that the improvements result from an optimization method that can escape local minima and from a reduction of overfitting through the implicit use of phases and through use of a local refinement in which errors in remote parts of the structure cannot be mutually compensating.


2021 ◽  
Vol 4 (3) ◽  
pp. 47-63
Author(s):  
Owhondah P.S. ◽  
Enegesele D. ◽  
Biu O.E. ◽  
Wokoma D.S.A.

The study deals with discriminating between second-order models, with and without interaction, centered on measures of central tendency, using the ordinary least squares (OLS) method to estimate the model parameters. The paper considered two data sets of different sample sizes (small and large). The small sample used the unemployment rate as the response and the inflation rate and exchange rate as predictors from 2007 to 2018, while the large sample was flow-rate data on hydrate formation for a Niger Delta deep offshore field. The R^2, AIC, SBC, and SSE were computed for both data sets to test the adequacy of the models. The results show that all three models are similar for the smaller data set, while for the large data set the second-order model centered on the median, with or without interaction, is the best based on the number of significant parameters. The model selection criteria (R^2, AIC, SBC, and SSE) were found to be equal for the models centered on the median and mode for both the large and small data sets. However, the models centered on the median and mode, with or without interaction, were better than the model centered on the mean for the large data set. Hence, the second-order regression model centered on the median or mode, with or without interaction, is better than the second-order regression model centered on the mean for a large data set, while the models are similar for a smaller data set.
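Centering a second-order model on a location measure is straightforward to set up; the sketch below fits the full quadratic with interaction on synthetic data, with the centering constant passed in as a function. The variables are illustrative, not the unemployment or flow-rate series. Note that in this full-quadratic form the mean- and median-centered designs span the same column space, so their fitted values and SSE coincide; the paper's comparison rests on the number of significant parameters rather than raw fit:

```python
import numpy as np

def centered_quadratic_fit(x1, x2, y, center):
    """OLS fit of y on (x1-c1), (x2-c2), their squares, and their interaction,
    with each predictor centered on `center` (e.g. np.mean or np.median)."""
    c1, c2 = center(x1), center(x2)
    u, v = x1 - c1, x2 - c2
    A = np.column_stack([np.ones_like(u), u, v, u**2, v**2, u * v])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, float(resid @ resid)  # coefficients and SSE

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 40)  # illustrative predictor 1
x2 = rng.uniform(0, 10, 40)  # illustrative predictor 2
y = 1.0 + 0.5 * x1 - 0.2 * x2 + 0.05 * x1 * x2 + rng.normal(0, 0.3, 40)

coef_mean, sse_mean = centered_quadratic_fit(x1, x2, y, np.mean)
coef_med, sse_med = centered_quadratic_fit(x1, x2, y, np.median)
```

Changing the centering constant reparameterizes the model, shifting which individual coefficients look significant without changing the overall fit, which is why criteria such as SSE can agree across centerings while parameter counts differ.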


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2684 ◽  
Author(s):  
Ruijing Gan ◽  
Ni Chen ◽  
Daizheng Huang

This study compares and evaluates the prediction of hepatitis in Guangxi Province, China using back-propagation neural networks optimized by a genetic algorithm (BPNN-GA), generalized regression neural networks (GRNN), and wavelet neural networks (WNN). To compare the forecasting results, the data from 2004 to 2013 were used as the modeling sample and the data from 2014 as the forecasting sample. The results show that when a small hepatitis data set has seasonal fluctuation, BPNN-GA predicts better than the two other methods. The WNN method is suitable for predicting a large hepatitis data set with seasonal fluctuation, while the GRNN method is suitable when the data increase steadily.
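Of the three methods compared, the GRNN is the simplest to sketch: it is essentially a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator). The monthly counts below are invented for illustration, not the Guangxi hepatitis data:

```python
import math

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """One-dimensional GRNN prediction: a Gaussian-kernel weighted average
    of training targets, with smoothing parameter sigma."""
    weights = [math.exp(-((x_query - xi) ** 2) / (2 * sigma ** 2))
               for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

# Toy monthly case counts: the GRNN interpolates smoothly between them
months = [1.0, 2.0, 3.0, 4.0, 5.0]
cases = [10.0, 12.0, 15.0, 13.0, 11.0]
print(grnn_predict(months, cases, 2.5))
```

Because a GRNN has no iterative training, only the choice of sigma, it suits smooth, steadily increasing series, consistent with the study's finding for steadily growing data.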

