bayesian support
Recently Published Documents

TOTAL DOCUMENTS: 44 (FIVE YEARS: 9)
H-INDEX: 9 (FIVE YEARS: 0)

2021 ◽ Vol 135 ◽ pp. 437-447
Author(s): Shijian Xiong, Yaqi Peng, Shengyong Lu, Fanjie Shang, Xiaodong Li, ...

2021 ◽ Vol 10 (1) ◽ pp. 94
Author(s): Ali Abdolahi, Vali Nowzari, Ali Pirzad, Seyed Ehsan Amirhosseini

Introduction: Health companies need investment to develop, but the high risk of their activities makes investment in this field very difficult to attract, and the resulting lack of financial resources leads to the failure of these companies. A model for predicting profits and losses in such companies is therefore both important and practical.

Materials and Method: In this study, a combination of two algorithms, logistic regression and differential analysis, was used to design a profit and loss forecasting model. Data from 20 companies in the health sector were used to evaluate the proposed model: 10 profitable and 10 loss-making companies were selected, and nine independent variables were collected from each company's financial information.

Results: The designed prediction model was applied to these data. The data were divided into two sets, training and test; the model was fitted on the training data and evaluated on the test data, reaching 99.65% sensitivity, 94.75% specificity, and 96.28% accuracy. The proposed model was then compared with the C4.5 decision tree, Bayesian, support vector machine, nearest-neighbor, and multilayer neural network methods, and it produced better output.

Conclusion: This study found that the risk of investment in the health sector can be reduced, since the profit and loss situation of health companies can be predicted with good accuracy. It also found that combining the logistic regression and differential analysis algorithms can increase the accuracy of the prediction model.


Author(s): Emma L. Morgan, Mark K. Johansen

Abstract: Making property inferences for category instances is important and has been studied in two largely separate areas: categorical induction and perceptual categorization. Categorical induction has a corpus of well-established effects using complex, real-world categories; however, the representational basis of these effects is unclear. In contrast, the perceptual categorization paradigm has fostered the assessment of well-specified representation models thanks to its controlled stimuli and categories.

In categorical induction, evaluations of premise typicality effects (stronger attribute generalization from typical category instances than from atypical ones) have tried to control the similarity between instances so as to be distinct from premise-conclusion similarity effects (stronger generalization from greater similarity). However, the extent to which similarity has actually been controlled is unclear for these complex stimuli. Our research embedded analogues of categorical induction effects, notably premise typicality and premise-conclusion similarity, in perceptual categories in an attempt to clarify the category representation underlying feature inference. These experiments controlled similarity between instances using overlap of a small number of constrained features. Participants made inferences for test cases using displayed sets of category instances.

The results showed typicality effects and premise-conclusion similarity effects, but no evidence of premise typicality effects (i.e., no preference for generalizing features from typical over atypical category instances when similarity was controlled for), with significant Bayesian support for the null. Given that typicality effects occurred here and occur widely in the perceptual categorization paradigm, why was premise typicality absent? We discuss possible reasons. For attribute inference, is premise typicality distinct from instance similarity? These initial results suggest not.
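"Bayesian support for the null" is typically quantified with a Bayes factor BF01, the ratio of the evidence for the null over the alternative. One common shortcut, which may well differ from the authors' actual analysis, approximates the Bayes factor from the BIC values of the two fitted models (BF01 ≈ exp((BIC_alt - BIC_null) / 2)). A minimal sketch with illustrative BIC values:

```python
import math

def bf01_from_bic(bic_null, bic_alt):
    """Approximate Bayes factor in favor of the null (BF01)
    from the BIC of the null and alternative models."""
    return math.exp((bic_alt - bic_null) / 2.0)

# Illustrative values only, not the study's data: a 6-point BIC
# difference favoring the null.
bf = bf01_from_bic(bic_null=210.0, bic_alt=216.0)
# bf = exp(3) ~ 20: the data are about 20x more likely under the null.
```

By the usual rough conventions, BF01 above 3 is taken as positive evidence for the null and above 20 as strong evidence, which is the kind of benchmark "significant Bayesian support for the null" alludes to.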


2020 ◽ Vol 5 (2) ◽ pp. 212
Author(s): Hamdi Ahmad Zuhri, Nur Ulfa Maulidevi

Review ranking is useful for giving users a better experience. Review ranking studies commonly use the upvote value, which does not represent urgency and so causes problems in prediction. Conversely, manual labeling across the full upvote range introduces high bias and inconsistency. The proposed solution is a classification approach to ranking reviews, where the labels are ordinal urgency classes. The experiment involved shallow learning models (logistic regression, naïve Bayesian, support vector machine, and random forest) and deep learning models (LSTM and CNN). To construct the classification model, the problem is broken down into several binary classifications, each predicting the tendency toward urgency according to how the classes are separated. The results show that the deep learning models outperform the other models in both classification and ranking evaluation. However, the review data used tend to contain the vocabulary of certain product domains, so further research is needed on data with more diverse vocabulary.
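The binary-decomposition scheme described here resembles the standard threshold approach to ordinal classification: K ordered classes are reduced to K-1 binary problems, each estimating P(class > k), and per-class probabilities are recovered by differencing. A minimal sketch of the recovery step, with illustrative urgency classes and threshold probabilities (not the paper's data):

```python
def class_probs_from_thresholds(p_greater):
    """Recover ordinal class probabilities from K-1 binary
    estimates p_greater[k] = P(class > k), for k = 0..K-2.
    Assumes p_greater is non-increasing (consistent thresholds)."""
    probs = []
    prev = 1.0
    for p in p_greater:
        probs.append(prev - p)   # P(class == k) = P(> k-1) - P(> k)
        prev = p
    probs.append(prev)           # last class: P(class == K-1)
    return probs

# Three urgency classes (low, medium, high) -> two binary models:
p_greater = [0.7, 0.2]          # P(> low) = 0.7, P(> medium) = 0.2
probs = class_probs_from_thresholds(p_greater)
# approximately [0.3, 0.5, 0.2]: "medium" is the most likely class
```

Each binary subproblem can be handled by any of the shallow or deep classifiers the abstract lists; the decomposition only dictates how their outputs are combined into an ordinal prediction and, from there, a ranking.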

