Generalization and search in risky environments

2017 ◽  
Author(s):  
Eric Schulz ◽  
Charley M. Wu ◽  
Quentin J. M. Huys ◽  
Andreas Krause ◽  
Maarten Speekenbrink

Abstract
How do people pursue rewards in risky environments, where some outcomes should be avoided at all costs? We investigate how participants search for spatially correlated rewards in scenarios where one must avoid sampling rewards below a given threshold. This requires not only balancing exploration and exploitation, but also reasoning about how to avoid potentially risky areas of the search space. Within risky versions of the spatially correlated multi-armed bandit task, we show that participants' behavior is well aligned with a Gaussian process function learning algorithm that chooses points based on a safe optimization routine. Moreover, using leave-one-block-out cross-validation, we find that participants adapt their sampling behavior to the riskiness of the task, although the underlying function learning mechanism remains relatively unchanged. These results show that participants can adapt their search behavior to the adversity of the environment, and they enrich our understanding of adaptive behavior in the face of risk and uncertainty.
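To make the safe-optimization idea above concrete, here is a minimal Python sketch, assuming a one-dimensional grid of arms, an RBF kernel, and a synthetic reward function; the kernel, threshold, and confidence multiplier are illustrative choices, not the parameters fitted in the paper. Arms whose pessimistic value estimate (mean minus two standard deviations) falls below the threshold are excluded, and the remaining arms are ranked by an optimistic upper confidence bound.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical 1-D reward function on a 30-arm grid; values below
    # `threshold` are "risky" and must not be sampled.
    grid = np.linspace(0, 1, 30).reshape(-1, 1)
    reward = lambda x: np.sin(3 * x).ravel()        # stand-in environment
    threshold = -0.5

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
    X, y = [grid[15]], [reward(grid[15:16])[0]]     # start from a known-safe arm

    for _ in range(10):
        gp.fit(np.vstack(X), np.array(y))
        mu, sd = gp.predict(grid, return_std=True)
        safe = mu - 2 * sd > threshold              # pessimistic safety check
        ucb = mu + 2 * sd                           # optimistic value estimate
        ucb[~safe] = -np.inf                        # never sample unsafe arms
        nxt = int(np.argmax(ucb))
        X.append(grid[nxt]); y.append(reward(grid[nxt:nxt + 1])[0])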

2020 ◽  
Vol 13 (2) ◽  
pp. 126-146
Author(s):  
A.B. Lanchakov ◽  
S.A. Filin ◽  
A.Zh. Yakushev

Subject. The article analyzes the expected effect of a portfolio of projects under risk and uncertainty when real options are used. Objectives. The purpose is to offer a more objective formula for assessing the expected effect of a portfolio of real investment projects under risk and uncertainty using real options, and to provide recommendations for improving portfolio efficiency. Methods. The study draws on the methods of real options and of evaluating investment projects through real option value, the cash flow discounting method, synthesis, and mathematical modeling. Results. We systematized the main types of real options and developed a formula for calculating the expected effect of implementing a project portfolio. The formula shows that accounting for the additional long-term costs embedded in a portfolio of real options, which are associated with using those options and therefore reduce the overall risk of the projects and the entire portfolio, makes such calculations more objective. Conclusions. When analyzing real options whose underlying instruments are real assets, the computational formulae for financial options often cannot be applied, as the two differ significantly. The systematization of the main types of real options helps expand the range of applicable management solutions. The proposed formula improves the efficiency of project insurance under risk and uncertainty and opens additional opportunities for the effective development of the company.
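The abstract does not reproduce the formula itself, so the following LaTeX block is only an illustrative guess at the general shape such a calculation could take: the expected portfolio effect as a sum of project values, each augmented by the value of its embedded real options and reduced by the long-term costs of holding those options.

    % Illustrative form only; NOT the authors' published formula.
    % E      : expected effect of the project portfolio
    % NPV_i  : discounted cash-flow value of project i
    % ROV_i  : value added by the real options embedded in project i
    % C_i    : additional long-term costs of holding those options
    E = \sum_{i=1}^{n} \bigl( \mathrm{NPV}_i + \mathrm{ROV}_i - C_i \bigr)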


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i831-i839
Author(s):  
Dong-gi Lee ◽  
Myungjun Kim ◽  
Sang Joon Son ◽  
Chang Hyung Hong ◽  
Hyunjung Shin

Abstract
Motivation: Recently, various approaches for diagnosing and treating dementia have received significant attention, especially in identifying key genes that are crucial for dementia. If the mutations of such key genes could be tracked, it would be possible to predict the time of onset of dementia and significantly aid in developing drugs to treat it. However, gene finding involves tremendous cost, time and effort. To alleviate these problems, research on utilizing computational biology to decrease the search space of candidate genes is actively conducted. In this study, we propose a framework in which diseases, genes and single-nucleotide polymorphisms are represented by a layered network, and key genes are predicted by a machine learning algorithm. The algorithm utilizes a network-based semi-supervised learning model that can be applied to layered data structures.
Results: The proposed method was applied to a dataset extracted from public databases related to diseases and genes, with data collected from 186 patients. A portion of the key genes obtained using the proposed method was verified in silico through PubMed literature; the remaining genes were left as possible candidate genes.
Availability and implementation: The code for the framework will be available at http://www.alphaminers.net/.
Supplementary information: Supplementary data are available at Bioinformatics online.
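The authors' layered network model is not reproduced in the abstract, but the family of methods it names, network-based semi-supervised learning, can be sketched with standard label propagation on a single gene network; the adjacency matrix, seed gene, and parameters below are toy assumptions.

    import numpy as np

    def label_propagation(W, y, alpha=0.8, iters=50):
        """Semi-supervised label propagation on a graph.

        W : (n, n) symmetric adjacency matrix of the gene network
        y : (n,) seed labels, 1.0 for known key genes, 0.0 for unknown
        """
        d = W.sum(axis=1)
        d[d == 0] = 1.0
        S = W / np.sqrt(np.outer(d, d))   # symmetric normalization D^-1/2 W D^-1/2
        f = y.astype(float).copy()
        for _ in range(iters):
            f = alpha * S @ f + (1 - alpha) * y   # diffuse, anchored to seeds
        return f                                   # relevance scores per gene

    # Toy example: 4 genes, gene 0 is a known key gene.
    W = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    scores = label_propagation(W, np.array([1.0, 0, 0, 0]))
    print(np.argsort(-scores))  # genes ranked by propagated relevance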


2018 ◽  
Vol 7 (2.22) ◽  
pp. 35
Author(s):  
Kavitha M ◽  
Mohamed Mansoor Roomi S ◽  
K Priya ◽  
Bavithra Devi K

The Automatic Teller Machine (ATM) plays an important role in modern economic society. ATM centers are often located in remote areas that are at high risk due to increasing crime and robbery rates. These centers rely on surveillance techniques for protection, yet even after a surveillance mechanism is installed, robbers can fool the security system by hiding their face behind a mask or helmet. Hence, an automatic mask detection algorithm is required to raise an alert when the ATM is at risk. In this work, a Gaussian Mixture Model (GMM) is applied for foreground detection to extract the region of interest (ROI), i.e. the human being. The face region is acquired from the foreground region through torso partitioning and by applying the Viola-Jones algorithm within this search space. Parts of the face such as the eye pair, nose, and mouth are then extracted, and a state model is developed to detect the mask.
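A minimal OpenCV sketch of the first two stages (GMM foreground detection followed by Viola-Jones face search on the foreground) might look as follows; the video path and detector parameters are placeholders, and the torso partitioning and the part-based state model from the paper are only indicated in comments.

    import cv2

    # GMM-based foreground model (MOG2) and a Haar cascade for faces.
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("atm_feed.mp4")   # placeholder input
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                         # GMM foreground mask
        fg = cv2.bitwise_and(frame, frame, mask=mask)  # keep moving regions only
        gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
        # The paper restricts the Viola-Jones search to the upper torso of the
        # foreground blob; here we simply search the whole foreground.
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        # Missing eye-pair/nose/mouth detections inside each face box would
        # feed the mask-alert state model described in the abstract.
    cap.release()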


2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically, without requiring interaction from the person. The system is based not only on quantum computation and measurements to extract the feature vectors in the characterization phase, but also on a learning algorithm (SVM) to classify and recognize the person. The research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the area of interest, which is unaffected by facial expression. The approach can handle incomplete and noisy images, rejects non-facial areas automatically, and can deal with the presence of holes in the meshed and textured 3D image. It is also stable against small translations and rotations of the face. All experimental tests were conducted on two 3D face datasets, FRAV 3D and GAVAB. The results of the proposed approach are promising: they show that it is competitive with similar approaches in terms of accuracy, robustness, and flexibility. It achieves a high recognition rate of 95.35% for faces with neutral and non-neutral expressions in identification, 98.36% for authentication with GAVAB, and 100% with some galleries of the FRAV 3D dataset.
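The SVM classification stage can be sketched as follows, with randomly generated feature vectors standing in for the measurement-based descriptors the paper extracts; the feature dimensionality, subject count, and SVM hyperparameters are assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # X: one row of anthropometric/geometric measurements per 3-D face scan,
    # y: subject identity labels. Random data stands in for real features.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 24))      # 24 hypothetical measurements per scan
    y = rng.integers(0, 10, size=200)   # 10 hypothetical subjects

    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X[:150], y[:150])
    print("identification accuracy:", clf.score(X[150:], y[150:]))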


2020 ◽  
Author(s):  
Charley M. Wu ◽  
Eric Schulz ◽  
Samuel J Gershman

How do people learn functions on structured spaces? And how do they use this knowledge to guide their search for rewards in situations where the number of options is large? We study human behavior on structures with graph-correlated values and propose a Bayesian model of function learning to describe and predict their behavior. Across two experiments, one assessing function learning and one assessing the search for rewards, we find that our model captures human predictions and sampling behavior better than several alternatives, generates human-like learning curves, and also captures participants’ confidence judgements. Our results extend past models of human function learning and reward learning to more complex, graph-structured domains.
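One standard way to encode graph-correlated values, and a plausible reading of the Bayesian model described here, is Gaussian process regression with a diffusion kernel on the graph; the ring graph, diffusion parameter, and observations in this sketch are illustrative, not the paper's experimental structures.

    import numpy as np
    from scipy.linalg import expm

    # Small ring graph; values on connected nodes are assumed correlated.
    n = 8
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
    K = expm(-0.5 * L)                 # diffusion kernel, beta = 0.5 (assumed)

    # GP posterior over all nodes given noisy observations at `obs`.
    obs, y = np.array([0, 3]), np.array([1.0, -0.5])
    noise = 1e-2
    K_oo = K[np.ix_(obs, obs)] + noise * np.eye(len(obs))
    mu = K[:, obs] @ np.linalg.solve(K_oo, y)              # posterior mean
    cov = K - K[:, obs] @ np.linalg.solve(K_oo, K[obs, :]) # posterior cov
    print(mu.round(2))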


2013 ◽  
Vol 850-851 ◽  
pp. 880-883
Author(s):  
Yong Fang Wang ◽  
Xin Luan ◽  
Da Lei Song ◽  
Li Ping Chen

To address the problem of invalid data caused by wavenumber spectrum mismatch in turbulence observation data, an algorithm for turbulent wavenumber spectrum matching based on SVM is proposed. Category labels are obtained from the pre-processed raw data by a cross-validation algorithm, and the optimum parameters of the classifier are then obtained through the SVM learning algorithm. Validation on sea trial data indicates that the algorithm has high matching accuracy and provides a new way to perform turbulence wavenumber spectrum matching.
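A sketch of the parameter-selection step, assuming scikit-learn and synthetic stand-ins for the spectral features, could combine an RBF-kernel SVM with a cross-validated grid search; the feature construction and parameter grid below are placeholders rather than the authors' setup.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    # X: spectral features per observation segment, y: match/mismatch labels
    # from pre-processing (synthetic stand-ins here).
    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 16))
    y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

    # Cross-validation picks the classifier's optimum C and gamma.
    search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                          cv=5)
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))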


Author(s):  
Vicente González-Prida Díaz ◽  
Jesus Pedro Zamora Bonilla ◽  
Pablo Viveros Gunckel

This chapter considers the effects of the new Industry 4.0 concept on decision making, particularly on reducing the uncertainty and risk associated with any choice between alternatives. For this purpose, the chapter begins by dealing with the concepts of risk and uncertainty and their epistemological evolution. After observing certain trends and recent studies in this regard, the authors address a more philosophical perception of risk, mainly in aspects related to engineering and social perception. The concept of human reliability is also reviewed, together with how it can be improved by applying emerging technologies, considering some methodological proposals for improving decision making. After that, some possible future research directions are briefly discussed. Finally, the chapter concludes by highlighting its key aspects as context for the other chapters of the book.


1996 ◽  
Vol 8 (7) ◽  
pp. 1391-1420 ◽  
Author(s):  
David H. Wolpert

This is the second of two papers that use off-training-set (OTS) error to investigate the assumption-free relationship between learning algorithms. The first paper discusses a particular set of ways to compare learning algorithms, according to which there are no distinctions between learning algorithms. This second paper concentrates on ways of comparing learning algorithms different from those used in the first paper, and in particular discusses the associated a priori distinctions that do exist between learning algorithms. It is shown, loosely speaking, that for loss functions other than zero-one (e.g., quadratic loss), there are a priori distinctions between algorithms. However, even for such loss functions, any algorithm is equivalent on average to its "randomized" version, and in this sense still has no first-principles justification in terms of average error. Nonetheless, as this paper discusses, it may be that (for example) cross-validation has better head-to-head minimax properties than "anti-cross-validation" (choosing the learning algorithm with the largest cross-validation error). This may be true even for zero-one loss, a loss function for which the notion of "randomization" is not relevant. This paper also analyzes averages over hypotheses rather than targets. Such analyses hold for all possible priors over targets. Accordingly, they prove, as a particular example, that cross-validation cannot be justified as a Bayesian procedure. In fact, for a very natural restriction of the class of learning algorithms, one should use anti-cross-validation rather than cross-validation (!).
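As a toy illustration of the two selection rules being contrasted (not of the paper's formal results), the sketch below scores a few candidate learners by cross-validation error and then picks either the smallest error (cross-validation) or the largest (anti-cross-validation); the data and candidate models are placeholders.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 5))
    y = (X[:, 0] > 0).astype(int)

    candidates = {k: KNeighborsClassifier(n_neighbors=k) for k in (1, 5, 15)}
    cv_err = {k: 1 - cross_val_score(m, X, y, cv=5).mean()
              for k, m in candidates.items()}

    chosen_by_cv = min(cv_err, key=cv_err.get)        # cross-validation
    chosen_by_anti_cv = max(cv_err, key=cv_err.get)   # anti-cross-validation
    print(cv_err, chosen_by_cv, chosen_by_anti_cv)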


2019 ◽  
Vol 19 (01) ◽  
pp. 1940009 ◽  
Author(s):  
AHMAD MOHSIN ◽  
OLIVER FAUST

Cardiovascular disease has been the leading cause of death worldwide. Electrocardiogram (ECG)-based heart disease diagnosis is simple, fast, cost-effective and non-invasive. However, interpreting ECG waveforms can be taxing for a clinician who has to deal with hundreds of patients in a day. We propose computing machinery to reduce the workload of clinicians and to streamline clinical work processes. Replacing human labor with machine work can lead to cost savings, and it is furthermore possible to improve diagnosis quality by reducing inter- and intra-observer variability. To support that claim, we created a computer program that recognizes normal, Dilated Cardiomyopathy (DCM), Hypertrophic Cardiomyopathy (HCM) or Myocardial Infarction (MI) ECG signals. The program combines Discrete Wavelet Transform (DWT)-based feature extraction with K-Nearest Neighbor (K-NN) classification to discriminate between the signal classes. The system was verified with tenfold cross-validation based on labeled data from the PTB diagnostic ECG database. During validation, we adjusted the number of neighbors [Formula: see text] for the machine learning algorithm. For [Formula: see text], the training-set accuracy and cross-validation accuracy were 98.33% and 95%, respectively. However, for [Formula: see text], the training-set accuracy stayed constant while the cross-validation accuracy dropped drastically to 80%. Hence, the setting [Formula: see text] prevails. Furthermore, a confusion matrix showed that normal data was identified with 96.7% accuracy, 99.6% sensitivity and 99.4% specificity, which corresponds to a 3.3% error rate: for every 30 normal signals, the classifier will mislabel only 1 of them as HCM. With these results, we are confident that the proposed system can improve the speed and accuracy with which normal and diseased subjects are identified, so that diseased subjects can be treated earlier, improving their probability of survival.
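A minimal sketch of the DWT-plus-K-NN pipeline, assuming PyWavelets and scikit-learn and using synthetic signals in place of PTB beats; the wavelet, decomposition level, feature statistics, and the neighbor count (elided as [Formula: see text] in the abstract) are all placeholder choices.

    import numpy as np
    import pywt                                  # PyWavelets, assumed available
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def dwt_features(beat, wavelet="db4", level=4):
        """Summarize each DWT sub-band of an ECG beat by simple statistics."""
        coeffs = pywt.wavedec(beat, wavelet, level=level)
        return np.array([f(c) for c in coeffs for f in (np.mean, np.std)])

    # Synthetic stand-ins for labeled PTB beats (4 classes).
    rng = np.random.default_rng(4)
    X = np.array([dwt_features(rng.normal(size=256)) for _ in range(200)])
    y = rng.integers(0, 4, size=200)

    knn = KNeighborsClassifier(n_neighbors=3)        # k is a placeholder value
    print(cross_val_score(knn, X, y, cv=10).mean())  # tenfold cross-validation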


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1474
Author(s):  
Peter Korošec ◽  
Tome Eftimov

When making a statistical analysis of single-objective optimization algorithms' performance, researchers usually estimate it from the obtained optimization results in the form of minimal/maximal values. Though this is a good indicator of an algorithm's performance, it does not provide any information about why that performance arises. One way to obtain additional information about the performance of the algorithms is to study their exploration and exploitation abilities. In this paper, we present an easy-to-use, step-by-step pipeline for performing exploration and exploitation analysis of single-objective optimization algorithms. The pipeline is based on a web-service-based e-Learning tool called DSCTool, which can be used for making statistical analyses not only with regard to the obtained solution values but also with regard to the distribution of the solutions in the search space. Its usage does not require any special statistical knowledge from the user. The knowledge gained from such analysis can be used to better understand an algorithm's performance when it is compared to other algorithms or while performing hyperparameter tuning.
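DSCTool is a web service whose API is not shown here; instead, the sketch below illustrates one generic way to quantify exploration from logged solution positions, by tracking the mean pairwise distance of the population over generations (a shrinking value signals a shift from exploration to exploitation).

    import numpy as np

    def mean_pairwise_distance(population):
        """Average pairwise distance of a population in the search space --
        one simple proxy for how much an algorithm is still exploring."""
        P = np.asarray(population)
        diffs = P[:, None, :] - P[None, :, :]
        d = np.sqrt((diffs ** 2).sum(-1))
        n = len(P)
        return d.sum() / (n * (n - 1))

    # Hypothetical per-generation populations logged by an optimizer.
    rng = np.random.default_rng(5)
    gens = [rng.normal(scale=1.0 / (g + 1), size=(20, 3)) for g in range(10)]
    diversity = [mean_pairwise_distance(P) for P in gens]
    print(np.round(diversity, 3))  # shrinking diversity = more exploitation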

