Learning in visual regions as support for the bias in future value-driven choice

2019 ◽  
Vol 30 (4) ◽  
pp. 2005-2018 ◽  
Author(s):  
Sara Jahfari ◽  
Jan Theeuwes ◽  
Tomas Knapen

Abstract Reinforcement learning can bias decision-making toward the option with the highest expected outcome. Cognitive learning theories associate this bias with the constant tracking of stimulus values and the evaluation of choice outcomes in the striatum and prefrontal cortex. Decisions, however, first require processing of sensory input, and to date, we know far less about the interplay between learning and perception. This functional magnetic resonance imaging study (N = 43) relates visual blood oxygen level–dependent (BOLD) responses to value beliefs during choice and signed prediction errors after outcomes. To understand these relationships, which co-occurred in the striatum, we evaluated their relevance for predicting future value-based decisions in a separate transfer phase where learning was already established. We decoded choice outcomes with 70% accuracy using a supervised machine learning algorithm that was given trial-by-trial BOLD from visual regions alongside more traditional motor, prefrontal, and striatal regions. Importantly, this decoding of future value-driven choice outcomes again highlighted an important role for visual activity. These results raise the intriguing possibility that the tracking of value in visual cortex supports the striatal bias toward the more valued option in future choice.
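
The decoding step described in this abstract can be illustrated with a toy sketch. This is not the authors' pipeline: it uses a nearest-centroid classifier (a deliberately simple supervised learner), invented trial-by-trial ROI values, and leave-one-out cross-validation to show how a per-trial choice outcome could be predicted from regional BOLD features.

```python
# Illustrative sketch (not the study's actual pipeline): decode a binary
# choice outcome from trial-by-trial ROI responses with a nearest-centroid
# classifier, scored by leave-one-out cross-validation. Values are made up.

def nearest_centroid_fit(X, y):
    """Return per-class mean feature vectors (centroids)."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

def loo_accuracy(X, y):
    """Leave-one-out cross-validated decoding accuracy."""
    hits = 0
    for i in range(len(X)):
        centroids = nearest_centroid_fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += nearest_centroid_predict(centroids, X[i]) == y[i]
    return hits / len(X)

# Toy "trials": each row = mean BOLD in [visual, motor, striatal] ROIs.
X = [[1.0, 0.2, 0.5], [1.1, 0.1, 0.6], [0.9, 0.3, 0.4],
     [0.2, 1.0, 0.1], [0.1, 1.2, 0.2], [0.3, 0.9, 0.0]]
y = [1, 1, 1, 0, 0, 0]  # 1 = chose higher-value option, 0 = lower
print(loo_accuracy(X, y))  # perfectly separable toy data -> 1.0
```

On real single-trial BOLD the classes overlap heavily, which is why the reported 70% accuracy, well above the 50% chance level, is the meaningful quantity.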


2005 ◽  
Vol 17 (6) ◽  
pp. 918-927 ◽  
Author(s):  
Michael Rose ◽  
Hilde Haider ◽  
Christian Büchel

The detection of unexpected events is a fundamental process of learning. Theories of cognitive control and previous imaging results indicate a prominent role of the prefrontal cortex in evaluating the congruency between expected and actual outcomes. In most cases, this attributed function is based on results where the person is consciously aware of the discrepancy. In this functional magnetic resonance imaging (fMRI) study, we examined violations of predicted outcomes that did not enter conscious awareness. Two groups were trained with nearly identical material, and the effects of new stimuli were assessed after learning. For the first group, the material was arranged with a hidden regularity. In this incidental learning situation, volunteers acquired implicit knowledge about structural response regularities, as demonstrated by an increase in reaction time when new stimuli were introduced that violated the learned relations. To differentiate the detection of stimuli that deviate from learned expectations from more nonspecific effects generated by novel, unfamiliar stimuli, the second group was trained with rearranged material without a hidden regularity. No behavioral effects were found for the introduction of new stimuli in the group without implicit learning. Comparing the two groups, specific fMRI effects concerning the violation of implicitly learned expectations were found in the ventral prefrontal cortex and in the medial temporal lobe. In accord with theories of learning, the results show a direct influence of the detection of prediction errors on neuronal activity in learning-related structures, even in the absence of conscious knowledge about the predictions or their violations.


2019 ◽  
Vol 23 (1) ◽  
pp. 12-21 ◽  
Author(s):  
Shikha N. Khera ◽  
Divya

The information technology (IT) industry in India has been facing a systemic issue of high attrition in the past few years, resulting in monetary and knowledge-based losses to companies. The aim of this research is to develop a model to predict employee attrition and give organizations the opportunity to address retention issues. A predictive model was developed based on a supervised machine learning algorithm, the support vector machine (SVM). Archival employee data (consisting of 22 input features) were collected from the human resource databases of three IT companies in India, including each employee's employment status (the response variable) at the time of collection. Accuracy results from the confusion matrix showed that the SVM model has an accuracy of 85 per cent. The results also show that the model performs better at predicting who will leave the firm than at predicting who will stay.
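
The headline numbers in this abstract come from a confusion matrix, so a minimal sketch of that evaluation may help. The counts below are invented, not the study's data; they only show how a single overall accuracy can hide a gap between recall for leavers and recall for stayers, the asymmetry the authors report.

```python
# Sketch of evaluating a binary attrition classifier via its confusion
# matrix. All counts are invented for illustration.

def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn), treating 'will leave' as the positive class (1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def recall_leavers(tp, fp, fn, tn):   # sensitivity for "will leave"
    return tp / (tp + fn)

def recall_stayers(tp, fp, fn, tn):   # specificity, i.e. recall for "will stay"
    return tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # ground-truth employment status
y_pred = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]   # hypothetical model predictions
tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
print(accuracy(tp, fp, fn, tn))        # 0.8 overall
print(recall_leavers(tp, fp, fn, tn))  # 1.0: every leaver caught
print(recall_stayers(tp, fp, fn, tn))  # ~0.67: stayers misclassified more often
```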


2020 ◽  
Vol 36 (6) ◽  
pp. 439-442
Author(s):  
Alissa Jell ◽  
Christina Kuttler ◽  
Daniel Ostler ◽  
Norbert Hüser

<b><i>Introduction:</i></b> Esophageal motility disorders have a severe impact on patients’ quality of life. While high-resolution manometry (HRM) is the gold standard in the diagnosis of esophageal motility disorders, intermittently occurring muscular deficiencies often remain undiscovered if they do not cause an intense level of discomfort or suffering in patients. Ambulatory long-term HRM allows the circadian (dys)function of the esophagus to be studied in a unique way. With the prolonged examination period of 24 h, however, comes an immense increase in data, whose evaluation requires personnel and time not available in clinical routine. Artificial intelligence (AI) might contribute here by performing an autonomous analysis. <b><i>Methods:</i></b> On the basis of 40 previously performed and manually tagged long-term HRM recordings from patients with suspected temporary esophageal motility disorders, we implemented a supervised machine learning algorithm for automated swallow detection and classification. <b><i>Results:</i></b> For a 24-h long-term HRM recording, this algorithm reduced the evaluation time from 3 days to a core evaluation time of 11 min for automated swallow detection and clustering, plus an additional 10–20 min of evaluation time depending on the complexity and diversity of motility disorders in the examined patient. In 12.5% of patients with suspected esophageal motility disorders, AI-enabled long-term HRM revealed new findings relevant to subsequent therapy. <b><i>Conclusion:</i></b> This new approach paves the way for the clinical use of long-term HRM in patients with temporary esophageal motility disorders and might serve as an ideal and clinically relevant application of AI.
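
The abstract does not specify how swallows are detected before classification, so the sketch below only illustrates the general shape of such a step: a threshold-and-merge event detector run over a pressure trace, with all values and parameters invented. A real system would work on multichannel manometry and feed the detected segments to the learned classifier.

```python
# Hedged sketch of an event-detection step (not the paper's algorithm):
# mark spans where pressure exceeds a threshold and merge bursts separated
# by short sub-threshold gaps. Pressure values (mmHg) are invented.

def detect_events(trace, threshold=30.0, min_gap=3):
    """Return (start, end) sample-index pairs of supra-threshold events,
    merging events separated by fewer than min_gap samples."""
    events = []
    start = None
    for i, p in enumerate(trace):
        if p >= threshold and start is None:
            start = i                      # event begins
        elif p < threshold and start is not None:
            events.append((start, i - 1))  # event ends
            start = None
    if start is not None:
        events.append((start, len(trace) - 1))
    merged = []
    for s, e in events:
        if merged and s - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], e)  # close gap: same swallow
        else:
            merged.append((s, e))
    return merged

trace = [5, 6, 40, 55, 42, 7, 45, 50, 6, 5, 5, 5, 38, 44, 6]
print(detect_events(trace))  # two events: (2, 7) and (12, 13)
```

Detected segments, rather than the raw 24-h trace, would then be the units a supervised classifier labels, which is where the large reduction in manual evaluation time comes from.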


Friction ◽  
2021 ◽  
Author(s):  
Vigneashwara Pandiyan ◽  
Josef Prost ◽  
Georg Vorlaufer ◽  
Markus Varga ◽  
Kilian Wasmer

Abstract Functional surfaces in relative contact and motion are prone to wear and tear, resulting in loss of efficiency and performance of the workpieces/machines. Wear occurs in the form of adhesion, abrasion, scuffing, galling, and scoring between contacts. However, the rate of wear depends primarily on the physical properties of the surfaces and the surrounding environment. Monitoring the integrity of surfaces by offline inspections leads to significant wasted machine time. A potential alternative to the offline inspection currently practiced in industry is the analysis of sensor signatures capable of capturing the wear state and correlating it with the wear phenomenon, followed by in situ classification using a state-of-the-art machine learning (ML) algorithm. Though this technique is better than offline inspection, it has inherent disadvantages for training the ML models. Ideally, supervised training of ML models requires the classes in the classification dataset to be of equal weight to avoid bias. Collecting such a dataset is very cumbersome and expensive in practice, as in real industrial applications the malfunction period is minimal compared to normal operation. Furthermore, classification models cannot recognize new, unfamiliar wear phenomena as lying outside the normal regime. As a promising alternative, in this work we propose a methodology able to differentiate the abnormal regimes, i.e., wear phenomenon regimes, from the normal regime. This is carried out by familiarizing the ML algorithms only with the distribution of the acoustic emission (AE) signals, captured using a microphone, that relate to the normal regime. As a result, the ML algorithms can detect whether a new, unseen signal overlaps with the learnt distributions. To achieve this goal, a generative convolutional neural network (CNN) architecture based on a variational autoencoder (VAE) is built and trained. During validation of the proposed CNN architecture, we identified acoustic signals corresponding to the normal and abnormal wear regimes with accuracies of 97% and 80%, respectively. Hence, our approach shows very promising results for in situ and real-time condition monitoring, or even wear prediction, in tribological applications.
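
The paper's detector is a convolutional VAE; as a deliberately simplistic stand-in for the same one-class idea, the sketch below "learns" only the centroid and spread of normal feature vectors and flags anything too far away. All numbers, and the 1.5x distance margin, are invented for illustration; a VAE would instead flag signals with high reconstruction error or low likelihood.

```python
# One-class anomaly detection sketch (a crude stand-in for the paper's
# VAE): fit a model of the normal regime only, then flag new signals
# that fall outside it. Feature values are invented.

def fit_normal_model(signals):
    """Per-feature mean and a distance threshold from normal data only."""
    n = len(signals)
    mean = [sum(col) / n for col in zip(*signals)]
    def dist(x):
        return sum((a - b) ** 2 for a, b in zip(x, mean)) ** 0.5
    # Threshold = largest training distance plus a margin (a free choice).
    threshold = 1.5 * max(dist(s) for s in signals)
    return mean, threshold

def is_abnormal(x, mean, threshold):
    """True if x lies outside the learnt normal-regime neighborhood."""
    d = sum((a - b) ** 2 for a, b in zip(x, mean)) ** 0.5
    return d > threshold

# Toy AE feature vectors (e.g. band energies) from normal sliding contact.
normal = [[1.0, 0.9, 1.1], [1.1, 1.0, 1.0], [0.9, 1.1, 0.9]]
mean, thr = fit_normal_model(normal)
print(is_abnormal([1.0, 1.0, 1.0], mean, thr))  # False: looks normal
print(is_abnormal([3.0, 0.1, 2.5], mean, thr))  # True: wear-like outlier
```

The key property this shares with the VAE approach is that no abnormal examples are needed for training, which sidesteps the class-imbalance problem the abstract describes.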


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


Hypertension ◽  
2021 ◽  
Vol 78 (5) ◽  
pp. 1595-1604
Author(s):  
Fabrizio Buffolo ◽  
Jacopo Burrello ◽  
Alessio Burrello ◽  
Daniel Heinrich ◽  
Christian Adolf ◽  
...  

Primary aldosteronism (PA) is the cause of arterial hypertension in 4% to 6% of patients, and 30% of patients with PA are affected by unilateral and surgically curable forms. Current guidelines recommend screening for PA in ≈50% of patients with hypertension on the basis of individual factors, while some experts suggest screening all patients with hypertension. To define the risk of PA and tailor the diagnostic workup to the individual risk of each patient, we developed a conventional scoring system and supervised machine learning algorithms using a retrospective cohort of 4059 patients with hypertension. On the basis of 6 widely available parameters, we developed a numerical score and 308 machine learning-based models, selecting the one with the highest diagnostic performance. After validation, we obtained high predictive performance with our score (optimized sensitivity of 90.7% for PA and 92.3% for unilateral PA [UPA]). The machine learning-based model provided the highest performance, with an area under the curve of 0.834 for PA and 0.905 for diagnosis of UPA, and optimized sensitivities of 96.6% for PA and 100.0% for UPA at validation. Application of the prediction tools allowed the identification of a subgroup of patients at very low risk of PA (0.6% for both models) with a null probability of having UPA. In conclusion, this score and the machine learning algorithm can accurately predict the individual pretest probability of PA in patients with hypertension and, using the machine learning-based model, can circumvent screening in up to 32.7% of patients without omitting patients with surgically curable UPA.
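
A conventional additive score of the kind this abstract describes can be sketched in a few lines. The parameters, point values, and cutoff below are hypothetical, not the published score; the sketch only shows the mechanism: sum points over widely available parameters, then screen everyone at or above a cutoff chosen to favor sensitivity.

```python
# Hypothetical additive risk score (NOT the published PA score): one point
# per risk feature present, with a low cutoff tuned for high sensitivity.

def pa_risk_points(patient):
    """Sum one point per hypothetical risk feature that is present."""
    points = 0
    points += patient["systolic_bp"] >= 160           # resistant-range BP
    points += patient["serum_potassium"] < 3.5        # hypokalemia
    points += patient["antihypertensive_drugs"] >= 3  # multiple drugs needed
    points += patient["age"] < 50                     # younger onset
    return points

def screen_for_pa(patient, cutoff=1):
    """A low cutoff trades specificity for sensitivity: few true PA cases
    are missed, at the cost of screening more patients."""
    return pa_risk_points(patient) >= cutoff

high_risk = {"systolic_bp": 170, "serum_potassium": 3.1,
             "antihypertensive_drugs": 3, "age": 45}
low_risk = {"systolic_bp": 135, "serum_potassium": 4.2,
            "antihypertensive_drugs": 1, "age": 60}
print(pa_risk_points(high_risk), screen_for_pa(high_risk))  # 4 True
print(pa_risk_points(low_risk), screen_for_pa(low_risk))    # 0 False
```

Patients scoring below the cutoff form the low-risk subgroup in whom screening could be skipped, which is how the paper arrives at its 32.7% figure with its own (different, validated) parameters.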


2018 ◽  
Vol 7 (3) ◽  
pp. 1372
Author(s):  
Soudamini Hota ◽  
Sudhir Pathak

‘Sentiment’ literally means ‘emotion’. Sentiment analysis, synonymous with opinion mining, is a type of data mining that refers to the analysis of data obtained from microblogging sites, social media updates, online news reports, user reviews, etc., in order to study people’s sentiments towards an event, organization, product, brand, person, etc. In this work, sentiment classification is done into multiple classes. The proposed methodology, based on the KNN classification algorithm, shows an improvement over an existing methodology based on the SVM classification algorithm. The data used for analysis were taken from Twitter, this being the most popular microblogging site, and were extracted using Python’s Tweepy. The N-gram modeling technique has been used for feature extraction, and the supervised machine learning algorithm k-nearest neighbor (KNN) has been used for sentiment classification. The performance of the proposed and existing techniques is compared in terms of accuracy, precision, and recall, and it is concluded that the proposed technique performs better on all the standard evaluation parameters.
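
The pipeline this abstract names, word n-gram features fed to a k-nearest-neighbor classifier, can be sketched end to end on toy data. The tweets, labels, and overlap-count similarity below are invented for illustration; the paper's exact feature weighting and distance metric are not specified in the abstract.

```python
# Toy sketch of the described pipeline: word-bigram features + KNN.
# Training texts and labels are invented.

from collections import Counter

def bigrams(text):
    """Word-bigram feature counts for one text."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def similarity(a, b):
    """Number of shared bigram occurrences (Counter intersection)."""
    return sum((a & b).values())

def knn_predict(train, text, k=3):
    """Majority label among the k training texts most similar to `text`."""
    feats = bigrams(text)
    ranked = sorted(train, key=lambda tl: similarity(bigrams(tl[0]), feats),
                    reverse=True)
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)

train = [
    ("i really love this phone", "positive"),
    ("really love the new update", "positive"),
    ("love this camera so much", "positive"),
    ("i really hate this phone", "negative"),
    ("really hate the new update", "negative"),
    ("hate this battery so much", "negative"),
]
print(knn_predict(train, "i love this phone so much"))  # positive
```

A real system would add the extraction step (e.g. via Tweepy), larger n-gram ranges, and the accuracy/precision/recall evaluation the paper reports.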

