Expectation in Melody: The Influence of Context and Learning

2006
Vol 23 (5)
pp. 377-405
Author(s):
Marcus T. Pearce
Geraint A. Wiggins

The Implication-Realization (IR) theory (Narmour, 1990) posits two cognitive systems involved in the generation of melodic expectations: The first consists of a limited number of symbolic rules that are held to be innate and universal; the second reflects the top-down influences of acquired stylistic knowledge. Aspects of both systems have been implemented as quantitative models in research which has yielded empirical support for both components of the theory (Cuddy & Lunny, 1995; Krumhansl, 1995a, 1995b; Schellenberg, 1996, 1997). However, there is also evidence that the implemented bottom-up rules constitute too inflexible a model to account for the influence of the musical experience of the listener and the melodic context in which expectations are elicited. A theory is presented, according to which both bottom-up and top-down descriptions of observed patterns of melodic expectation may be accounted for in terms of the induction of statistical regularities in existing musical repertoires. A computational model that embodies this theory is developed and used to reanalyze existing experimental data on melodic expectancy. The results of three experiments with increasingly complex melodic stimuli demonstrate that this model is capable of accounting for listeners’ expectations as well as or better than the two-factor model of Schellenberg (1997).
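The statistical-induction account described above can be illustrated with a toy bigram (first-order Markov) model of melodic expectancy. The corpus, pitch values, and function names below are invented for illustration; the authors' actual model conditions on much richer contexts than a single preceding pitch.

```python
from collections import defaultdict

def train_bigram(melodies):
    """Count pitch-to-pitch transitions across a toy corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def expectancy(counts, context, candidate):
    """Probability of `candidate` following `context` under the learned model."""
    total = sum(counts[context].values())
    return counts[context][candidate] / total if total else 0.0

# Toy corpus: MIDI pitch numbers for three short melodic fragments.
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64], [64, 62, 60, 62, 64]]
model = train_bigram(corpus)
# After hearing pitch 62, the model expects 64 or 60; an unheard
# continuation such as 67 receives zero expectancy.
```

Patterns of expectation thus fall out of exposure statistics alone, with no hand-coded symbolic rules.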

2018
Vol 22 (8)
pp. 4425-4447
Author(s):
Manuel Antonetti
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other sources of uncertainty, which challenge the assumption that the more expert knowledge is used, the better the results obtained. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and radar-based quantitative estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model needs to accurately reproduce the expected DRPs must be agreed upon in advance.
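The ANOVA step described above can be sketched in miniature as a one-way variance decomposition over performance scores grouped by one factor (here, mapping approach). All scores and group names below are invented for illustration; the study's actual ANOVA spans four factors.

```python
def one_way_anova(groups):
    """Partition total variance into between-group and within-group sums of squares."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return ss_between, ss_within

# Hypothetical model-efficiency scores for three mapping approaches
# (three modelling chains each).
simple   = [0.61, 0.64, 0.60]
medium   = [0.66, 0.68, 0.65]
detailed = [0.70, 0.69, 0.71]
ss_b, ss_w = one_way_anova([simple, medium, detailed])
# A large between/within ratio indicates the mapping approach explains
# much of the spread in model performance.
```

The same decomposition, extended to several crossed factors, attributes simulation uncertainty to forcing data, process maps, parameterisation, and allocation strategy.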


2019
Author(s):
Yuru Song
Mingchen Yao
Helen Kemprecos
Áine Byrne
Zhengdong Xiao
...  

Abstract. Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences have a high degree of variability depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we use a predictive coding paradigm to characterize both evoked and spontaneous pain. We record the local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats—two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further propose a framework of predictive coding to investigate the temporal coordination of oscillatory activity between the S1 and ACC. Specifically, we develop a high-level, empirical and phenomenological model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a mechanistic mean-field model to describe the mesoscopic population neuronal dynamics in the S1 and ACC populations, in both naive and chronic pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings, but also provide new mechanistic insight into the uncertainty of expectation, placebo or nocebo effect, and chronic pain.
Author Summary. Pain perception in the mammalian brain is encoded through multiple brain circuits. The experience of pain is often associated with brain rhythms or neuronal oscillations at different frequencies. Understanding the temporal coordination of neural oscillatory activity from different brain regions is important for dissecting pain circuit mechanisms and revealing differences between distinct pain conditions. Predictive coding is a general computational framework to understand perceptual inference by integrating bottom-up sensory information and top-down expectation.
Supported by experimental data, we propose a predictive coding framework for pain perception and develop empirical and biologically constrained computational models to characterize the oscillatory dynamics of neuronal populations in two cortical circuits, one for the sensory-discriminative experience and the other for the affective-emotional experience, and further characterize their temporal coordination under various pain conditions. Our computational study of a biologically constrained neuronal population model reveals important mechanistic insights into pain perception, placebo analgesia, and chronic pain.
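The core predictive coding computation these models build on can be sketched as a minimal, rate-based update in which a top-down expectation is corrected by a precision-weighted bottom-up prediction error. This is a generic textbook sketch under assumed precision values, not the authors' mean-field model.

```python
def predictive_coding_step(expectation, observation, pi_bottom, pi_top):
    """One update step: the top-down expectation is nudged toward the
    observation in proportion to the relative precision (inverse variance)
    of the bottom-up signal versus the prior expectation."""
    error = observation - expectation          # bottom-up prediction error
    gain = pi_bottom / (pi_bottom + pi_top)    # relative reliability of input
    return expectation + gain * error

# With reliable sensory input (high bottom-up precision), repeated updates
# pull the expectation onto the observed value.
x = 0.0
for _ in range(20):
    x = predictive_coding_step(x, 1.0, pi_bottom=4.0, pi_top=1.0)
```

Lowering `pi_bottom` relative to `pi_top` makes the expectation sticky, which is one schematic route to expectation-dominated percepts such as placebo and nocebo effects.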


2016
Vol 39
Author(s):
Vladimir Miskovic
Karl Kuntzelman
Junichi Chikazoe
Adam K. Anderson

Abstract. Contemporary neuroscience suggests that perception is perhaps best understood as a dynamically iterative process that does not honor cleanly segregated “bottom-up” or “top-down” streams. We argue that there is substantial empirical support for the idea that affective influences infiltrate the earliest reaches of sensory processing and even that primitive internal affective dimensions (e.g., goodness-to-badness) are represented alongside physical dimensions of the external world.


2015
Vol 28 (1)
pp. 237-240
Author(s):
Gary Chartier
Keyword(s):
Top Down

Peter T. Leeson's Anarchy Unbound offers an interesting collection of historical and theoretical arguments for the view that bottom-up social order is perfectly possible and at least sometimes preferable to order imposed from the top down.


Author(s):
Edyta Sasin
Daryl Fougnie

Abstract. Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants’ memory for objects seen during visual search. We compared implicit memory for two types of objects—related-context nontargets that grabbed attention because they matched the target-defining feature (i.e., color; top-down attention) and salient distractors that captured attention only because they were perceptually distracting (bottom-up attention). In Experiment 1, the salient distractor flickered, while in Experiment 2, the luminance of the salient distractor was alternated. Critically, salient and related-context nontargets produced equivalent attentional capture, yet related-context nontargets were remembered far better than salient distractors (and salient distractors were not remembered better than unrelated distractors). These results suggest that LTM depends not only on the amount of attention but also on the type of attention. Specifically, top-down attention is more effective than bottom-up attention in promoting the formation of memory traces.


2019
Author(s):
Christopher A. Brown
Ingrid Scholtes
Nicholas Shenker
Michael C. Lee

Abstract. In Complex Regional Pain Syndrome (CRPS), tactile sensory deficits have motivated the therapeutic use of sensory discrimination training. However, the hierarchical organisation of the brain is such that low-level sensory processing can be dynamically influenced by higher-level knowledge, e.g. knowledge learnt from statistical regularities in the environment. It is unknown whether the learning of such statistical regularities is impaired in CRPS. Here, we employed a hierarchical Bayesian model of predictive coding to investigate statistical learning of tactile-spatial predictions in CRPS. Using a sensory change-detection task, we manipulated bottom-up (spatial displacement of a tactile stimulus) and top-down (probabilistic structure of occurrence) factors to estimate hierarchies of prediction and prediction error signals, as well as their respective precisions (reliabilities). Behavioural responses to spatial changes were influenced by both the magnitude of spatial displacement (bottom-up) and learnt probabilities of change (top-down). The Bayesian model revealed that patients’ predictions of spatial displacements were less precise, deviating further from statistical optimality than those of healthy controls. This imprecision was also less context-dependent, i.e. more enduring across changes in probabilistic context and less finely tuned to the statistics of the environment. It resulted in greater precision being assigned to prediction errors, so that predictions were driven more by momentary spatial changes and less by the history of spatial changes. These results suggest inefficiencies in higher-order statistical learning in CRPS. This may have implications for therapies based on sensory re-training, whose effects may be more short-lived if their success depends on higher-order learning.
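The reported pattern (overly precise prediction errors causing predictions to track momentary changes rather than their history) can be illustrated with a minimal precision-weighted belief update. The parameter values and function names here are invented for illustration and are much simpler than the authors' hierarchical Bayesian model.

```python
def update_belief(belief, outcome, error_precision):
    """Belief about the probability of a spatial change, updated by a
    prediction error whose weight grows with the precision placed on errors."""
    lr = error_precision / (error_precision + 1.0)  # higher precision -> larger step
    return belief + lr * (outcome - belief)

def run(outcomes, error_precision, belief=0.5):
    history = []
    for o in outcomes:
        belief = update_belief(belief, o, error_precision)
        history.append(belief)
    return history

# A stable environment: a change occurs on 20% of trials (1 = change).
trials = [1, 0, 0, 0, 0] * 10
smooth = run(trials, error_precision=0.25)  # imprecise errors: slow, stable beliefs
jumpy  = run(trials, error_precision=4.0)   # precise errors: belief chases each trial
```

With high precision on prediction errors, the belief swings trial by trial instead of settling near the true change rate, mirroring the patients' reduced sensitivity to environmental statistics.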


Author(s):
Leonard Simms
Trevor F. Williams
Ericka Nus Simms

We review the current state of the science with respect to the assessment of the Five Factor Model (FFM), a robust structural model of personality that emerged from two distinct traditions: The lexical and questionnaire traditions. The lexical tradition is predicated on the hypothesis that important individual differences in personality are encoded as single words in language. This bottom-up tradition has suggested that five broad factors account for much of the personality variation observed among individuals: Extraversion (or Surgency), Agreeableness, Conscientiousness (or Dependability), Neuroticism (vs. Emotional Stability), and Openness to Experience (or Intellect/Culture). The questionnaire tradition emphasizes the measurement of similar constructs, largely through top-down development of measures. We examine the strengths and limitations associated with existing measures of the FFM and related models, focusing on measures rooted in the lexical and questionnaire traditions. We also consider maladaptive FFM measures and conclude by analyzing important issues in the FFM assessment literature.


2021
Vol 12 (1)
Author(s):
Stephan Thaler
Julija Zavadlav

Abstract. In molecular dynamics (MD), neural network (NN) potentials trained bottom-up on quantum mechanical data have recently seen tremendous success. Top-down approaches that learn NN potentials directly from experimental data have received less attention, typically facing numerical and computational challenges when backpropagating through MD simulations. We present the Differentiable Trajectory Reweighting (DiffTRe) method, which bypasses differentiation through the MD simulation for time-independent observables. Leveraging thermodynamic perturbation theory, we avoid exploding gradients and achieve around two orders of magnitude speed-up in gradient computation for top-down learning. We show the effectiveness of DiffTRe in learning NN potentials for an atomistic model of diamond and a coarse-grained model of water based on diverse experimental observables, including thermodynamic, structural and mechanical properties. Importantly, DiffTRe also generalizes bottom-up structural coarse-graining methods such as iterative Boltzmann inversion to arbitrary potentials. The presented method constitutes an important milestone towards enriching NN potentials with experimental data, particularly when accurate bottom-up data is unavailable.
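The reweighting idea underlying such trajectory-reweighting schemes can be sketched with Zwanzig-style thermodynamic perturbation: samples drawn under a reference potential are reweighted by exp(-beta * ΔU) to estimate an observable's average under a perturbed potential, without re-running the simulation. This is a toy, non-differentiable sketch with invented energies, not the DiffTRe implementation.

```python
import math

def reweighted_average(observables, u_ref, u_new, beta=1.0):
    """Estimate <O> under a perturbed potential from reference-ensemble samples,
    with weights w_i proportional to exp(-beta * (U_new_i - U_ref_i))."""
    log_w = [-beta * (un - ur) for un, ur in zip(u_new, u_ref)]
    m = max(log_w)                          # subtract the max to stabilise exp()
    w = [math.exp(lw - m) for lw in log_w]
    z = sum(w)
    return sum(wi * o for wi, o in zip(w, observables)) / z

# Toy check: identical potentials give uniform weights, i.e. the plain mean.
obs = [1.0, 2.0, 3.0, 4.0]
u = [0.3, 0.1, 0.4, 0.2]
same = reweighted_average(obs, u, u)
# Raising the energy of all samples except the second shifts weight onto it,
# pulling the estimate toward obs[1].
shifted = reweighted_average(obs, u, [u[0] + 1.0, u[1], u[2] + 1.0, u[3] + 1.0])
```

Because the weights are smooth functions of the potential's parameters, gradients of such reweighted averages can be taken without differentiating through the trajectory itself, which is the key to avoiding exploding gradients.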

