The neural representation of dynamic real-world auditory/visual events

2010 ◽  
Vol 8 (6) ◽  
pp. 1054-1054
Author(s):  
J. Vettel ◽  
J. Green ◽  
L. Heller ◽  
M. Tarr

2015 ◽  
Vol 15 (12) ◽  
pp. 8
Author(s):  
Marius Catalin Iordan ◽  
Michelle Greene ◽  
Diane Beck ◽  
Li Fei-Fei

Author(s):  
Roman Bresson ◽  
Johanne Cohen ◽  
Eyke Hüllermeier ◽  
Christophe Labreuche ◽  
Michèle Sebag

Multi-Criteria Decision Making (MCDM) aims at modelling expert preferences and assisting decision makers in identifying the options that best accommodate the expert criteria. One widely used MCDM model, the Choquet integral, is popular in real-world applications because it captures interactions between criteria while retaining interpretability. Aimed at better scalability and modularity, hierarchical Choquet integrals involve intermediate aggregations of the interacting criteria, at the cost of a more complex elicitation. The paper presents a machine-learning approach for the automatic identification of hierarchical MCDM models, composed of 2-additive Choquet integral aggregators and of marginal utility functions on the raw features, learned from data reflecting expert preferences. The proposed NEUR-HCI framework relies on a specific neural architecture that enforces the Choquet model constraints by design and supports end-to-end training. The empirical validation of NEUR-HCI on real-world and artificial benchmarks demonstrates the merits of the approach compared to state-of-the-art baselines.
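
For illustration, here is a minimal numpy sketch of a single 2-additive Choquet integral aggregator in its Möbius representation, the basic building block the abstract refers to; the function names, constraint checks, and example weights are assumptions made for this sketch, not the NEUR-HCI implementation.

```python
import numpy as np

def choquet_2additive(x, m_single, m_pairs):
    """2-additive Choquet integral in Möbius representation.

    C(x) = sum_i m_single[i] * x[i] + sum_{i<j} m_pairs[i, j] * min(x[i], x[j])

    x        : marginal utilities of the n criteria, assumed to lie in [0, 1]
    m_single : Möbius masses of the singletons, shape (n,)
    m_pairs  : Möbius masses of the pairs, upper-triangular, shape (n, n)
    """
    n = len(x)
    iu = np.triu_indices(n, k=1)
    pair_mins = np.minimum.outer(x, x)[iu]          # min(x_i, x_j) for each pair i < j
    return float(m_single @ x + m_pairs[iu] @ pair_mins)

def is_valid_capacity(m_single, m_pairs, tol=1e-9):
    """Normalisation and monotonicity checks for a 2-additive capacity."""
    n = len(m_single)
    iu = np.triu_indices(n, k=1)
    normalised = abs(m_single.sum() + m_pairs[iu].sum() - 1.0) < tol
    # Monotonicity: each singleton mass must outweigh the negative pair masses it appears in.
    pair_sym = m_pairs + m_pairs.T
    monotone = np.all(m_single + np.minimum(pair_sym, 0.0).sum(axis=1) >= -tol)
    return normalised and monotone

# Toy example with three criteria: criteria 0 and 1 interact positively.
m_single = np.array([0.3, 0.3, 0.2])
m_pairs = np.zeros((3, 3))
m_pairs[0, 1] = 0.2
x = np.array([0.8, 0.6, 0.9])
assert is_valid_capacity(m_single, m_pairs)
print(choquet_2additive(x, m_single, m_pairs))      # overall score of this option
```

In a hierarchical model, the output of such an aggregator would itself feed a higher-level aggregator; enforcing the normalisation and monotonicity constraints by construction is what allows end-to-end training.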


2012 ◽  
Author(s):  
Troy A. Smith ◽  
William A. Cunningham ◽  
Simon Dennis ◽  
Per B. Sederberg

2020 ◽  
Author(s):  
Michele Fornaciai ◽  
Massimiliano Di Luca

Causality poses clear constraints on the timing of sensory signals produced by events: sound travels more slowly than light, so auditory stimulation lags visual stimulation. Previous studies show that implied causality between otherwise unrelated events can change the tolerance of simultaneity judgements for audio-visual asynchronies. Here, we tested whether apparent causality between audio-visual events may also affect their perceived temporal order. To this aim, we used a disambiguated stream-bounce display, with stimuli either bouncing off or streaming past each other. These two possibilities were accompanied by a sound played around the time of contact between the objects, which, depending on the condition, could be perceived as causally related to the visual event. Participants reported whether the visual contact occurred before or after the sound. Our results show that when the audio-visual stimuli are consistent with a causal interpretation (i.e., the bounce caused the sound), their perceived temporal order is systematically biased. Namely, stimulus dynamics consistent with a causal relation induce a perceptual delay in the audio component, even when the sound was actually presented first. We thus conclude that causality can systematically bias the perceived temporal order of events, possibly due to expectations based on the dynamics of events in the real world.
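
As a rough illustration of how such a temporal-order bias can be quantified, the sketch below fits a cumulative-Gaussian psychometric function to simulated temporal order judgements and reads off the point of subjective simultaneity (PSS); the data, function names, and fitting choices are assumptions for illustration, not the analysis reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def toj_psychometric(soa, pss, jnd):
    """Probability of reporting 'sound came second' as a function of the
    audio-visual SOA (ms, positive = sound after visual contact)."""
    return norm.cdf(soa, loc=pss, scale=jnd)

# Hypothetical data: proportion of 'sound second' responses at each SOA.
soa = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
p_sound_second = np.array([0.08, 0.20, 0.35, 0.55, 0.75, 0.90, 0.97])

params, _ = curve_fit(toj_psychometric, soa, p_sound_second, p0=[0.0, 50.0])
pss, jnd = params
# A negative PSS means the sound must be played early to be judged simultaneous,
# i.e. the audio component is perceptually delayed, as described for the causal condition.
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```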


2017 ◽  
Author(s):  
Kamesh Krishnamurthy ◽  
Ann M. Hermundstad ◽  
Thierry Mora ◽  
Aleksandra M. Walczak ◽  
Vijay Balasubramanian

Animals smelling in the real world use a small number of receptors to sense a vast number of natural molecular mixtures, and proceed to learn arbitrary associations between odors and valences. Here, we propose a new interpretation of how the architecture of olfactory circuits is adapted to meet these immense complementary challenges. First, the diffuse binding of receptors to many molecules compresses a vast odor space into a tiny receptor space, while preserving similarity. Next, lateral interactions “densify” and decorrelate the response, enhancing robustness to noise. Finally, disordered projections from the periphery to the central brain reconfigure the densely packed information into a format suitable for flexible learning of associations and valences. We test our theory empirically using data from Drosophila. Our theory suggests that the neural processing of olfactory information differs from the other senses in its fundamental use of disorder.
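
A toy numpy sketch of the three processing stages the abstract describes (compressive, diffuse receptor binding; decorrelating lateral interactions; disordered expansion followed by a learned readout of valence); the matrix shapes, the normalization rule, and the perceptron readout are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_odorants, n_receptors, n_neurons = 1000, 50, 2000   # illustrative sizes

# 1) Diffuse, broad receptor binding compresses a high-dimensional odor mixture
#    into a small receptor code while roughly preserving similarity.
binding = rng.exponential(scale=1.0, size=(n_receptors, n_odorants))
def receptor_response(mixture):
    return np.tanh(binding @ mixture)                  # saturating receptor code

# 2) Lateral interactions (divisive-normalization-like) densify and decorrelate.
def lateral_normalize(r):
    return r / (1.0 + r.mean())

# 3) Disordered (random, sparse) projection to a large expansion layer,
#    followed by a simple learned readout of valence.
projection = (rng.random((n_neurons, n_receptors)) < 0.1).astype(float)
def expansion(r):
    h = projection @ r
    theta = np.quantile(h, 0.9)                        # keep ~10% most active cells
    return (h > theta).astype(float)

# Learn arbitrary odor-valence associations with a perceptron on the sparse code.
mixtures = rng.random((200, n_odorants)) * (rng.random((200, n_odorants)) < 0.05)
valence = rng.integers(0, 2, size=200)                 # arbitrary labels
codes = np.array([expansion(lateral_normalize(receptor_response(m))) for m in mixtures])
w = np.zeros(n_neurons)
for _ in range(20):
    for c, y in zip(codes, valence):
        y_hat = float(w @ c > 0)
        w += (y - y_hat) * c                           # perceptron update
accuracy = np.mean([(w @ c > 0) == y for c, y in zip(codes, valence)])
print(f"training accuracy: {accuracy:.2f}")
```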


2019 ◽  
Author(s):  
Bobby Stojanoski ◽  
Stephen M. Emrich ◽  
Rhodri Cusack

We rely upon visual short-term memory (VSTM) for continued access to perceptual information that is no longer available. Despite the complexity of our visual environments, the majority of research on VSTM has focused on memory for lower-level perceptual features. Studies using more naturalistic stimuli have found that recognizable objects are remembered better than unrecognizable objects. What remains unclear, however, is how semantic information changes brain representations in order to facilitate this improvement in VSTM for real-world objects. To address this question, we used a continuous report paradigm to assess VSTM (precision and guessing rate) while participants underwent functional magnetic resonance imaging (fMRI) to measure the underlying neural representation of 96 objects from 4 animate and 4 inanimate categories. To isolate semantic content, we used a novel image generation method that parametrically warps images until they are no longer recognizable while preserving basic visual properties. We found that intact objects were remembered with greater precision and a lower guessing rate than unrecognizable objects (this also emerged when objects were grouped by category and animacy). Representational similarity analysis of the ventral visual stream found evidence of category and animacy information in anterior visual areas during encoding, but not during maintenance. These results suggest that the effect of semantic information during encoding in ventral visual areas boosts visual short-term memory for real-world objects.
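
The precision and guessing-rate measures mentioned above are typically obtained from a mixture-model fit to continuous-report errors. Below is a minimal sketch of such a fit (a von Mises memory component plus a uniform guessing component), with simulated errors and a simple grid-search maximum-likelihood fit; these are illustrative assumptions rather than the authors' analysis pipeline.

```python
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(1)

# Simulated continuous-report errors (radians): most trials come from memory
# with some precision, the rest are random guesses spanning the response wheel.
true_guess_rate, true_kappa = 0.25, 8.0
n_trials = 300
is_guess = rng.random(n_trials) < true_guess_rate
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n_trials),
                  vonmises(true_kappa).rvs(size=n_trials, random_state=rng))

def neg_log_likelihood(errors, guess_rate, kappa):
    memory = (1.0 - guess_rate) * vonmises(kappa).pdf(errors)
    guessing = guess_rate / (2.0 * np.pi)
    return -np.sum(np.log(memory + guessing))

# Simple grid-search maximum likelihood over guess rate and precision (kappa).
guess_grid = np.linspace(0.0, 0.9, 91)
kappa_grid = np.linspace(0.5, 30.0, 60)
nll = np.array([[neg_log_likelihood(errors, g, k) for k in kappa_grid]
                for g in guess_grid])
gi, ki = np.unravel_index(nll.argmin(), nll.shape)
print(f"estimated guess rate {guess_grid[gi]:.2f}, precision kappa {kappa_grid[ki]:.1f}")
```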


Author(s):  
Elizabeth Musz ◽  
Sharon L. Thompson-Schill

Semantic memory is composed of one’s accumulated world knowledge. This includes stored factual information about real-world objects and animals, which enables one to recognize and interact with the things in one’s environment. How is this semantic information organized, and where is it stored in the brain? Newly developed functional magnetic resonance imaging (fMRI) methods have provided exciting and innovative approaches to studying these questions. In particular, several recent fMRI investigations have examined the neural bases of semantic knowledge using similarity-based approaches. In similarity models, data from direct (i.e., neural) and indirect (i.e., subjective, psychological) measurements are interpreted as proximity data that provide information about the relationships among object concepts in an abstract, high-dimensional space. Concepts are encoded as points in this conceptual space, such that the semantic relatedness between two concepts is determined by their distance from one another. Using this approach, neuroimaging studies have offered compelling insights into several open questions about how object concepts are represented in the brain. This chapter briefly describes how similarity spaces are computed from both behavioral data and spatially distributed fMRI activity patterns. It then reviews empirical reports that relate observed neural similarity spaces to various models of semantic similarity. The chapter examines how these methods have both shaped and informed our current understanding of the neural representation of conceptual information about real-world objects.
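
As a concrete illustration of the similarity-based approach described here, the sketch below builds a neural representational dissimilarity matrix (RDM) from voxel patterns and compares it with a model RDM derived from a (placeholder) semantic space via a rank correlation; the array shapes and data are hypothetical, and the specific choices (correlation distance, Spearman comparison) are common defaults rather than those of any particular study reviewed in the chapter.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_concepts, n_voxels = 40, 500                      # hypothetical sizes

# One multi-voxel activity pattern per object concept (e.g., beta estimates).
neural_patterns = rng.normal(size=(n_concepts, n_voxels))

# Neural RDM: pairwise correlation distance between concept patterns.
neural_rdm = pdist(neural_patterns, metric="correlation")

# Model RDM from behavioral data (here: random placeholder positions of
# concepts in a low-dimensional "semantic space").
semantic_space = rng.normal(size=(n_concepts, 5))
model_rdm = pdist(semantic_space, metric="euclidean")

# Compare the two similarity spaces on their condensed (upper-triangle) vectors.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho = {rho:.2f}, p = {p:.3f}")

# squareform() recovers the full n_concepts x n_concepts dissimilarity matrix.
full_neural_rdm = squareform(neural_rdm)
```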


2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Dirk B Walther

Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? Here we show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA), and other visual brain regions. Disruption of junctions, but not of orientations, led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis reveals the reliance of visually active brain regions on different sets of contour properties: statistics of contour length and curvature dominate neural representations of scene categories in early visual areas, whereas contour junctions dominate in high-level scene-selective brain regions.
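
For readers unfamiliar with this kind of decoding analysis, the sketch below shows a generic cross-validated classification of scene category from ROI voxel patterns, followed by a comparison of the resulting confusion (error) pattern with a behavioral confusion matrix; the simulated data, the ROI, and the classifier choice (logistic regression via scikit-learn) are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_categories, trials_per_cat, n_voxels = 6, 20, 300   # hypothetical sizes

# Simulated voxel patterns from a scene-selective ROI (e.g., PPA), one per trial.
labels = np.repeat(np.arange(n_categories), trials_per_cat)
signal = rng.normal(size=(n_categories, n_voxels))[labels]        # category signal
patterns = signal + rng.normal(scale=2.0, size=(labels.size, n_voxels))

# Cross-validated decoding of scene category from the ROI patterns.
clf = LogisticRegression(max_iter=1000)
predicted = cross_val_predict(clf, patterns, labels, cv=5)
accuracy = np.mean(predicted == labels)
neural_confusions = confusion_matrix(labels, predicted, normalize="true")

# Compare off-diagonal error patterns with a (placeholder) behavioral confusion matrix.
behavioral_confusions = rng.dirichlet(np.ones(n_categories), size=n_categories)
off_diag = ~np.eye(n_categories, dtype=bool)
rho, _ = spearmanr(neural_confusions[off_diag], behavioral_confusions[off_diag])
print(f"decoding accuracy: {accuracy:.2f}, error-pattern correlation: rho = {rho:.2f}")
```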


2007 ◽  
Vol 10 (4) ◽  
pp. 423-425 ◽  
Author(s):  
David Burr ◽  
Arianna Tozzi ◽  
M Concetta Morrone
