Object-location binding: Does spatial location influence high-level judgments of face images?

2016 ◽  
Vol 16 (12) ◽  
pp. 409 ◽  
Author(s):  
Michela Paradiso ◽  
Anna Shafer-Skelton ◽  
Aleix Martinez ◽  
Julie Golomb
2021 ◽  
Author(s):  
Yuri Markov ◽  
Igor Utochkin

Visual working memory (VWM) is prone to interference from stored items competing for its limited capacity. These competitive interactions can arise from different sources. For example, one such source is poor item distinctiveness, causing a failure to discriminate between items sharing common features. Another source of interference is imperfect binding, the problem of determining which of the remembered features belonged to which object, or which item was in which location. In two experiments, we studied how the conceptual distinctiveness of real-world objects (i.e., whether the objects belong to the same or different basic categories) affects VWM for objects and object-location binding. In Experiment 1, we found that distinctiveness did not affect memory for object identities or for locations, but low-distinctive objects were more frequently reported at “swapped” locations that originally went with different objects. In Experiment 2, we found evidence that the effect of distinctiveness on object-location swaps was due to the use of categorical information for binding. In particular, observers swapped the location of a tested object with that of another object from the same category more frequently than with that of any object from another category. This suggests that observers can use some coarse category-location information when objects are conceptually distinct. Taken together, our findings suggest that object distinction and object-location binding act upon different components of VWM.
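The swap analysis described above can be made concrete with a small sketch. This is hypothetical scoring logic, not the authors' analysis code: given the true object-location pairings (with each object's category) and a reported location for a tested object, classify the response as correct, a same-category swap, or an other-category swap.

```python
# Hypothetical scoring of one trial in an object-location binding task.
# Names and data structures are illustrative only.

def classify_report(tested, reported_location, layout):
    """layout: dict mapping object name -> (location, category)."""
    true_location, tested_category = layout[tested]
    if reported_location == true_location:
        return "correct"
    # If the reported location belongs to another studied object, it is a swap.
    for obj, (loc, cat) in layout.items():
        if obj != tested and loc == reported_location:
            return ("same-category swap" if cat == tested_category
                    else "other-category swap")
    return "non-swap error"

layout = {
    "mug":  ((0, 0), "kitchenware"),
    "cup":  ((1, 0), "kitchenware"),
    "shoe": ((0, 1), "clothing"),
}
print(classify_report("mug", (1, 0), layout))  # prints "same-category swap"
```

Experiment 2's result corresponds to same-category swaps occurring more often than other-category swaps under this kind of scoring.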


2022 ◽  
Author(s):  
Sami Ryan Yousif

Mental representations are the essence of cognition. Yet, to understand how the mind works, we must understand not just the content of mental representations (i.e., what information is stored), but also the format of those representations (i.e., how that information is stored). But what does it mean for representations to be formatted? How many formats are there? Is it possible that the mind represents some pieces of information in multiple formats at once? To address these questions, I discuss a ‘case study’ of representational format: the representation of spatial location. I review work (a) across species and across development, (b) across spatial scales, and (c) across levels of analysis (e.g., high-level cognitive format vs. low-level neural format). Along the way, I discuss the possibility that the same information may be organized in multiple formats simultaneously (e.g., that locations may be represented in both Cartesian and polar coordinates). Ultimately, I argue that seemingly ‘redundant’ formats may support the flexible spatial behavior observed in humans, and that we should approach the study of all mental representations with this possibility in mind.
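The Cartesian-versus-polar possibility raised above can be illustrated numerically. The sketch below (illustrative only, not tied to any model in the review) shows that the two formats carry the same locational information and round-trip losslessly; what differs is how that information is organized.

```python
import math

# The same 2D location in two 'formats': Cartesian (x, y) and polar (r, theta).
# Same information content; different organization.

def to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

x, y = 3.0, 4.0
r, theta = to_polar(x, y)        # r = 5.0, theta = atan2(4, 3)
x2, y2 = to_cartesian(r, theta)  # round-trips back to (3.0, 4.0)
print(round(r, 3), round(x2, 3), round(y2, 3))
```

Note that operations differ in cost across formats (e.g., rotation is a single addition to theta in polar form but a matrix multiply in Cartesian form), which is one way "redundant" formats could each earn their keep.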


Hippocampus ◽  
2019 ◽  
Vol 29 (10) ◽  
pp. 971-979 ◽  
Author(s):  
Veronica Muffato ◽  
Christopher Hilton ◽  
Chiara Meneghetti ◽  
Rossana De Beni ◽  
Jan M. Wiener

PLoS ONE ◽  
2012 ◽  
Vol 7 (10) ◽  
pp. e48214 ◽  
Author(s):  
Yoni Pertzov ◽  
Mia Yuan Dong ◽  
Muy-Cheng Peich ◽  
Masud Husain

Author(s):  
Cesar G. Pachon-Suescun ◽  
Carlos J. Enciso-Aragon ◽  
Robinson Jimenez-Moreno

In the field of robotics, it is essential to know the work area in which the agent will operate; for that reason, different methods of mapping and spatial location have been developed for different applications. In this article, a machine vision algorithm is proposed that identifies objects of interest within a work area and determines their polar coordinates relative to the observer, applicable either with a fixed camera or on a mobile agent such as the one presented in this document. The developed algorithm was evaluated in two situations, determining the positions of six objects in total around the mobile agent. These results were compared with the real position of each object, reaching a high level of accuracy, with an average error of 1.3271% in distance and 2.8998% in angle.
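The geometric step described above, expressing an object's position as polar coordinates relative to the observer and scoring the estimate against ground truth as a percentage error, can be sketched as follows. The vision pipeline that produces the estimated offset is not shown, and all names and numbers are illustrative, not taken from the article.

```python
import math

def to_polar(dx, dy):
    """Polar coordinates (distance, angle in degrees) of an offset (dx, dy)
    from the observer to the object."""
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

def percent_error(estimated, actual):
    """Absolute percentage error of an estimate against ground truth."""
    return abs(estimated - actual) / abs(actual) * 100.0

est_dist, est_angle = to_polar(0.99, 1.02)     # estimated offset (metres)
true_dist, true_angle = to_polar(1.00, 1.00)   # ground-truth offset
print(round(percent_error(est_dist, true_dist), 3),
      round(percent_error(est_angle, true_angle), 3))
```

Averaging such per-object errors over all six objects would yield summary figures of the kind the abstract reports for distance and angle.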


2019 ◽  
Author(s):  
Aaron Blaisdell

We studied object-location binding in pigeons using a sequence learning procedure. A sequence of four objects was presented, one at a time at one of four locations on a touchscreen. A single peck at the object ended the trial, and food reinforcement was delivered intermittently. In Experiment 1, a between-subjects design was used to present objects, locations, or both in a regular sequence or randomly. Response time costs on nonreinforced probe tests on which object order, location order, or both were disrupted revealed sequence learning effects. Pigeons encoded location order when it was consistent, but not object order when it alone was consistent. When both were consistent, pigeons encoded both, and also showed evidence of object-location binding. In Experiment 2, two groups of pigeons received training on sequences where the same object always appeared at the same location. For some pigeons a consistent sequence was used while for others sequence order was randomized. Only when sequence order was consistent was object-location binding found. These experiments are the first demonstrations of strong and lasting feature binding in pigeons.


2005 ◽  
Vol 35 (6) ◽  
pp. 949-963 ◽  
Author(s):  
Richard M. Gorman ◽  
D. Murray Hicks

Modern measurement techniques such as aerial laser scanning allow for rapid determination of the spatial variation of sea surface elevation. Wave fields obtained from such data show spatial inhomogeneity associated with the presence of wave groups. A method based on two-dimensional directional wavelet analysis is described by which such inhomogeneity can be characterized in the spatial and wavenumber domains. The directional wavelet method has been applied to aerial laser scanning measurements of nearshore wave conditions off the east coast of New Zealand’s South Island. A high level of spatial variability was observed, with evidence of ensembles of wave-group envelopes of quasi-Gaussian form. These envelopes occur, with variations in spatial location, across a range of wavelet scales and directions.
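A minimal sketch of the kind of analysing function a two-dimensional directional wavelet analysis uses: a plane wave of wavenumber k travelling in direction theta, modulated by a Gaussian envelope (a Morlet-type wavelet). The parameter choices here are illustrative and are not the paper's; in the actual method this function would be correlated with the measured surface-elevation field over a range of scales and directions.

```python
import math

def morlet2d(x, y, k=6.0, theta=0.0, scale=1.0):
    """Real part of a 2D directional Morlet-type wavelet evaluated at (x, y)."""
    xs, ys = x / scale, y / scale
    # Project onto the propagation direction theta so oscillation runs along it.
    u = xs * math.cos(theta) + ys * math.sin(theta)
    envelope = math.exp(-(xs * xs + ys * ys) / 2.0)  # quasi-Gaussian envelope
    return envelope * math.cos(k * u)

# Maximum at the origin; the Gaussian envelope suppresses values away from it.
print(morlet2d(0.0, 0.0), round(morlet2d(2.0, 0.0), 6))
```

Varying `scale` and `theta` yields the family of wavelets across which the abstract reports wave-group envelopes appearing at a range of scales and directions.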


Author(s):  
Decheng Liu ◽  
Nannan Wang ◽  
Chunlei Peng ◽  
Jie Li ◽  
Xinbo Gao

Heterogeneous face recognition (HFR) is a challenging problem in face recognition, owing to large differences in texture and spatial structure between face images. Unlike conventional face recognition in homogeneous environments, in practice many face images are captured from different sources (including different sensors or different mechanisms). Motivated by the human cognitive mechanism, we utilize explicit invariant semantic information (face attributes) to help bridge the gap between modalities. Existing related face recognition methods mostly regard attributes as a high-level feature integrated with other engineered features to enhance recognition performance, ignoring the inherent relationship between face attributes and identities. In this paper, we propose a novel deep attribute-guided representation based heterogeneous face recognition method (DAG-HFR) that does not require manually labeled attributes. Deep convolutional networks are employed to map face images from heterogeneous scenarios directly to a compact common space in which distances reflect the similarities of pairs. An attribute-guided triplet loss (AGTL) is designed to train an end-to-end HFR network that can effectively eliminate the defects of incorrectly detected attributes. Extensive experiments on multiple heterogeneous scenarios (composite sketches, resident ID cards) demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods.
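The abstract does not specify the attribute-guided term of AGTL, but the standard triplet loss it builds on can be sketched: pull an anchor embedding towards a positive (same identity, other modality) and push it away from a negative (different identity) by at least a margin. Plain Python lists stand in for CNN embeddings; names and values are illustrative only.

```python
# Minimal sketch of the standard triplet loss underlying margin-based
# embedding methods such as AGTL (the attribute-guided term is not
# reproduced here, as the abstract does not specify it).

def squared_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: zero once the negative is farther than the positive
    by at least `margin` in squared distance."""
    return max(0.0, squared_dist(anchor, positive)
               - squared_dist(anchor, negative) + margin)

anchor   = [0.0, 1.0]   # e.g., embedding of a composite sketch
positive = [0.1, 0.9]   # photo of the same identity
negative = [1.0, 0.0]   # photo of a different identity
print(triplet_loss(anchor, positive, negative))  # prints 0.0: margin satisfied
```

During training, gradients of this loss shape the common space so that cross-modality pairs of the same identity end up close together, which is what "distances reflect the similarities of pairs" requires.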

