Visual Recognition as Controlled Search of Complicated Fragments

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 89-89
Author(s):  
V M Krol

We tested the hypothesis that object recognition is an active search for complicated fragments in the visual image. This search is performed according to criteria based on invariant descriptions of an object's perceptual class. The basic strategy is to activate these descriptions during parallel search: ‘upper’ segments search for appropriate fragments of the picture, and ‘subordinate’ segments are activated at the request of the ‘upper’ segments. Description segments include three types of records: integral (whole) characteristics of a fragment; characteristics of fragments that are members of this fragment; and characteristics of relations between the fragments. This structure of perceptual description permits parallel analysis of the visual scene by different segments under the ‘autonomy’ principle, and permits recognition from incomplete sets of segments under the ‘quorum’ principle. Different ways of forming connections between segment records may be considered ‘thinking’ components of visual perception. The main points of our model follow from the results of our tachistoscopic experiments, in which we measured thresholds for the recognition of test figures at different levels of complexity: parallel lines and strips, geometric figures, schematic faces, textures, etc. We found that the stages of the recognition process correspond to the types of operations described in our model. These results raise the possibility that the properties of the neurons involved in visual search might be identified.
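The three record types and the ‘quorum’ principle can be made concrete with a toy data structure. This is a purely illustrative sketch — the class names, feature sets, and threshold are our own assumptions, not the authors' implementation (relation records are declared but left unused here):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    integral: set                                  # integral (whole) characteristics
    members: list = field(default_factory=list)    # 'subordinate' member segments
    relations: list = field(default_factory=list)  # relations between members (unused here)

def recognize(segment, observed, quorum=0.6):
    """'Quorum' principle: accept a fragment when enough member segments
    find their features in the observed set, even if the set is incomplete."""
    if not segment.members:
        return segment.integral <= observed
    hits = sum(recognize(m, observed, quorum) for m in segment.members)
    return hits / len(segment.members) >= quorum
```

With a hypothetical ‘face’ description, two of three matching members suffice under a 0.6 quorum, while a single member does not.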

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Matan Mazor ◽  
Rani Moran ◽  
Stephen M Fleming

People have better metacognitive sensitivity for decisions about the presence compared to the absence of objects. However, it is not only objects themselves that can be present or absent, but also parts of objects and other visual features. Asymmetries in visual search indicate that a disadvantage for representing absence may operate at these levels as well. Furthermore, a processing advantage for surprising signals suggests that a presence/absence asymmetry may be explained by absence being passively represented as a default state, and presence as a default-violating surprise. It is unknown whether the metacognitive asymmetry for judgments about presence and absence extends to these different levels of representation (object, feature, and default violation). To address this question and test for a link between the representation of absence and default reasoning more generally, here we measure metacognitive sensitivity for discrimination judgments between stimuli that are identical except for the presence or absence of a distinguishing feature, and for stimuli that differ in their compliance with an expected default state.
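Metacognitive sensitivity in such tasks is commonly quantified by how well confidence ratings discriminate correct from incorrect responses, for example via the area under the type-2 ROC curve (model-based measures such as meta-d′ also exist). A minimal sketch of that generic measure, not necessarily the authors' exact analysis:

```python
def type2_auroc(confidence, correct):
    """Area under the type-2 ROC: the probability that a randomly chosen
    correct trial carries higher confidence than a randomly chosen
    incorrect one (ties count half). 0.5 = no sensitivity, 1.0 = perfect."""
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))
```

A presence/absence asymmetry would then appear as a higher type-2 AUROC for "present" responses than for "absent" responses at matched type-1 accuracy.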



2020 ◽  
Author(s):  
Bahareh Jozranjbar ◽  
Arni Kristjansson ◽  
Heida Maria Sigurdardottir

While dyslexia is typically described as a phonological deficit, recent evidence suggests that ventral-stream regions, important for visual categorization and object recognition, are hypoactive in dyslexic readers, who might accordingly show visual recognition deficits. By manipulating featural and configural information of faces and houses, we investigated whether dyslexic readers are disadvantaged at recognizing certain object classes or at utilizing particular visual processing mechanisms. Dyslexic readers found it harder to recognize houses, suggesting that visual problems in dyslexia are not completely domain-specific. Mean accuracy for faces was equivalent in the two groups, compatible with domain specificity in face processing. While face recognition abilities correlated with reading ability, lower house accuracy was nonetheless related to reading difficulties even when accuracy for faces was held constant, suggesting a specific relationship between visual word recognition and the recognition of non-face objects. Representational similarity analyses (RSA) revealed that featural and configural processes were clearly separable in typical readers, whereas dyslexic readers appeared to rely on a single process. This held for both faces and houses and was not restricted to particular visual categories. We speculate that reading deficits in some dyslexic readers reflect their reliance on a single process for object recognition.
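RSA compares processing strategies through the correlation structure of response patterns across conditions. A self-contained sketch of the general approach — our simplification, not the authors' exact pipeline:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between
    each pair of condition response patterns."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rsa_similarity(patterns_a, patterns_b):
    """Correlate the upper triangles of two RDMs: high values mean the
    two sets of patterns (e.g. featural vs configural manipulations)
    share representational structure."""
    ra, rb = rdm(patterns_a), rdm(patterns_b)
    n = len(ra)
    ut = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return pearson([ra[i][j] for i, j in ut], [rb[i][j] for i, j in ut])
```

On this logic, separable processes yield weakly correlated RDMs, while reliance on a single process yields strongly correlated ones.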


Perception ◽  
1996 ◽  
Vol 25 (7) ◽  
pp. 861-874 ◽  
Author(s):  
Rick Gurnsey ◽  
Frédéric J A M Poirier ◽  
Eric Gascon

Davis and Driver presented evidence suggesting that Kanizsa-type subjective contours could be detected in a visual search task in a time that is independent of the number of nonsubjective-contour distractors. A link was proposed between these psychophysical data and the physiological data of Peterhans and von der Heydt, which showed that cells in primate area V2 respond to subjective contours in the same way that they respond to luminance-defined contours. Here, in three experiments, it is shown that there was sufficient information in the displays used by Davis and Driver to support parallel search independently of whether subjective contours were present. When confounding properties of the stimuli were eliminated, search became slow whether or not subjective contours were present in the display. One of the slowest search conditions involved stimuli virtually identical to those used in the physiological studies of Peterhans and von der Heydt to which Davis and Driver wish to link their data. It is concluded that, while subjective contours may be represented in the responses of very early visual mechanisms (eg in V2), access to these representations is impaired by the high-contrast contours used to induce the subjective contours and by nonsubjective figure distractors. This persistent control problem continues to confound attempts to show that Kanizsa-type subjective contours can be detected in parallel.


Author(s):  
Abd El Rahman Shabayek ◽  
Olivier Morel ◽  
David Fofi

For a long time, it was thought that the sensing of polarization by animals was invariably related to their behavior, such as navigation and orientation. Recently, it was found that polarization can be part of high-level visual perception, permitting a wide range of vision applications. Polarization vision can be used for most tasks of color vision, including object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. The polarization-based visual behavior found in the animal kingdom is briefly covered. Then, the authors go in depth into the bio-inspired applications based on polarization in computer vision and robotics. The aim is to provide a comprehensive survey highlighting the key principles of polarization-based techniques and how they are biologically inspired.
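As a concrete example of polarization-based vision, the degree and angle of linear polarization can be recovered from intensities measured through a linear polarizer at three orientations (0°, 45°, 90°). This sketch shows the standard Stokes-parameter computation for a single pixel; it is illustrative and not tied to any specific system surveyed here:

```python
import math

def linear_polarization(i0, i45, i90):
    """Recover Stokes parameters, degree of linear polarization (DoLP),
    and angle of polarization (AoP, in radians) from intensities measured
    through a linear polarizer at 0, 45, and 90 degrees."""
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # 0-vs-90 degree component
    s2 = 2.0 * i45 - i0 - i90        # 45-vs-135 degree component
    dolp = math.hypot(s1, s2) / s0 if s0 else 0.0
    aop = 0.5 * math.atan2(s2, s1)
    return dolp, aop
```

Fully polarized light at 0° gives DoLP = 1; unpolarized light (equal intensity at all three orientations) gives DoLP = 0, which is what tasks such as camouflage breaking exploit.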


2013 ◽  
pp. 896-926
Author(s):  
Mehrtash Harandi ◽  
Javid Taheri ◽  
Brian C. Lovell

Recognizing objects based on their appearance (visual recognition) is one of the most significant abilities of many living creatures. In this study, recent advances in the area of automated object recognition are reviewed; the authors specifically look into several learning frameworks to discuss how they can be utilized in solving object recognition paradigms. These include reinforcement learning, a biologically inspired machine learning technique for solving sequential decision problems, and transductive learning, a framework in which the learner observes query data and potentially exploits its structure for classification. The authors also discuss local and global appearance models for object recognition, as well as how similarities between objects can be learnt and evaluated.
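The role of reinforcement learning in sequential decision problems can be illustrated with tabular Q-learning on a toy chain world. This is our own minimal example, not one of the chapter's frameworks:

```python
import random

def q_learning_chain(n=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain of n states: actions 0 (left) and
    1 (right); reward 1 arrives only on entering the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] > q[s][0] else 0
            s2 = max(0, min(n - 1, s + (1 if a else -1)))
            r = 1.0 if s2 == n - 1 else 0.0
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right from every non-terminal state, illustrating how a sequence of decisions is learned from delayed reward alone.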


2010 ◽  
pp. 181-194
Author(s):  
J Wagemans ◽  
K Verfaillie ◽  
E Ver Eecke ◽  
G d’Ydewalle
