Size Invariance in Visual Object Priming of Gray-Scale Images

Perception ◽  
1995 ◽  
Vol 24 (7) ◽  
pp. 741-748 ◽  
Author(s):  
József Fiser ◽  
Irving Biederman

The strength of visual priming of briefly presented gray-scale pictures of real-world objects, measured by reaction times and errors in naming, was independent of whether the primed picture of the object was presented in the same size as or a different size from the original picture. These findings replicate results on size invariance in shape recognition, which were obtained with line drawings, and extend them to the domain of gray-level images. They suggest that entry-level shape identification is based predominantly on scale-invariant representations incorporating orientation and depth discontinuities, which are well captured by line drawings.

2019 ◽  
Vol 4 (6) ◽  
pp. 1482-1488
Author(s):  
Jennifer J. Thistle

Purpose Previous research with children with and without disabilities has demonstrated that visual–perceptual factors can influence the speed of locating a target on an array. Adults without disabilities often facilitate the learning and use of a child's augmentative and alternative communication system. The current research examined how the presence of symbol background color influenced the speed with which adults without disabilities located target line drawings in 2 studies. Method Both studies used a between-subjects design. In the 1st study, 30 adults (ages 18–29 years) located targets in a 16-symbol array. In the 2nd study, 30 adults (ages 18–34 years) located targets in a 60-symbol array. There were 3 conditions in each study: symbol background color, symbol background white with a black border, and symbol background white with a color border. Results In the 1st study, reaction times across groups were not significantly different. In the 2nd study, participants in the symbol background color condition were significantly faster than participants in the other conditions, and participants in the symbol background white with black border condition were significantly slower than participants in the other conditions. Conclusion Communication partners may benefit from the presence of background color, especially when supporting children using displays with many symbols.


2021 ◽  
Author(s):  
Piermatteo Morucci ◽  
Francesco Giannelli ◽  
Craig Richter ◽  
Nicola Molinaro

Hearing spoken words can enhance visual object recognition, detection and discrimination. Yet, the mechanism underlying this facilitation is incompletely understood. On one account, words would not bias visual processes at early levels, but rather interact at later decision-making stages. More recent proposals posit that words can alter visual processes at early stages by activating category-specific priors in sensory regions. A prediction of this account is that top-down priors evoke changes in occipital areas before the presentation of visual stimuli. Here, we tested the hypothesis that neural oscillations can serve as a mechanism to activate language-mediated visual priors. Participants performed a cue-picture matching task where cues were either spoken words, in their native or second language, or natural sounds, while EEG and reaction times were recorded. Behaviorally, we replicated the previously reported label-advantage effect, with images cued by words being recognized faster than those cued by natural sounds. A time-frequency analysis of cue-target intervals revealed that this label-advantage was associated with enhanced power in posterior alpha (9-11 Hz) and beta oscillations (17-19 Hz), both of which were larger when the image was preceded by a word compared to a natural sound. Prestimulus alpha and beta rhythms were correlated with reaction time performance, yet they appeared to operate in different ways. Reaction times were faster when alpha power increased, but slowed down with enhancement of beta oscillations. These results suggest that alpha and beta rhythms work in tandem to support language-mediated visual object recognition, while showing an inverse relationship to behavioral performance.
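The time-frequency analysis described above is typically implemented by convolving the EEG with complex Morlet wavelets and taking the squared magnitude as instantaneous band power. A minimal numpy-only sketch of that step; the 250 Hz sampling rate, 7-cycle wavelet, and synthetic alpha-burst signal are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def bandpower_morlet(signal, fs, freq, n_cycles=7):
    """Power of `signal` at `freq` (Hz) via convolution with a complex
    Morlet wavelet of `n_cycles` cycles, a standard time-frequency method."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # wavelet temporal width
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # energy-normalize
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2                      # instantaneous power

# Synthetic "prestimulus" trace: a 10 Hz (alpha) oscillation in noise
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha = bandpower_morlet(eeg, fs, freq=10).mean()  # center of the 9-11 Hz band
beta = bandpower_morlet(eeg, fs, freq=18).mean()   # center of the 17-19 Hz band
print(alpha > beta)  # prints True: the alpha oscillation dominates
```

Averaging such power estimates over the cue-target interval and correlating them with reaction times is one conventional way to obtain the prestimulus alpha/beta effects the abstract reports.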


Author(s):  
Tapan Kumar Das

Logos are graphic productions that recall some real-world object, emphasize a name, or simply display abstract signs with strong perceptual appeal. Color may also be relevant to assessing logo identity. Different logos may share a similar layout with slightly different spatial dispositions of the graphic elements, localized differences in orientation, size, and shape, or differ by the presence or absence of one or a few traits. In this chapter, the author uses an ensemble-based framework to choose the best combination of preprocessing methods and candidate extractors. The proposed system verifies test logos against reference logos using features such as regions, preprocessing outputs, and key points. These features are extracted from the gray-scale image with the scale-invariant feature transform (SIFT) and Affine-SIFT (ASIFT) descriptor methods. The preprocessing phase employs four different filters, key-point extraction is carried out by the SIFT and ASIFT algorithms, and key points are matched to recognize fake logos.
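The key-point matching step at the end of such a pipeline is usually done with Lowe's ratio test: a test-logo descriptor is matched to a reference descriptor only if its nearest reference neighbor is clearly closer than the second nearest. A minimal numpy-only sketch of that test (the 4-D toy "descriptors" stand in for real 128-D SIFT descriptors; the 0.75 ratio is a conventional choice, not the chapter's stated parameter):

```python
import numpy as np

def ratio_test_matches(ref_desc, test_desc, ratio=0.75):
    """Match descriptor rows of `test_desc` to `ref_desc` using Lowe's
    ratio test: accept a match only when the nearest reference descriptor
    is clearly closer than the second nearest. Returns (test_idx, ref_idx)
    pairs."""
    matches = []
    for i, d in enumerate(test_desc):
        dists = np.linalg.norm(ref_desc - d, axis=1)   # distance to every ref
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:     # unambiguous match only
            matches.append((i, nearest))
    return matches

# Toy 4-D "descriptors": test rows 0 and 2 are near-copies of reference rows
rng = np.random.default_rng(1)
ref = rng.random((5, 4))
test = np.vstack([ref[0] + 0.01, rng.random(4), ref[2] + 0.01])
m = ratio_test_matches(ref, test)
# test rows 0 and 2 match their reference counterparts; the random row
# is rejected unless it lands unambiguously close to some reference
```

The count (or geometric consistency) of surviving matches is then a natural score for deciding whether a test logo is genuine or fake.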


2012 ◽  
Vol 25 (0) ◽  
pp. 121
Author(s):  
Marcia Grabowecky ◽  
Aleksandra Sherman ◽  
Satoru Suzuki

We have previously demonstrated a linear perceptual relationship between auditory amplitude-modulation (AM) rate and visual spatial-frequency using gabors as the visual stimuli. Can this frequency-based auditory–visual association influence perception of natural scenes? Participants consistently matched specific auditory AM rates to diverse visual scenes (nature, urban, and indoor). A correlation analysis indicated that higher subjective density ratings were associated with faster AM-rate matches. Furthermore, both the density ratings and AM-rate matches were relatively scale invariant, suggesting that the underlying crossmodal association is between visual coding of object-based density and auditory coding of AM rate. Based on these results, we hypothesized that concurrently presented fast (7 Hz) or slow (2 Hz) AM-rates might influence how visual attention is allocated to dense or sparse regions within a scene. We tested this hypothesis by monitoring eye movements while participants examined scenes for a subsequent memory task. To determine whether fast or slow sounds guided eye movements to specific spatial frequencies, we computed the maximum contrast energy at each fixation across 12 spatial frequency bands ranging from 0.06–10.16 cycles/degree. We found that the fast sound significantly guided eye movements toward regions of high spatial frequency, whereas the slow sound guided eye movements away from regions of high spatial frequency. This suggests that faster sounds may promote a local scene scanning strategy, acting as a ‘filter’ to individuate objects within dense regions. Our results suggest that auditory AM rate and visual object density are crossmodally associated, and that this association can modulate visual inspection of scenes.
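The per-fixation measure described above, contrast energy within radial spatial-frequency bands, can be computed from the 2-D Fourier spectrum of the image patch around each fixation. A minimal numpy-only sketch; the 64-pixel patch, the two example bands in cycles/pixel, and the grating stimulus are illustrative assumptions (the study's 0.06–10.16 cycles/degree bands would be converted to cycles/pixel using the viewing geometry):

```python
import numpy as np

def band_energies(patch, bands):
    """Contrast energy of a square grayscale `patch` in each radial
    spatial-frequency band. `bands` is a list of (lo, hi) edges in
    cycles/pixel."""
    n = patch.shape[0]
    f = np.fft.fftfreq(n)                      # frequency of each FFT bin, cycles/pixel
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)                  # radial frequency of each bin
    spectrum = np.abs(np.fft.fft2(patch - patch.mean())) ** 2
    return [spectrum[(radius >= lo) & (radius < hi)].sum() for lo, hi in bands]

# A fine grating should put its energy in the high-frequency band
n = 64
x = np.arange(n)
fine = np.sin(2 * np.pi * 0.25 * x)[None, :] * np.ones((n, 1))  # 0.25 cyc/px
bands = [(0.0, 0.1), (0.2, 0.3)]
lo_e, hi_e = band_energies(fine, bands)
print(hi_e > lo_e)  # prints True
```

Taking the band with maximum energy at each fixation, as the abstract describes, then lets one ask whether fast versus slow sounds biased fixations toward high- or low-spatial-frequency regions.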


Perception ◽  
1991 ◽  
Vol 20 (5) ◽  
pp. 585-593 ◽  
Author(s):  
Irving Biederman ◽  
Eric E Cooper
1997 ◽  
Vol 50 (2) ◽  
pp. 274-289 ◽  
Author(s):  
Roberto Cabeza ◽  
A. Mike Burton ◽  
Stephen W. Kelly ◽  
Shigeru Akamatsu

The relation between imagery and perception was investigated in face priming. Two experiments are reported in which subjects either saw or imagined the faces of celebrities. They were later given a speeded perceptual test (familiarity judgement to pictures of celebrities) or a speeded imagery test (in which they were told the names of celebrities and asked to make a decision about their appearance). Seeing faces primed the perceptual test, and imaging faces primed the imagery test; however, there was no priming between seeing and imaging faces. These results show that perception and imagery can be dissociated in normal subjects. In two further experiments, we examined the effects of imaging faces on a subsequent face-naming task and on a task requiring familiarity judgements to partial faces. Both these tasks were facilitated by prior imaging of faces. These results are discussed in relation to those of McDermott & Roediger (1994), who found that imagery promoted object priming in a perceptual test involving naming partial line drawings. The implications for models of face recognition are also discussed.


2000 ◽  
Vol 9 (4) ◽  
pp. 310-318 ◽  
Author(s):  
Irene M. Barrow ◽  
Donald Holbert ◽  
Michael P. Rastatter

This study examined the effect of color on the picture-naming process in children across increasing vocabulary difficulty levels. Picture-naming reaction times and accuracy rates were measured for both black-and-white line drawings and color drawings in 30 normally developing children, ages 4, 6, and 8 years, via a tachistoscopic viewing paradigm. Statistical analysis of reaction time data revealed that color affected speed of naming only when the vocabulary level of the picture was within the developmental range of the child. That is, for vocabulary within an emerging period for the child, colored drawings were named significantly faster than black-and-white line drawings. However, color did not significantly influence speed of naming either for vocabulary well established in the child's lexicon or for vocabulary above the child's developmental age. Statistical analysis of accuracy data revealed significant color by vocabulary interactions. Specifically, when the vocabulary level of the pictures exceeded chronological age level, children named color drawings with significantly higher accuracy rates than black-and-white line drawings.

