Global Interference: The Effect of Exposure Duration That is Substituted for Spatial Frequency

Perception ◽  
10.1068/p3282 ◽  
2002 ◽  
Vol 31 (3) ◽  
pp. 341-348 ◽  
Author(s):  
Yuko Hibi ◽  
Yuji Takeda ◽  
Akihiro Yagi

In this study, participants were required to identify hierarchically structured patterns that appeared at either the global or the local level. Paquet and Merikle (1984 Canadian Journal of Psychology 38 45–53) showed that global interference is affected by exposure duration in the processing of a hierarchical structure. They showed that only global-to-local interference occurred at short exposure durations. In contrast, global-to-local as well as local-to-global interference was observed at long exposure durations. They suggested that the effect of exposure duration on global interference depends on the high-spatial-frequency versus low-spatial-frequency channel. In the present study, exposure duration (short or long) was varied randomly from trial to trial (experiment 1) or held constant (experiment 2). In experiment 1, global-to-local interference occurred at both short and long exposure durations, even though the stimuli had the same physical properties as in experiment 2. In experiment 2, both global-to-local and local-to-global interference occurred only at long exposure durations, in line with the results reported by Paquet and Merikle. This suggests that the effect of exposure duration on global interference is explained not only by spatial-frequency channels, but also by attentional shifts.

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 121-121
Author(s):  
M V Danilova ◽  
V M Bondarko ◽  
Y E Shelepin

Two sets of psychophysical experiments were carried out to find a quantitative measure of the complexity of visual images. The stimuli were 15 Chinese ideograms of the same size. In experiment 1, observers were asked to rate the complexity of images. In experiment 2, for each stimulus the threshold size was determined, defined as the smallest size for which the perceived quality of the image was the same as for large (2 deg) stimuli, ie all details were clearly seen and the stimuli had the same contrast. The measured threshold sizes were in the range 7.9–27.6 min arc. Analysing the data further, we found that for some ideograms the sizes of the minimal details (strokes, dots) corresponded to the resolution limit (1 min arc). Some ideograms contained parts with parallel stripes forming quasi-gratings. The distances between stripes at threshold were 1.8 min arc, which corresponds to the tuning frequency of the highest spatial-frequency channel (Wilson et al, 1983 Vision Research 23 873–882). The average order of ideograms sorted by degree of complexity was similar to the order according to threshold size. Thus we found a direct correspondence between the complexity of an object and a description in terms of the minimal number of elements needed to preserve the quality of a reduced image. Our results are in agreement with concepts of complexity expressed as a number of details in objects as suggested by Landolt and Snellen, or as a number of spatial-frequency channels as suggested by Ginsburg (1971 IEEE Proceedings 283–290).
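The stripe-spacing figure above can be related to channel tuning with a short calculation. A minimal sketch, not from the abstract: whether the 1.8 min arc spacing is read as half a grating period (one stripe plus one gap makes a cycle) or as a full period is my assumption to make explicit, and the two readings differ by a factor of two in cycles per degree.

```python
ARCMIN_PER_DEG = 60.0

def spacing_to_cpd(spacing_arcmin: float, spacing_is_half_period: bool = True) -> float:
    """Spatial frequency (cycles/deg) of a quasi-grating with the given stripe spacing.

    If the spacing is a half-period, one full cycle spans twice the spacing;
    otherwise the spacing itself is the period.
    """
    period_arcmin = 2.0 * spacing_arcmin if spacing_is_half_period else spacing_arcmin
    return ARCMIN_PER_DEG / period_arcmin

# Half-period reading: 60 / 3.6 ≈ 16.7 cycles/deg
# Full-period reading: 60 / 1.8 ≈ 33.3 cycles/deg
print(spacing_to_cpd(1.8))
print(spacing_to_cpd(1.8, spacing_is_half_period=False))
```

Under the half-period reading the quasi-gratings land near 16–17 cycles deg⁻¹, in the range of the highest channel of the Wilson et al model cited above.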


2002 ◽  
Vol 19 (3) ◽  
pp. 225-232
Author(s):  
Xiaojun Wu ◽  
Qinye Yin ◽  
Zheng Zhao ◽  
Aigang Feng ◽  
Jianguo Zhang

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 331-331
Author(s):  
R Rosenholtz

Beck suggested that texture segmentation is based upon differences in the first-order statistics of stimulus features such as orientation, size, and contrast. However, this theory does not indicate how these differences might be quantified, or what properties of the statistics might be used. Some alternative models postulate that texture segmentation is determined by the responses of spatial-frequency channels, where the channels contain both a linear filtering mechanism and various nonlinearities. Such models do a good job of predicting human performance, but do not give us much insight into what textures will segment, since the comparison carried out by the model is often obscured by the details of the filtering, nonlinearity, and image-based decision processes. It is suggested here that, for orientation-defined textures (eg in which each ‘texel’ has a single orientation), segmentation is well-described by something like the ‘significance’ of the differences between (1) the mean orientations, and (2) the angular variances of the two textures. The ‘significance’ of the difference in means takes into account the variability in the texture, so that two homogeneous textures with means differing by 30° may easily segment, while two heterogeneous textures with the same difference in mean may not. Furthermore, it is shown that these statistics may be computed in a biologically plausible way, which greatly resembles the typical filter-based approaches to texture segmentation. Thus the connection between statistical theories of texture segmentation and spatial-frequency channel models becomes more transparent.
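The two statistics named in the abstract can be sketched directly. This is an illustrative toy, not Rosenholtz's actual model: circular mean and angular variance are computed over doubled angles (orientations are axial, so θ and θ+180° are identical), and the "significance" of a mean difference is crudely approximated by dividing it by a pooled angular spread. The function names and the pooled-spread normalization are my assumptions.

```python
import numpy as np

def orientation_stats(theta_deg):
    """Circular mean (deg, in [0, 180)) and angular variance (in [0, 1]) of orientations.

    Angles are doubled before taking circular statistics so that axial
    orientations wrap correctly, then the mean is halved back.
    """
    z = np.exp(1j * np.deg2rad(2.0 * np.asarray(theta_deg, dtype=float)))
    r = np.abs(z.mean())                          # mean resultant length
    mean_deg = (np.rad2deg(np.angle(z.mean())) / 2.0) % 180.0
    return mean_deg, 1.0 - r                      # angular variance

def mean_difference_significance(a_deg, b_deg):
    """Difference in mean orientation, scaled by the textures' variability.

    A stand-in for the 'significance' in the abstract: the axial distance
    between the two mean orientations divided by a pooled angular spread,
    so the same 30 deg difference counts for less between heterogeneous
    textures than between homogeneous ones.
    """
    ma, va = orientation_stats(a_deg)
    mb, vb = orientation_stats(b_deg)
    d = abs(ma - mb)
    d = min(d, 180.0 - d)                         # axial distance in [0, 90]
    pooled = np.sqrt((va + vb) / 2.0) + 1e-9      # avoid divide-by-zero
    return d / pooled
```

With this score, two tight orientation distributions whose means differ by 30° yield a much larger value than two broad distributions with the same difference in means, matching the segmentation intuition in the abstract.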


2018 ◽  
Author(s):  
Christopher Patrick Taylor

What information is used by the visual system to detect patterns? A standard model hypothesizes that both spatial frequency and orientation information are processed by independent channels, meaning there is no summation among channels. Despite the consensus among researchers on how the visual system sums spatial frequency and orientation information, there are data in the literature (Kersten, 1987) that ostensibly contradict the standard model. To resolve this conflict, we measured the efficiency of detection of spatial-frequency and orientation filtered noise, applying a technique that can determine the information used to detect and discriminate filtered visual noise. Chapter 2 shows that the detection of spatial-frequency filtered noise is not only efficient but remains so under stimulus uncertainty and at extremely brief (10 ms) stimulus durations. When the spatial frequency channel used was measured, we found a fixed-bandwidth channel as the spatial frequency of the pattern was increased. To test the standard model, we implemented simulations of it and found that, contrary to that interpretation, the standard model could predict detection of spatial-frequency filtered noise. Chapter 3 used spatial-frequency filtered noise to relate the detection and discrimination of filtered visual noise; a simple rule relates the information observers use to detect and to discriminate such noise. Chapter 4 extends the work of Chapter 2 to orientation information and finds that orientation-filtered noise is also detected efficiently. We again measured what information observers used and found that, unlike spatial-frequency filtered noise, observers use orientation in a flexible or adjustable manner.
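The filtered-noise stimuli described above can be approximated in a few lines. A hedged sketch, not the dissertation's actual stimulus code: white Gaussian noise is multiplied in the Fourier domain by a band-pass annulus that is Gaussian in log frequency. The parameter names (`center_cpi`, `octave_bw`) and the Gaussian-in-log-frequency profile are illustrative assumptions.

```python
import numpy as np

def filtered_noise(size=256, center_cpi=32.0, octave_bw=1.0, seed=0):
    """White noise band-pass filtered in the Fourier domain, unit variance.

    center_cpi is the centre frequency in cycles per image; octave_bw is the
    full width at half maximum, in octaves, of a Gaussian-in-log-frequency
    annular filter applied to the noise spectrum.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    f = np.fft.fftfreq(size) * size                    # cycles per image
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1e-9                                # avoid log(0) at DC
    sigma = octave_bw / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma (octaves)
    gain = np.exp(-0.5 * (np.log2(radius / center_cpi) / sigma) ** 2)
    out = np.real(np.fft.ifft2(np.fft.fft2(noise) * gain))
    return out / out.std()
```

Because the gain depends only on radial frequency, the same machinery turns into an orientation filter (as in Chapter 4's stimuli) by making the gain a function of the angle `np.arctan2(fy, fx)` instead of `radius`.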

