Attentional cueing by cross-modal congruency produces both facilitation and inhibition on short-term visual recognition

2014 ◽  
Vol 152 ◽  
pp. 75-83 ◽  
Author(s):  
Elena Makovac ◽  
Sze Chai Kwok ◽  
Walter Gerbino


1976 ◽  
Vol 28 (3) ◽  
pp. 325-337 ◽  
Author(s):  
D. C. Mitchell

According to Sperling's (1967) model of short-term memory, briefly presented masked stimuli are rapidly read into a non-visual Recognition Buffer (the RB model). An alternative interpretation of the data is that the stimulus information is coded into a non-iconic Visual Buffer, where it is held while a much slower recognition process takes place (the VB model). The high frequency of errors in experiments with sequentially presented stimuli appears to refute the possibility that recognition is as rapid as suggested by the RB model. However, these data may be attributed to variations in effective stimulus duration and stimulus quality rather than to slow recognition time. In an experiment to control for these effects, normal, laterally inverted and spaced digits were presented in a rapid sequence (1–10 items/s) with intervening pattern masks to keep the stimulus/mask interval constant. The recall data showed that order errors increased with rate of presentation but that item errors remained invariant. At the fastest rates of presentation there were fewer order errors for spaced than for coincident digits. It was argued that the results, as a whole, were more consistent with the VB than the RB model and that there is no evidence for identification times as fast as 10–40 ms/item.


1983 ◽  
Vol 35 (2b) ◽  
pp. 169-194 ◽  
Author(s):  
Euan M. Macphail

Two series of experiments investigated short-term visual recognition memory in pigeons following lesions of the hyperstriatal complex; the first series used a choice technique, the second, a single-key go/no go technique. The results of the two series agreed, first, in finding impaired performance in hyperstriatal birds at long but not at short inter-trial intervals, and, second, in obtaining no evidence of differential rates of decay of traces in hyperstriatal and control subjects. A final experiment confirmed that the hyperstriatal birds were, as expected from previous work, impaired on reversals of colour and position discriminations. It is tentatively suggested that deficits following hyperstriatal damage in both recognition and reversal performance may be understood as being the consequence of an increased susceptibility to frustrating events in hyperstriatal subjects.


1980 ◽  
Vol 32 (4) ◽  
pp. 521-538 ◽  
Author(s):  
Euan M. Macphail

Recognition memory for lists of items was investigated in pigeons using a YES-NO recognition technique. Experiment I showed that increasing the exposure duration of the first item of a two-item list improved recognition for that item without impairing recognition of the second item. Experiment II showed that decreasing the inter-trial interval had no effect on correct YES responses but significantly increased the number of false YES responses. Experiment III showed that recognition for the last two items of a three-item list was no poorer than that for lists of only two items. Experiment IV showed that increasing the delay between presentation and test of a two-item list (from 0.25 to 1 s) had a more disruptive effect on recognition for the second than for the first item. The data from these four experiments support a model proposed by Roberts and Grant, according to which memory traces are independent, and decay as a negatively accelerated function of time. Experiments V, VI, and VII investigated recognition for lists of three, four, and five items, and found no evidence for a primacy effect, performance being a linear function of time since sample offset.
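The Roberts and Grant model invoked above — independent traces decaying as a negatively accelerated function of time — can be sketched numerically. The exponential form and the parameter values below are illustrative assumptions, not figures taken from the paper; any negatively accelerated function would serve.

```python
import math

def trace_strength(t, s0=1.0, decay_rate=0.8):
    """Strength of a single memory trace after t seconds.

    Exponential decay is one example of a negatively accelerated
    function: strength falls quickly at first, then ever more slowly.
    Parameter values are illustrative only.
    """
    return s0 * math.exp(-decay_rate * t)

# Traces are independent in this model: presenting a second item does
# not alter the first item's trace; only elapsed time matters.
delays = [0.0, 0.25, 0.5, 0.75]          # equal 0.25 s steps
strengths = [trace_strength(d) for d in delays]

# Negative acceleration: each successive equal time step removes
# progressively less strength.
drops = [strengths[i] - strengths[i + 1] for i in range(len(strengths) - 1)]
```

Under these assumptions a longer presentation-to-test delay (as in Experiment IV) lowers trace strength, but the loss per unit time shrinks as the delay grows.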


2006 ◽  
Vol 21 (3) ◽  
pp. 632-637 ◽  
Author(s):  
Robert Sekuler ◽  
Chris McLaughlin ◽  
Michael J. Kahana ◽  
Arthur Wingfield ◽  
Yuko Yotsumoto


Author(s):  
Sujeet Kumar Shukla ◽  
Saurabh Dubey ◽  
Aniket Kumar Pandey ◽  
Vineet Mishra ◽  
Mayank Awasthi ◽  
...  

In this paper, we focus on one facet of visual recognition in computer vision: image captioning, the automatic generation of descriptive captions for an image using deep learning techniques. First, a Convolutional Neural Network (InceptionV3) is used to detect the objects in the image. A Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units and an attention mechanism then generates a syntactically and semantically correct caption based on the detected objects. In our project, we apply this pipeline to a traffic-sign dataset captioned using the process described above. Such a model is particularly useful for visually impaired people who need to cross roads safely.
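The attention step in the pipeline above can be sketched as follows. This is a minimal NumPy illustration of additive (Bahdanau-style) attention over a grid of CNN features; the shapes, weight matrices, and the choice of additive scoring are assumptions for illustration — the abstract does not specify the exact attention variant used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: a CNN encoder (e.g. InceptionV3) yields a grid of
# feature vectors; here, 64 spatial locations with 256 features each.
features = rng.normal(size=(64, 256))    # encoder annotations
hidden = rng.normal(size=(512,))         # current LSTM decoder state

# Additive-attention parameters (sizes are illustrative assumptions).
W_feat = rng.normal(size=(256, 128)) * 0.1
W_hid = rng.normal(size=(512, 128)) * 0.1
v = rng.normal(size=(128,)) * 0.1

def attention_context(features, hidden):
    """Score each spatial location against the decoder state, softmax
    the scores into weights, and return the weighted average of the
    features (the context vector fed to the LSTM at this step)."""
    scores = np.tanh(features @ W_feat + hidden @ W_hid) @ v   # (64,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                    # softmax
    return weights @ features, weights                          # (256,), (64,)

context, weights = attention_context(features, hidden)
```

At each decoding step the context vector re-weights the image regions, so the caption word being generated can attend to the relevant object (e.g. the traffic sign) rather than the whole image.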


1977 ◽  
Vol 29 (1) ◽  
pp. 117-133 ◽  
Author(s):  
W. A. Phillips ◽  
D. F. M. Christie

Visual recognition memory for a sequence of non-verbalized patterns is shown to have a large and clearly defined recency effect. This recency effect occurs with random list lengths and therefore cannot be due to differential processing of the end items. The effect is completely removed by just 3 s of mental arithmetic but survives for at least 10 s over unfilled intervals. Recognition memory for patterns at other serial positions is slower, less accurate, and shows no primacy effect; performance at these earlier serial positions is dependent upon the time for which patterns are initially presented, but is unaffected by the duration of the retention interval, mental arithmetic, and the time between patterns on initial presentation. These findings provide evidence that visual memory has two components that are closely analogous to the short-term (STM) and long-term (LTM) components of verbal memory. Visual STM, here called visualization, has a capacity of one pattern, is not merely activated LTM, and does not seem to be the gateway to LTM.

