Exploring the role of stimulus similarity on the summation effect in causal learning

2017 ◽  
Author(s):  
Omar D. Pérez ◽  
René San Martín ◽  
Fabián A. Soto

Abstract Several contemporary models of associative learning anticipate that the higher responding to a compound of two cues separately trained with a common outcome than to each of the cues alone (a summation effect) is modulated by the similarity between the cues forming the compound. Here, we explored this hypothesis in a series of causal learning experiments with humans. Participants were presented with two visual cues that separately predicted a common outcome and were later asked for the outcome predicted by the compound of the two cues. Importantly, the cues’ similarity was varied between groups through changes in shape, spatial position, color, configuration, and rotation. At variance with the predictions of these models, we observed similar and strong levels of summation in both groups across all manipulations of similarity (Experiments 1-5). The summation effect was significantly reduced by manipulations intended to impact assumptions about the causal independence of the cues forming the compound, but this reduction was independent of stimulus similarity (Experiment 6). These results are problematic for similarity-based models and can be more readily explained by rational approaches to causal learning.


2021 ◽  
Author(s):  
Xue Li Lim ◽  
Richard Höchenberger ◽  
Iryna Ruda ◽  
Gereon Fink ◽  
Shivakumar Viswanathan ◽  
...  

Abstract Remembering a particular taste is crucial for food intake and associative learning. We investigated whether taste can be dynamically encoded, maintained, and retrieved on short time-scales consistent with working memory (WM). We used novel single- and multi-item taste recognition tasks to investigate the organization and capacity of gustatory WM. In Experiment 1, we show that a single taste can be reliably recognized despite multiple oro-sensory interferences, suggesting active and resilient maintenance. When multiple tastes were presented, the resolution with which they could be maintained depended on their serial position, implying a role of attention. Participants reliably recognized up to three tastes, compatible with a limited capacity of gustatory WM. Lastly, recognition was better for match than for foil trials, likely due to increased stimulus similarity in foil trials. Together, the results advocate a hybrid model of gustatory WM with a limited number of slots in which items are stored with varying precision.


Author(s):  
Adam F. Werner ◽  
Jamie C. Gorman

Objective: This study examines visual, auditory, and combined (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs.
Background: Parking a plane or landing a helicopter on a mountain top requires one person to provide motor inputs while another person provides perceptual inputs. Perceptual inputs are communicated either visually, auditorily, or through both cues.
Methods: One participant drove a remote-controlled car around an obstacle and through a target, while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated using target size. Performance (trial time, path variability), cue rate, and spatial ability were measured.
Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher ability performed best with a lower rate.
Conclusion: Visual cues result in better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing. In particular, drivers must be able to transform auditory cues into spatial actions.
Application: Spotters should be trained to provide an appropriate cue rate to match the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.


2011 ◽  
Vol 2011 ◽  
pp. 1-5 ◽  
Author(s):  
Atsushi Hirao

In avian mating systems, male domestic fowls are polygamous and mate with a number of selected members of the opposite sex. The factors that influence mating preference are generally considered to be visual cues. However, several studies have indicated that chemosensory cues also affect socio-sexual behavior, including mate choice and individual recognition. The female uropygial gland appears to provide odor cues for mate choice, as uropygial gland secretions are specific to an individual's body odor. Chicken olfactory bulbs possess efferent projections to the nucleus taeniae that are involved in copulatory behavior. Taken together, these reports suggest that the uropygial gland has the potential to act as the source of social odor cues that dictate mate choice. In this review, evidence for the possible role of the uropygial gland in mate choice in domestic chickens is presented. However, it remains unclear whether a relationship exists between the uropygial gland and major histocompatibility complex-dependent mate choice.


Author(s):  
Nada Zwayyid Almutairi ◽  
Eman Salah Ibrahim Rizk

This study explores the effectiveness of interactive e-book (Ie-book) cues and Information Processing Levels (IPL) on Learning Retention (LR) and External Cognitive Load (ECL). 117 middle school pupils (MSP) were divided into six experimental groups based on their IPL and cue type during the second term of the 2019–2020 academic year. Visual Cues (VC)/Audiovisual Cues (AVC) and Auditory Cues (AC)/Audiovisual Cues (AVC) differed statistically in the Ie-book on the LR test and the ECL scale, as did the average scores on the science LR test for MSP, owing to the difference between IPL for the DL. There is a statistically significant interaction effect of cue type in the Ie-book with IPL on the ECL scale for MSP, at its highest in the case of AVC with DL, followed by the interaction of VC with DL and then AC with SL. The interaction of cues in the Ie-book with IPL also strongly affects the LR test for MSP, which is at its highest in the case of AVC with DL. The interactions between (DL–SL) and (AC–VC) appear to influence the ECL equally.


1993 ◽  
Vol 3 (3) ◽  
pp. 307-314 ◽  
Author(s):  
H. Mittelstaedt ◽  
S. Glasauer

This contribution examines the consequences of two remarkable experiences of subjects in weightlessness: 1) the absence of sensations of trunk tilt, and of the respective concomitant reflexes, when the head is tilted with respect to the trunk, and 2) the persistence of a perception of “up” and “down,” that is, of the polarity of the subjective vertical (SV), in the absence of, as well as in contradiction to, visual cues. The first disproves the idea that the necessary head-to-trunk coordinate transformation is achieved by adding representations of the respective angles gained from the utricles and neck receptors, but corroborates an extant model of cross-multiplication of utricular, saccular, and neck-receptor components. The second indicates the existence of force-independent components in the determination of the SV. Although the number of subjects is still small and the experimental conditions are not as homogeneous as desired, measurements and/or reports on the ground, in parabolic flight, and in space flight point to the decisive role of the saccular z-bias, that is, of a difference between the mean resting discharges of saccular units polarized in the rostrad and caudad (±z) directions.


1999 ◽  
Vol 9 (6) ◽  
pp. 445-451
Author(s):  
S. Di Girolamo ◽  
W. Di Nardo ◽  
A. Cosenza ◽  
F. Ottaviani ◽  
A. Dickmann ◽  
...  

The role of vision in postural control is crucial and is strictly related to the characteristics of the visual stimulus and to the performance of the visual system. The purpose of this investigation was to evaluate the effects of chronically reduced visual cues on postural control in patients affected by Congenital Nystagmus (CN). Since birth, these patients have developed a postural strategy based mainly on vestibular and somatosensory cues. Fifteen patients affected by CN and 15 normal controls (NC) were enrolled in the study and evaluated by means of dynamic posturography. The overall postural control in CN patients was impaired, as demonstrated by the equilibrium score and by changes in postural strategy. This impairment was even more pronounced in the CN than in the NC group when somatosensory cues were experimentally reduced. A nonspecific pattern of visual impairment and a pathological composite score were also present. Our data indicate that in patients affected by CN, postural balance is impaired especially when postural control relies mainly on visual cues. Moreover, a decrease in the accuracy of somatosensory cues has a proportionally greater effect on balance than it does in normal subjects.


2018 ◽  
Vol 40 (1) ◽  
pp. 93-109
Author(s):  
YI ZHENG ◽  
ARTHUR G. SAMUEL

Abstract It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we specifically focus on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos of a relatively far speaker or as videos in which we zoomed in on the speaker’s head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when stimuli were nonwords rather than words, and when stimuli were produced by a speaker with a relatively strong accent. However, we did not find any influence of listeners’ prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than in others.


1993 ◽  
Vol 21 (3) ◽  
pp. 266-280 ◽  
Author(s):  
Ariane S. Etienne ◽  
Sylvie Joris Lambert ◽  
Benoit Reverdin ◽  
Evelyne Teroni

1963 ◽  
Vol 76 (1) ◽  
pp. 148
Author(s):  
James H. Straughan
