The Effect of Semantic Congruence between Color and Music on Compact Car Advertisement Evaluation

2020, Vol 71 (0), pp. 319-329
Author(s): Chang-yeop Shin

2014, Vol 75, pp. 59-66
Author(s): Luca Turchet, Stefania Serafin

2021, Vol 11 (9), pp. 1206
Author(s): Erika Almadori, Serena Mastroberardino, Fabiano Botta, Riccardo Brunetti, Juan Lupiáñez, ...

Object sounds can enhance the attentional selection and perceptual processing of semantically related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object's consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or a different spatial position than in the original scene. Participants judged whether the object's position was the same or different and then rated their confidence in the response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but that this effect depends on the semantic configuration of the visual scene.


2019, Vol 13
Author(s): Zhaohua Lu, Qi Li, Ning Gao, Jingjing Yang, Ou Bai

Author(s): Guoying Lu, Guanhua Hou

Objective: The purpose of this study was to investigate the effects of semantic congruence and incongruence on sign identification using event-related potentials (ERPs). Background: Sign systems play crucial roles in public spaces and traffic facilities. Poorly designed signs can easily confuse pedestrians and drivers and reduce the efficiency of public activities and urban administration. Method: Thirty-one participants completed a sign identification experiment independently in a laboratory setting. Experimental materials were selected from GB/T 10001, a Chinese national recommendation standard officially named Public Information Graphical Symbols for Use on Signs. All ERP data were processed using MATLAB 13b, and behavioral data were analyzed using Stata 14. Results: N170, P200, N300, and N400 components were induced during semantic processing. Statistical analysis revealed that semantic congruence had a main effect on N300 in the frontal region and on N400 at FZ in the frontal region, CPZ in the parietal-central region, and PZ in the parietal region. Amplitudes of N300 induced by picture–word matching differed considerably between the two experimental conditions at electrodes FZ and FCZ. Amplitudes of N400 were significantly larger in the incongruent condition than in the congruent condition. Conclusion: The study demonstrated that N300 and N400 are promising indicators for measuring semantic congruence in future sign design. Application: Our findings provide ERP indicators for measuring the semantic congruence of sign design, which can readily be applied to improve the efficiency of sign design and sign comprehension.
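The N400 result described above amounts to comparing mean amplitudes between conditions in a post-stimulus time window. As a rough illustration only (not the authors' pipeline, which used MATLAB and Stata), here is a minimal NumPy sketch on synthetic single-electrode data; the 300–500 ms window, the waveform shape, and all numbers are assumptions:

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Mean ERP amplitude over a time window, averaged across trials and samples.

    epochs: (n_trials, n_samples) array of voltages at one electrode.
    times:  (n_samples,) array of sample times in seconds.
    """
    mask = (times >= window[0]) & (times < window[1])
    return epochs[:, mask].mean()

# Synthetic data: the incongruent condition carries a larger (more negative)
# deflection around 400 ms, mimicking the reported N400 effect.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 500)                  # seconds from stimulus onset
n400_shape = -np.exp(-((times - 0.4) ** 2) / 0.005)  # negative bump near 400 ms
congruent = n400_shape + rng.normal(0.0, 0.1, (30, times.size))
incongruent = 2.0 * n400_shape + rng.normal(0.0, 0.1, (30, times.size))

window = (0.3, 0.5)  # a conventional N400 analysis window (an assumption here)
effect = (mean_amplitude(incongruent, times, window)
          - mean_amplitude(congruent, times, window))
print(effect)  # negative: incongruent trials are more negative in the window
```

A real analysis would of course baseline-correct each epoch and test the per-participant amplitudes statistically rather than pooling all trials.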


2016, Vol 37 (2), pp. 291-301
Author(s): Pau A. Packard, Antoni Rodríguez-Fornells, Nico Bunzeck, Berta Nicolás, Ruth de Diego-Balaguer, ...

2016, Vol 90, pp. 235-242
Author(s): Mary Pat McAndrews, Todd A. Girard, Leanne K. Wilkins, Cornelia McCormick


2021
Author(s): Daria Kvasova, Travis Stewart, Salvador Soto-Faraco

In real-world scenes, the different objects and events available to our senses are interconnected within a rich web of semantic associations. These semantic links help parse information and make sense of the environment. For example, during goal-directed attention, the characteristic sounds of everyday objects speed up visual search for those objects in natural, dynamic environments. However, it is not known whether semantic correspondences also play a role under spontaneous observation. Here, we investigated whether crossmodal semantic congruence can drive spontaneous, overt visual attention in free-viewing conditions. We used eye-tracking whilst participants (N=45) viewed video clips of realistic complex scenes presented alongside sounds of varying semantic congruency with objects in the videos. We found that characteristic sounds increased the probability of looking at, the number of fixations on, and the total dwell time on the semantically corresponding visual objects, compared to when the same scenes were presented with semantically neutral sounds or with background noise alone. Our results suggest that crossmodal semantic congruence affects spontaneous gaze and eye movements, and therefore how attention samples information in a free-viewing paradigm. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimuli and shed new light on how audio-visual semantic congruence plays out in everyday life scenarios.
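The gaze measures reported here (probability of looking at the target, number of fixations, total dwell time) are simple per-condition aggregates over fixation records. A minimal sketch, assuming hypothetical fixation data rather than the authors' actual eye-tracker output; the condition names and numbers are made up for illustration:

```python
from collections import defaultdict

# Hypothetical fixation records: (condition, trial, dwell_ms on the target object).
# A dwell of 0 marks a trial on which the target was never fixated.
fixations = [
    ("congruent", 1, 420), ("congruent", 1, 180), ("congruent", 2, 515),
    ("neutral", 1, 300), ("neutral", 2, 0),
    ("noise", 1, 260), ("noise", 2, 0),
]

stats = defaultdict(lambda: {"dwell": 0, "fix": 0, "trials": set(), "looked": set()})
for cond, trial, dwell in fixations:
    s = stats[cond]
    s["trials"].add(trial)
    s["dwell"] += dwell          # total dwell time on the target
    if dwell > 0:
        s["fix"] += 1            # count each fixation on the target
        s["looked"].add(trial)   # trial contributed at least one target fixation

for cond, s in stats.items():
    p_look = len(s["looked"]) / len(s["trials"])  # probability of looking at target
    print(cond, p_look, s["fix"], s["dwell"])
```

In this toy data the congruent condition shows a higher looking probability, more fixations, and longer total dwell, mirroring the direction of the reported effect.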


2019
Author(s): Daria Kvasova, Salvador Soto-Faraco

Recent studies show that crossmodal semantic congruence plays a role in spatial attention orienting and visual search. However, the extent to which these crossmodal semantic relationships attract attention automatically is still unclear, and the outcomes of different studies have been inconsistent. Variations in the task-relevance of the crossmodal stimuli (from explicitly needed to completely irrelevant) and in the amount of perceptual load may account for the mixed results of previous experiments. In the present study, we addressed the effects of audio-visual semantic congruence on visuo-spatial attention across variations in task relevance and perceptual load. We used visual search amongst images of common objects paired with characteristic object sounds (e.g., a guitar image and a chord sound). We found that audio-visual semantic congruence speeded visual search times when the crossmodal objects were task-relevant, or when they were irrelevant but presented under low perceptual load. In contrast, when perceptual load was high, sounds failed to attract attention towards the congruent visual images. These results lead us to conclude that object-based crossmodal congruence does not attract attention automatically and requires some top-down processing.


2018
Author(s): Katharina Antognini, Moritz M. Daum

Language and action share a common processing system, namely the sensorimotor system. Sensorimotor activity is associated with action prediction and action-verb processing from early in verb acquisition. Action verbs can have a positive effect on action prediction if the verb matches the subsequently perceived action. However, it is as yet unclear whether this effect is driven by semantic congruence between the action verb and the action, or rather by effector-limb congruence (i.e., both the action verb and the action implying movement of the same effector, for instance the hand). The current study investigated whether semantic congruence between an action verb and an action, compared to semantic incongruence, has different effects on action perception. We presented two-year-olds with sentences comprising action verbs that either corresponded semantically to a subsequently observed action or not. To assess sensorimotor activity, we measured the suppression of the mu and beta rhythms by means of electroencephalography (EEG). The results were mixed. On the one hand, semantic congruence did not affect mu suppression during action perception in toddlers who had all the action verbs in their expressive vocabulary. On the other hand, the group of toddlers who did not have all the action verbs in their expressive vocabulary did show a difference in mu suppression. In contrast to the mu band, the beta band revealed a power difference during action perception for toddlers who had all the action verbs in their expressive vocabulary, but not for the other group.

