perceptual feature
Recently Published Documents

TOTAL DOCUMENTS: 36 (last five years: 0)
H-INDEX: 8 (last five years: 0)

2020 · Vol 177 · pp. 97-108
Author(s): Inci Ayhan, Melisa Kurtcan, Lucas Thorpe

2020 · Vol 11 (1)
Author(s): Ronan Chéreau, Tanika Bawa, Leon Fodoulian, Alan Carleton, Stéphane Pagès, ...

2020 · Vol 8 (1) · pp. 59-74
Author(s): Indah Lestari

This is a qualitative study of the meanings represented by the phonemes contained in English onomatopoeic words. An onomatopoeia is a word that imitates the sounds of humans, animals, things, actions, and nature. Onomatopoeia appears in many reading materials, such as comics, fables, tales, and poetry. This research focuses on the onomatopoeic words contained in the Oxford English Dictionary, because that dictionary is regularly updated. Of the two kinds of phonemes, consonants and vowels, the investigation is limited to English front vowels. Based on their articulatory features, the English front vowels are the front high tense unrounded vowel /i/, the front high lax unrounded vowel /ɪ/, the front mid tense unrounded vowel /e/, the front mid lax unrounded vowel /ɛ/, and the front low lax unrounded vowel /æ/. The approach used in this research is sound symbolism, the study of the relation between sound and meaning. Specifically, the research applies the low-level-properties mechanism of sound symbolism, which associates a sound with a meaning on the basis of a perceptual feature shared by the phoneme and the associated stimuli. Using this mechanism, the researcher explores the characteristics of the front vowels in English onomatopoeic words that represent sounds produced by humans, animals, nature, machines, and other things. The results indicate that the higher the vowel, the more diminutive the meaning it conveys, while the lower the vowel, the more augmentative the meaning.
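The vowel-height/meaning association the abstract reports can be expressed as a simple lookup. This is an illustrative sketch only, not material from the study; the function name and the "intermediate" label for mid vowels are assumptions (the abstract only states the tendencies for higher and lower vowels).

```python
# English front vowels as described in the abstract, ordered high -> low.
# Each value is the abstract's articulatory description of the vowel.
FRONT_VOWELS = {
    "i": "front high tense unrounded",
    "ɪ": "front high lax unrounded",
    "e": "front mid tense unrounded",
    "ɛ": "front mid lax unrounded",
    "æ": "front low lax unrounded",
}

def symbolic_tendency(vowel: str) -> str:
    """Return the sound-symbolic tendency claimed in the abstract:
    higher vowels -> diminutive meaning, lower vowels -> augmentative.
    The "intermediate" label for mid vowels is an assumption, since the
    abstract does not state a tendency for them."""
    height = FRONT_VOWELS[vowel].split()[1]  # "high", "mid", or "low"
    return {
        "high": "diminutive",
        "mid": "intermediate",
        "low": "augmentative",
    }[height]

print(symbolic_tendency("i"))  # diminutive
print(symbolic_tendency("æ"))  # augmentative
```

For example, the abstract's claim predicts that /i/ (as in "beep" or "squeak") tends toward small, light referents, while /æ/ (as in "smack" or "crash") tends toward large, forceful ones.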


2019 · Vol 72 (9) · pp. 2141-2154
Author(s): Dennis Redlich, Robert Schnuerch, Daniel Memmert, Carina Kreitz

Conscious perception often fails when an object appears unexpectedly while our attention is focused elsewhere (inattentional blindness). Although various factors have been identified that modulate the likelihood of this failure of awareness, it is not clear whether the monetary reward value associated with an object affects whether that object is detected under conditions of inattention. We hypothesised that unexpectedly appearing objects containing a feature linked to high value, as established via reward learning in a previous task, would subsequently be detected more frequently than objects containing a feature linked to low value. A total of 537 participants first learned the association between a perceptual feature (colour) and a subsequent reward value (high, low, or no reward). Participants were then randomly assigned to a static (Experiment 1) or dynamic (Experiment 2) inattentional blindness task that included an unexpected object associated with high, low, or no reward. However, no significant effect of the previously learned value on the subsequent likelihood of detection was observed. We speculate that artificial monetary value, although known to affect attentional capture, is not strong enough to determine whether or not an object is consciously perceived.


2016 · Vol 115 (3) · pp. 1654-1663
Author(s): Neeraj Kumar, Pratik K. Mutha

The prediction of the sensory outcomes of action is thought to be useful for distinguishing self-generated from externally generated sensations, for correcting movements when sensory feedback is delayed, and for learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes, and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present whenever an action is performed and to examine separately the influence of these two sources on perceptual feature extraction. We then show that when the new predictions induced via motor learning are unreliable, subjects do not simply rely on sensory information for their perceptual judgments, as is conventionally thought; instead, they adaptively transition to other stable sensory predictions to maintain greater accuracy in those judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.

