Roles of consonant status and sonority in printed syllable processing: Evidence from illusory conjunction and audio-visual recognition tasks in French adults.

2008 ◽  
Author(s):  
Norbert Maïonchi-Pino ◽  
Bruno De Cara ◽  
Annie Magnan ◽  
Jean Ecalle


2005 ◽  
Vol 19 (3) ◽  
pp. 216-231 ◽  
Author(s):  
Albertus A. Wijers ◽  
Maarten A.S. Boksem

Abstract. We recorded event-related potentials (ERPs) in an illusory conjunction task in which subjects were cued on each trial to search for a particular colored letter in a subsequently presented test array of three different letters in three different colors. In a proportion of trials the target letter was present; in other trials none of the relevant features was present; in still other trials one of the features (color or letter identity) was present, or both features were present but not combined in the same display element. When relevant features were present, this resulted in an early posterior selection negativity (SN) and a frontal selection positivity (FSP). When a target was presented, the FSP was enhanced after 250 ms compared to when both relevant features were present but not combined in the same display element, suggesting that this effect reflects an extra process of attending to both features bound to the same object. There were no differences between the ERPs on feature-error and conjunction-error trials, contrary to the idea that these two types of errors are due to different (perceptual and attentional) mechanisms. The P300 on conjunction-error trials was much reduced relative to the P300 on correct target-detection trials. A similar, error-related-negativity-like component was visible in the response-locked averages on correct target-detection trials, feature-error trials, and conjunction-error trials. Dipole modeling of this component yielded a source in a deep medial-frontal location. These results suggest that this type of task induces a high level of response conflict, in which decision-related processes may play a major role.
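The four stimulus conditions described in this abstract (target present, conjunction, single feature, neither) can be sketched as a trial classifier; this is a hypothetical illustration of the design, not the authors' code, and all names are ours:

```python
def classify_trial(cue_letter, cue_color, array):
    """Classify a search trial by how the cued features appear in the
    test array, given as a list of (letter, color) elements.

    'target'      - cued letter and color combined in one element
    'conjunction' - both cued features present, but on different elements
    'feature'     - exactly one cued feature present somewhere
    'neither'     - no cued feature present
    """
    letters = {letter for letter, _ in array}
    colors = {color for _, color in array}
    if (cue_letter, cue_color) in array:
        return "target"
    if cue_letter in letters and cue_color in colors:
        return "conjunction"
    if cue_letter in letters or cue_color in colors:
        return "feature"
    return "neither"
```

A "conjunction error" in the abstract is then a target response on a trial this function labels `"conjunction"`.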


Author(s):  
Nicolas Poirel ◽  
Claire Sara Krakowski ◽  
Sabrina Sayah ◽  
Arlette Pineau ◽  
Olivier Houdé ◽  
...  

The visual environment consists of global structures (e.g., a forest) made up of local parts (e.g., trees). When compound stimuli are presented (e.g., large global letters composed of arrangements of small local letters), the unattended global information slows responses to local targets. Using a negative priming paradigm, we investigated whether inhibition is required to process hierarchical stimuli when information at the local level conflicts with information at the global level. The results show that when local and global information conflict, global information must be inhibited to process local information, but the reverse is not true. This finding has potential implications for brain models of visual recognition: it suggests that when local information conflicts with global information, inhibitory control reduces feedback activity from the global level (e.g., inhibits the forest), which allows the visual system to process local information (e.g., to focus attention on a particular tree).
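The negative priming logic in this paradigm can be sketched as follows; this is a hypothetical illustration of how prime-probe pairs are labeled for analysis, with all names our own:

```python
from dataclasses import dataclass

@dataclass
class CompoundStimulus:
    """A hierarchical (Navon-style) stimulus: a large global letter
    built from small local letters, e.g. a big 'H' made of small 'S's."""
    global_letter: str
    local_letter: str

def probe_condition(prime: CompoundStimulus, probe_target: str,
                    attended_level: str) -> str:
    """Label a prime-probe pair for a negative priming analysis.

    On the inhibition account, the letter at the unattended level of the
    prime is suppressed; if the probe target repeats that letter,
    responses should be slower ('ignored_repetition') than on unrelated
    'control' pairs.
    """
    ignored = (prime.global_letter if attended_level == "local"
               else prime.local_letter)
    return "ignored_repetition" if probe_target == ignored else "control"
```

The paper's asymmetry prediction is that the slowdown appears when the ignored level was global but not when it was local.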


Author(s):  
Dean E. Stolldorf ◽  
Gordon M. Redding ◽  
Leon M. Manelis

2015 ◽  
Author(s):  
Raghuraman Gopalan ◽  
Ruonan Li ◽  
Vishal M. Patel ◽  
Rama Chellappa

2020 ◽  
Author(s):  
Bahareh Jozranjbar ◽  
Arni Kristjansson ◽  
Heida Maria Sigurdardottir

While dyslexia is typically described as a phonological deficit, recent evidence suggests that ventral stream regions, important for visual categorization and object recognition, are hypoactive in dyslexic readers, who might accordingly show visual recognition deficits. By manipulating featural and configural information in faces and houses, we investigated whether dyslexic readers are disadvantaged at recognizing certain object classes or at utilizing particular visual processing mechanisms. Dyslexic readers found it harder to recognize houses, suggesting that visual problems in dyslexia are not completely domain-specific. Mean accuracy for faces was equivalent in the two groups, compatible with domain-specificity in face processing. Although face recognition ability correlated with reading ability, lower house accuracy was related to reading difficulties even when accuracy for faces was held constant, suggesting a specific relationship between visual word recognition and the recognition of non-face objects. Representational similarity analyses (RSA) revealed that featural and configural processes were clearly separable in typical readers, while dyslexic readers appeared to rely on a single process. This occurred for both faces and houses, so it was not restricted to particular visual categories. We speculate that reading deficits in some dyslexic readers reflect their reliance on a single process for object recognition.
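The core computation behind representational similarity analysis (RSA) can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from response patterns, then compare RDMs across conditions or groups. This is a minimal NumPy sketch of the general technique, not the authors' analysis pipeline:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each pair of conditions.
    patterns: (n_conditions, n_features) array."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (rank-transform, then Pearson; assumes no tied values)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    rank_a = np.argsort(np.argsort(a)).astype(float)
    rank_b = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(rank_a, rank_b)[0, 1])
```

Separable featural and configural processes would show up as RDMs that correlate within a process but not between processes; reliance on a single process would yield one shared similarity structure.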


Author(s):  
Li Liu ◽  
Thomas Hueber ◽  
Gang Feng ◽  
Denis Beautemps

2019 ◽  
Vol 33 (3) ◽  
pp. 89-109 ◽  
Author(s):  
Ting (Sophia) Sun

SYNOPSIS This paper aims to promote the application of deep learning to audit procedures by illustrating how the capabilities of deep learning for text understanding, speech recognition, visual recognition, and structured data analysis fit into the audit environment. Based on these four capabilities, deep learning serves two major functions in supporting audit decision making: information identification and judgment support. The paper proposes a framework for applying these two deep learning functions to a variety of audit procedures in different audit phases. An audit data warehouse of historical data can be used to construct prediction models, providing suggested actions for various audit procedures. The data warehouse is updated and enriched with new data instances through the application of deep learning and a human auditor's corrections. Finally, the paper discusses the challenges faced by the accounting profession, regulators, and educators in applying deep learning.
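The update loop the synopsis describes (model suggests, auditor corrects, warehouse is enriched) can be sketched as follows; this is a toy illustration under our own assumptions, with a trivial threshold rule standing in for a trained deep learning model:

```python
class AuditDataWarehouse:
    """Hypothetical sketch of the paper's loop: the model suggests an
    action for each audit item, a human auditor reviews it, and the
    corrected instance enriches the warehouse for future suggestions."""

    def __init__(self):
        self.history = []  # list of (item, auditor_label) instances

    def suggest(self, item):
        # Toy stand-in for a prediction model: flag transactions whose
        # amount deviates strongly from the historical mean.
        amounts = [i["amount"] for i, _ in self.history]
        if not amounts:
            return "investigate"  # no history yet: err on the side of review
        mean = sum(amounts) / len(amounts)
        return "investigate" if item["amount"] > 2 * mean else "accept"

    def record_review(self, item, auditor_label):
        # The auditor's correction becomes a new training instance.
        self.history.append((item, auditor_label))
```

In the paper's framework the suggestion step would be a deep learning model retrained as the warehouse grows; the loop structure is the same.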

