repetition detection
Recently Published Documents


TOTAL DOCUMENTS: 33 (five years: 3)
H-INDEX: 5 (five years: 0)

2021
Author(s): Trevor Agus, Daniel Pressnitzer

Stochastic sounds are useful to probe auditory memory, as they require listeners to learn unpredictable and novel patterns under controlled experimental conditions. Previous studies using white noise or random click trains have demonstrated rapid auditory learning for instances of such a class of sounds. Here, we tested stochastic sounds that enabled parametric control of spectrotemporal complexity: tone clouds. Tone clouds were defined as broadband combinations of tone pips at randomized frequencies and onset times. Varying the density of tones covered a perceptual range from random melodies to noise. Results showed that listeners could detect repeating patterns in tone clouds at all tested densities, with sparse tone clouds being the easiest. A model estimating amplitude modulation within cochlear filters showed that repetition detection was correlated with the amount of amplitude modulation at lower rates. Rapid learning of individual tone clouds was also observed, again for all densities. Tone clouds thus provide a tool to probe auditory learning in a variety of task-difficulty settings, which could be useful for clinical or neurophysiological studies. They also show that rapid auditory learning operates over the full range of spectrotemporal complexity typical of natural sounds, essentially from melodies to noise.
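The tone-cloud stimulus described above — tone pips at random frequencies and onset times, with density as the control parameter — can be sketched in a few lines. The pip duration, frequency range, and sampling rate below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def tone_cloud(n_pips, duration=1.0, fs=16000, fmin=300.0, fmax=8000.0,
               pip_dur=0.05, rng=None):
    """Broadband combination of tone pips at random frequencies and onsets.

    Varying n_pips spans a range from sparse, melody-like sequences to
    noise-like textures. All parameter values are illustrative only.
    """
    rng = np.random.default_rng(rng)
    n = int(duration * fs)
    pip_n = int(pip_dur * fs)
    t = np.arange(pip_n) / fs
    env = np.hanning(pip_n)          # smooth onset/offset, avoids clicks
    cloud = np.zeros(n)
    for _ in range(n_pips):
        f = np.exp(rng.uniform(np.log(fmin), np.log(fmax)))  # log-uniform frequency
        onset = rng.integers(0, n - pip_n)                   # uniform onset time
        cloud[onset:onset + pip_n] += env * np.sin(2 * np.pi * f * t)
    return cloud / max(1, n_pips)    # crude normalization, keeps |cloud| <= 1
```

A repeating stimulus would simply concatenate the same cloud twice; repetition detection then amounts to noticing the recurrence of the pattern across the seam.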


Author(s): Yen Na Yum, Sam-Po Law

Abstract The literature has mixed reports on whether the N170, an early visual ERP response to words, signifies orthographic and/or phonological processing, and whether these effects are moderated by script and language expertise. In this study, native Chinese readers, Japanese–Chinese, and Korean–Chinese bilingual readers performed a one-back repetition detection task with single Chinese characters that differed in phonological regularity status. Results using linear mixed effects models showed that Korean–Chinese readers had a bilateral N170 response, while the native Chinese and Japanese–Chinese groups had a left-lateralized N170, with stronger left lateralization in native Chinese than Japanese–Chinese readers. Additionally, across groups, irregular characters elicited a bilateral increase in N170 amplitudes compared to regular characters. These results suggest that visual familiarity with a script, rather than orthography–phonology mapping, determined the left lateralization of the N170 response, while there was automatic access to sublexical phonology in the N170 time window in native and non-native readers alike.


Author(s): R. Grompone von Gioi, C. Hessel, T. Dagobert, J. M. Morel, C. de Franchis

Abstract. Assessing ground visibility is a crucial step in automatic satellite image analysis. Nevertheless, several recent Earth observation satellite constellations lack specially designed spectral bands and use a frame camera, precluding spectrum-based and parallax-based cloud detection methods. An alternative approach is to detect the parts of each image where the ground is visible. This can be done by locally comparing pairs of registered images in a temporal series: matching regions are necessarily cloud-free. Indeed, the ground has persistent patterns that can be observed repetitively in the time series, while the appearance of clouds changes at each date. To reliably detect the “visible” ground, we propose here an a contrario local image matching method coupled with an efficient greedy algorithm.
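As a rough illustration of the matching idea only — not the authors' a contrario formulation, which controls false detections statistically, nor their greedy grouping — a window can be declared ground-visible when two registered acquisitions agree closely there. The normalized cross-correlation measure, window size, and threshold below are arbitrary stand-ins.

```python
import numpy as np

def visible_ground_mask(img_a, img_b, win=8, thresh=0.9):
    """Mark windows where two registered images of the same scene agree.

    Agreeing regions are taken as cloud-free, since a persistent ground
    pattern matches across dates while cloud appearance does not.
    Simplified stand-in for an a contrario matching test.
    """
    h, w = img_a.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            a = img_a[y:y + win, x:x + win].ravel().astype(float)
            b = img_b[y:y + win, x:x + win].ravel().astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            ncc = (a @ b) / denom if denom > 0 else 0.0  # normalized correlation
            if ncc >= thresh:
                mask[y:y + win, x:x + win] = True
    return mask
```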


IEEE Access, 2019, Vol 7, pp. 150072-150081
Author(s): Wen Hao, Wei Liang, Yinghui Wang, Minghua Zhao, Ye Li

2018
Author(s): Samuel A. Nastase, Yaroslav O. Halchenko, Andrew C. Connolly, M. Ida Gobbini, James V. Haxby

Neuroimaging studies of object and action representation often use controlled stimuli and implicitly assume that the relevant neural representational spaces are fixed and context-invariant. Here we present functional MRI data measured while participants freely viewed brief naturalistic video clips of behaving animals in two different task contexts. Participants performed a 1-back category repetition detection task requiring them to attend to either animal taxonomy or animal behavior. The data and analysis code are freely available, and have been curated according to the Brain Imaging Data Structure (BIDS) standard. We thoroughly describe the data, provide quality control metrics, and perform a searchlight classification analysis to demonstrate the potential utility of the data. These data are intended to provide a test bed for investigating how task demands alter the neural representation of complex stimuli and their semantic qualities.
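The 1-back category repetition detection task used here can be scored with a one-line rule: respond whenever the current clip's attended category (taxonomy or behavior) repeats the immediately preceding one. A minimal sketch, assuming the stimuli reduce to a flat list of category labels:

```python
def one_back_targets(categories):
    """Indices where the category repeats the immediately preceding item.

    These are the positions at which a participant in a 1-back repetition
    detection task should respond.
    """
    return [i for i in range(1, len(categories))
            if categories[i] == categories[i - 1]]
```

Under the two task contexts, the same clip sequence yields different target indices depending on whether the labels passed in are taxonomic categories or behavior categories.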


2018, Vol 273, pp. 435-447
Author(s): Hongfei Xiao, Gaofeng Meng, Lingfeng Wang, Chunhong Pan

2017, Vol 42 (3), pp. 333-341
Author(s): Silvia Brem, Eliane Hunkeler, Markus Mächler, Jens Kronschnabel, Iliana Irini Karipidis, ...

Neural tuning to print develops when children learn to read and is reflected by a more pronounced left occipito-temporal negativity to orthographic stimuli, as compared to non-orthographic false fonts or symbols, at around 150–250 ms in the N1, a visual event-related potential (ERP). In adults, initial expertise for a novel script can emerge in less than 2 hours through repeated exposure or training. Here, we aimed to assess changes in the visual N1 related to the process of learning associations between unknown written characters and familiar, spoken syllables or words. Thirty-two healthy literate adults learned to associate a set of foreign characters with either syllables or German words within a single experimental session. EEG was recorded during a visual one-back character repetition detection task in which trained characters, untrained characters, and familiar letters were presented before and after the training. A bilateral occipito-temporal increase in N1 negativity with training was found only for the newly learned characters, not for the control characters. In conclusion, the present data indicate that expertise for novel characters can be induced by a short character–sound association training and is reflected by a bilateral modulation of the visual N1 amplitude. However, no difference was found between word and syllable training, indicating that the visual N1 most likely reflects gaining expertise driven by phonological associations common to both training types.


2017
Author(s): Melissa Le-Hoa Võ, Zoya Bylinskii, Aude Oliva

ABSTRACT Some images stick in your mind for days or even years, while others seem to vanish quickly from memory. We tested whether differences in image memorability 1) are already evident immediately after encoding, 2) produce different rates of forgetting, and 3) whether the retrieval of images with different memorability scores affords differential degrees of cognitive load, mirrored by graded pupillary and blink rate responses. We monitored participants’ eye activity while they viewed a sequence of >1200 images in a repetition detection paradigm. 240 target images from 3 non-overlapping memorability classes were repeated at 4 different lags. Overall, performance decreased log-linearly with time. Differences in memorability were already visible at the shortest lag (~20 s) and became more pronounced as time passed. The rate of forgetting was steepest for low-memorability images. We found that pupils dilated significantly more to correctly identified targets than to correctly rejected distractors. Importantly, this “pupil old/new effect” increased with increasing lag and decreasing image memorability. A similar modulation of blink rates corroborated these findings. These results suggest that during memory retrieval of scenes, image-inherent characteristics pose differential degrees of cognitive load on observers, as seen in their eyes.
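The lagged-repetition design above — each target shown once and then repeated a fixed number of items later — can be sketched as follows. This is an illustrative construction, not the authors' exact trial-sequencing procedure; the function name and filler handling are assumptions.

```python
import random

def build_sequence(targets, fillers, lags, rng=None):
    """Insert each target twice into a shuffled filler stream.

    The second presentation appears exactly `lag` items after the first,
    so repetition detection performance can be measured per lag.
    Illustrative sketch only; handles each (target, lag) pair in turn.
    """
    rng = random.Random(rng)
    seq = list(fillers)
    rng.shuffle(seq)
    for img, lag in zip(targets, lags):
        first = rng.randrange(0, len(seq) - lag)  # leave room for the repeat
        seq.insert(first, img)                    # first presentation
        seq.insert(first + lag, img)              # repeat, `lag` items later
    return seq
```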

