Trained-feature–specific offline learning by sleep in an orientation detection task

2019 ◽  
Vol 19 (12) ◽  
pp. 12 ◽  
Author(s):  
Masako Tamaki ◽  
Zhiyan Wang ◽  
Takeo Watanabe ◽  
Yuka Sasaki

Abstract. It has been suggested that sleep provides additional enhancement of visual perceptual learning (VPL) acquired before sleep, termed offline performance gains. The majority of studies that found offline performance gains of VPL used discrimination tasks, including the texture discrimination task (TDT). This raises the question of whether offline performance gains of VPL generalize to other visual tasks. The present study examined whether a Gabor orientation detection task, a standard task in VPL, shows offline performance gains. In Experiment 1, we investigated whether sleep leads to offline performance gains on the task. Subjects were trained with the Gabor orientation detection task and re-tested after a 12-hr interval that included either nightly sleep or only wakefulness. Performance on the task improved to a significantly greater degree after the interval that included sleep than after the interval of wakefulness alone. In addition, offline performance gains were specific to the trained orientation. In Experiment 2, we tested whether offline performance gains occur after a nap, and whether spontaneous sigma activity in early visual areas during non-rapid eye movement (NREM) sleep, previously implicated in offline performance gains of TDT, was associated with offline performance gains on the task. A different group of subjects took a nap while polysomnography was recorded. The subjects were trained with the task before the nap and re-tested after it. Performance improved significantly after the nap, and only for the trained orientation. Sigma activity in the trained region of early visual areas during NREM sleep was significantly larger than in the untrained region and correlated with offline performance gains. The same pattern was previously found for VPL of the TDT.
The results of the present study demonstrate that offline performance gains are not specific to discrimination tasks such as the TDT and generalize, along with trained-feature specificity, to other forms of VPL. Moreover, the present results suggest that sigma activity in the trained region of early visual areas plays an important role in offline performance gains of VPL for detection as well as discrimination tasks.
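The Gabor orientation detection task described above presents an oriented sinusoidal grating windowed by a Gaussian envelope. A minimal sketch of generating such a patch is shown below; all parameter values (size, wavelength, orientation, envelope width) are illustrative assumptions, not values taken from the study.

```python
import math

def gabor(size=64, wavelength=8.0, theta_deg=45.0, sigma=10.0, phase=0.0):
    """Luminance values of a Gabor patch: a sinusoidal grating at
    orientation theta_deg, windowed by a circular Gaussian envelope."""
    theta = math.radians(theta_deg)
    half = size / 2.0
    patch = []
    for y in range(size):
        row = []
        for x in range(size):
            dx, dy = x - half, y - half
            # Rotate coordinates so the grating runs along theta.
            xr = dx * math.cos(theta) + dy * math.sin(theta)
            envelope = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            carrier = math.cos(2.0 * math.pi * xr / wavelength + phase)
            row.append(envelope * carrier)
        patch.append(row)
    return patch

patch = gabor()
# Values lie in [-1, 1]; the envelope drives the edges toward 0 (background),
# so contrast is concentrated at the center of the patch.
```

Varying `theta_deg` between trained and untrained values is how orientation specificity, as reported in the abstract, would be probed.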


PLoS ONE ◽  
2018 ◽  
Vol 13 (7) ◽  
pp. e0201194 ◽  
Author(s):  
Antonio Prieto ◽  
Julia Mayas ◽  
Soledad Ballesteros

2013 ◽  
Vol 13 (9) ◽  
pp. 1258-1258
Author(s):  
T. Busigny ◽  
J. J. Barton ◽  
L. Lanyon ◽  
B. Rossion

2006 ◽  
Vol 27 (4) ◽  
pp. 218-228 ◽  
Author(s):  
Paul Rodway ◽  
Karen Gillies ◽  
Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and were then presented with changed or unchanged versions of those pictures and asked to detect whether each picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that retrieving the picture's representation immunizes it against overwriting by the arrival of the changed picture. High- and low-vividness participants did not differ in overall levels of change detection accuracy. However, in replication of Gur and Hilgard (1975), high-vividness participants were significantly more accurate than low-vividness participants at detecting salient changes to pictures. The results suggest that vivid images are not characterised by a high level of detail and that vivid imagery enhances memory for the salient aspects of a scene but not for all of its details. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.
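Accuracy in a yes/no change-detection task of this kind is commonly scored with the signal-detection sensitivity measure d′. The sketch below is illustrative only: the trial counts are hypothetical and the log-linear correction is a standard convention, not necessarily the scoring used in this study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for a yes/no change-detection task.
    The log-linear correction (+0.5 / +1.0) keeps perfect hit or
    false-alarm rates from producing infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one observer on salient-change trials.
sensitivity = d_prime(hits=18, misses=2, false_alarms=3, correct_rejections=17)
```

Comparing d′ between high- and low-vividness groups, separately for salient and non-salient changes, would express the group difference reported above in sensitivity terms.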


Author(s):  
Ana Franco ◽  
Julia Eberlen ◽  
Arnaud Destrebecqz ◽  
Axel Cleeremans ◽  
Julie Bertels

Abstract. The Rapid Serial Visual Presentation (RSVP) procedure is widely used in visual perception research. In this paper we propose an adaptation of this method that can be used with auditory material and enables the assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently instructed to detect a target syllable in a Rapid Serial Auditory Presentation (RSAP) stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: the second and third syllables of each word were responded to faster than the first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.
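The predictability effect reported above amounts to comparing mean reaction times across within-word syllable positions. A minimal sketch of that comparison follows; the trial data are hypothetical, constructed only to show the predicted pattern.

```python
from collections import defaultdict

def mean_rt_by_position(trials):
    """Average detection reaction time (ms) per within-word syllable
    position (1 = word onset; 2 and 3 = statistically predictable)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for position, rt in trials:
        sums[position] += rt
        counts[position] += 1
    return {p: sums[p] / counts[p] for p in sums}

# Hypothetical (position, RT-in-ms) trials showing the predicted pattern:
# predictable second/third syllables are detected faster than first ones.
trials = [(1, 520), (1, 540), (2, 470), (2, 455), (3, 440), (3, 450)]
means = mean_rt_by_position(trials)
print(means[1] > means[2] > means[3])  # → True
```

In an actual analysis each trial's position label would come from the statistically defined word structure of the speech stream, and the position means would feed a within-subjects test.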


Author(s):  
Mitchell R. P. LaPointe ◽  
Rachael Cullen ◽  
Bianca Baltaretu ◽  
Melissa Campos ◽  
Natalie Michalski ◽  
...  

2006 ◽  
Author(s):  
Mary T. Dzindolet ◽  
Linda G. Pierce ◽  
Hall P. Beck
