Trained-feature specific offline learning in an orientation detection task

2018 ◽  
Author(s):  
Masako Tamaki ◽  
Zhiyan Wang ◽  
Takeo Watanabe ◽  
Yuka Sasaki

Abstract: It has been suggested that sleep provides additional enhancement of visual perceptual learning (VPL) acquired before sleep, an effect termed offline performance gains. The majority of studies that found offline performance gains of VPL used discrimination tasks, including the texture discrimination task (TDT). It is therefore unclear whether offline performance gains of VPL generalize to other visual tasks. The present study examined whether a Gabor orientation detection task, a standard task in VPL, shows offline performance gains. In Experiment 1, we investigated whether sleep leads to offline performance gains on the task. Subjects were trained with the Gabor orientation detection task and re-tested after a 12-hr interval that included either nightly sleep or only wakefulness. Performance on the task improved to a significantly greater degree after the interval that included sleep than after the interval of wakefulness alone. In addition, offline performance gains were specific to the trained orientation. In Experiment 2, we tested whether offline performance gains occur after a nap, and whether spontaneous sigma activity in early visual areas during non-rapid eye movement (NREM) sleep, previously implicated in offline performance gains of TDT, was associated with offline performance gains on the task. A different group of subjects took a nap with polysomnography; they were trained with the task before the nap and re-tested after it. Performance on the task improved significantly after the nap only for the trained orientation. Sigma activity in the trained region of early visual areas during NREM sleep was significantly larger than in the untrained region and correlated with offline performance gains. These features were also found with VPL of TDT.
The results of the present study demonstrate that offline performance gains are not specific to a discrimination task such as TDT and generalize to other forms of VPL tasks, along with trained-feature specificity. Moreover, the present results suggest that sigma activity in the trained region of early visual areas plays an important role in offline performance gains of VPL for detection as well as discrimination tasks.
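Sigma activity of the kind reported in these studies is conventionally quantified as EEG power in the sleep-spindle frequency range (roughly 12-15 Hz) during NREM sleep. As a rough, hypothetical sketch only (not the authors' actual analysis pipeline; the band edges, sampling rate, and toy signals below are chosen purely for illustration), sigma-band power can be estimated from an EEG trace with Welch's method:

```python
import numpy as np
from scipy.signal import welch

def sigma_band_power(eeg, fs, band=(12.0, 15.0)):
    """Estimate power in the sigma (spindle) band from a 1-D EEG trace.

    Computes the power spectral density via Welch's method and sums
    the PSD bins falling inside `band`, scaled by the bin width.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 4 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    bin_width = freqs[1] - freqs[0]
    return psd[mask].sum() * bin_width

# Toy check: a 13 Hz sinusoid (spindle-like) carries far more sigma-band
# power than a 5 Hz sinusoid (theta-like) of the same amplitude.
fs = 200                       # illustrative sampling rate in Hz
t = np.arange(0, 30, 1 / fs)   # 30 s of "signal"
spindle_like = np.sin(2 * np.pi * 13 * t)
theta_like = np.sin(2 * np.pi * 5 * t)
print(sigma_band_power(spindle_like, fs) > sigma_band_power(theta_like, fs))
```

In practice such power estimates would be computed per sleep epoch after artifact rejection and source localization, which this sketch deliberately omits.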

2019 ◽  
Author(s):  
Masako Tamaki ◽  
Aaron V. Berard ◽  
Tyler Barnes-Diana ◽  
Jesse Siegel ◽  
Takeo Watanabe ◽  
...  

Abstract: A growing body of evidence indicates that visual perceptual learning (VPL) is enhanced by reward provided during training. Another line of studies has shown that sleep following training also facilitates VPL, an effect known as the offline performance gain of VPL. However, whether the effects of reward and sleep interact in VPL remains unclear. Here, we show that reward interacts with sleep to facilitate offline performance gains of VPL. First, we demonstrated a significantly larger offline performance gain over a 12-h interval including sleep in a reward group than in a no-reward group. However, offline performance gains over a 12-h interval without sleep did not differ significantly with or without reward during training, indicating a crucial interaction between reward and sleep in VPL. Next, we tested whether neural activations during posttraining sleep were modulated when reward had been provided during training. Reward provided during training lengthened REM sleep, increased oscillatory activity for reward processing in the prefrontal region during REM sleep, and inhibited neural activation in the untrained region of early visual areas during NREM and REM sleep. Offline performance gains were significantly correlated with oscillatory activity of visual processing during NREM sleep and of reward processing during REM sleep in the reward group but not in the no-reward group. These results suggest that reward provided during training becomes effective during sleep, with excited reward processing sending inhibitory signals that suppress noise in visual processing, resulting in larger offline performance gains over sleep.
Significance statement: Independent lines of research have shown that visual perceptual learning (VPL) is improved by reward or by sleep. Here, we show that reward provided during training increased offline performance gains of VPL over sleep. Moreover, during posttraining sleep, reward was associated with longer REM sleep, increased activity in reward processing in the prefrontal region during REM sleep, and decreased activity in the untrained region of early visual areas during NREM and REM sleep. Offline performance gains were correlated with modulated oscillatory activity in reward processing during REM sleep and in visual processing during NREM sleep. These results suggest that reward provided during training becomes effective for VPL through the interaction between reward and visual processing during sleep after training.


2020 ◽  
Author(s):  
Masako Tamaki ◽  
Yuka Sasaki

Summary: Are the sleep-dependent offline performance gains of visual perceptual learning (VPL) consistent with a use-dependent or a learning-dependent model? Here, we found that a use-dependent model is inconsistent with the offline performance gains in VPL. Of two training conditions with matched visual usage, one generated VPL (learning condition) while the other did not (interference condition). The use-dependent model predicts that slow-wave activity (SWA) during posttraining NREM sleep in the trained region increases in both conditions, in correlation with offline performance gains. However, compared with the interference condition, sigma activity (not SWA) during NREM sleep and theta activity during REM sleep, source-localized to the trained early visual areas, increased in the learning condition, and sigma activity correlated with offline performance gains. These significant differences in spontaneous activity between the conditions suggest a learning-dependent process during posttraining sleep underlying the offline performance gains in VPL.


2019 ◽  
Author(s):  
Masako Tamaki ◽  
Zhiyan Wang ◽  
Tyler Barnes-Diana ◽  
Aaron V. Berard ◽  
Edward Walsh ◽  
...  

Abstract: Sleep is beneficial for learning. However, whether NREM or REM sleep facilitates learning, whether the facilitation results from increases in plasticity or from stabilization, and whether it results from learning-specific processing are all controversial. Here, after training on a visual task, we measured the excitatory-to-inhibitory neurochemical (E/I) balance, an index of plasticity, in human visual areas for the first time while subjects slept. Offline performance gains of presleep learning were associated with an increase in E/I balance during NREM sleep, which also occurred without presleep training. In contrast, increased stabilization was associated with a decreased E/I balance during REM sleep, only after presleep training. These findings indicate that the above issues are not matters of controversy but reflect opposite neurochemical processes serving different roles in learning during different sleep stages: NREM sleep increases plasticity, leading to performance gains independently of learning, while REM sleep decreases plasticity to stabilize learning in a learning-specific manner.


2019 ◽  
Vol 19 (12) ◽  
pp. 12 ◽  
Author(s):  
Masako Tamaki ◽  
Zhiyan Wang ◽  
Takeo Watanabe ◽  
Yuka Sasaki

2021 ◽  
pp. 147715352110026
Author(s):  
Y Mao ◽  
S Fotios

Obstacle detection and facial emotion recognition are two critical visual tasks for pedestrians. In previous studies, the effect of changes in lighting was tested on each of these as an individual task, where the task to be performed next in a sequence was known. In natural situations, a pedestrian must attend to multiple tasks, perhaps simultaneously, or at least does not know which of several possible tasks will next require their attention. This multi-tasking might impair performance on any one task and affect the evaluation of optimal lighting conditions. In two experiments, obstacle detection and facial emotion recognition tasks were performed in parallel under different illuminances. Comparison of these results with previous studies, in which the same tasks were performed individually, suggests that multi-tasking impaired performance on the peripheral detection task but not on the on-axis facial emotion recognition task.


SLEEP ◽  
2017 ◽  
Vol 40 (suppl_1) ◽  
pp. A85-A85 ◽  
Author(s):  
M Tamaki ◽  
T Watanabe ◽  
Y Sasaki

2021 ◽  
Vol 15 ◽  
Author(s):  
Chi Zhang ◽  
Xiao-Han Duan ◽  
Lin-Yuan Wang ◽  
Yong-Li Li ◽  
Bin Yan ◽  
...  

Despite the remarkable similarities between convolutional neural networks (CNN) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as similar to their corresponding regular images but classify AI images into wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with network outputs in all intermediate processing layers, providing no neural foundation for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images can successfully generalize to the neural responses to AI images but not to AN images. These remarkable differences between the human brain and AlexNet in representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
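The representational-similarity comparison described above is commonly implemented by correlating the activation patterns evoked by pairs of images. The following is a toy sketch only, not the study's actual analysis: the synthetic "activation patterns" and the noise level are invented for illustration, with an AI-like pattern modeled as a small perturbation of the regular pattern and an AN-like pattern as unrelated noise.

```python
import numpy as np

def representational_similarity(pattern_a, pattern_b):
    """Pearson correlation between two flattened activation patterns."""
    a = np.ravel(pattern_a).astype(float)
    b = np.ravel(pattern_b).astype(float)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
regular = rng.normal(size=500)                    # pattern for a regular image
ai_like = regular + 0.3 * rng.normal(size=500)    # perturbed but similar pattern
an_like = rng.normal(size=500)                    # unrelated noise pattern

print(representational_similarity(regular, ai_like))  # close to 1
print(representational_similarity(regular, an_like))  # near 0
```

In an actual analysis the patterns would be voxel responses (or CNN layer activations) to each image, and the pairwise correlations would be assembled into a representational similarity matrix for comparison across systems.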


2021 ◽  
Author(s):  
Colin R McCormick ◽  
Ralph S. Redden ◽  
Raymond M Klein

Temporal attention is a cognitive mechanism that allows individuals to prepare to respond to an anticipated event. Lawrence and Klein (2013) distinguished two forms of temporal attention: one elicited by purely endogenous alerting mechanisms, and one elicited through exogenous alerting mechanisms. Recently, McCormick et al. showed that these mechanisms generate additive effects on reaction time; however, more informative comparisons of speed and accuracy were not possible because they were measured during a detection task. The current pair of experiments compares these two forms of temporal attention in a discrimination task while measuring both speed and accuracy, using methodological modifications that lower task demand. These manipulations were successful, as temporal cueing effects were observed both for the combined form and for the less-studied purely endogenous form. However, speed-accuracy performance for these two forms of temporal attention did not align with our predictions based on Lawrence and Klein (2013), leading us to speculate on the generalizability of their results.


2020 ◽  
Author(s):  
Zhiyan Wang ◽  
Masako Tamaki ◽  
Kazuhisa Shibata ◽  
Michael S. Worden ◽  
Takashi Yamada ◽  
...  

Abstract: While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. In a previous psychophysical study, subjects were repeatedly exposed to a task-irrelevant global motion display that induced the perception not only of the local motions but also of a global motion moving in the direction of the spatiotemporal average of the local motion vectors. As a result, subjects enhanced their sensitivity only to the local motion directions, suggesting that early visual areas (V1/V2), which process local motions, are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining the involvement of early visual areas (V1/V2). Here, we employed a decoded neurofeedback technique (DecNef) using functional magnetic resonance imaging. During the DecNef training, subjects were trained to induce activity patterns in V1/V2 similar to those evoked by the actual presentation of the global motion display. The DecNef training was conducted with neither the actual presentation of the display nor the subjects' awareness of the purpose of the experiment. As a result, subjects increased their sensitivity to the local motion directions but not specifically to the global motion direction. The training effect was strictly confined to V1/V2. Moreover, subjects reported that they neither perceived nor imagined any motion during the DecNef training. Together, these results suggest that V1/V2 are sufficient for exposure-based task-irrelevant VPL to occur unconsciously.
Significance statement: While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. Previous psychophysical experiments suggest that early visual areas (V1/V2) are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining the involvement of early visual areas (V1/V2). Here, using decoded fMRI neurofeedback, activity patterns similar to those evoked by the presentation of a complex motion display were repeatedly induced only in early visual areas. The training sensitized only the local motion directions and not the global motion direction, suggesting that V1/V2 are involved in task-irrelevant VPL.

