Proprioception is subject-specific and improved without performance feedback

2019 ◽  
Author(s):  
Tianhe Wang ◽  
Ziyan Zhu ◽  
Inoue Kana ◽  
Yuanzheng Yu ◽  
Hao He ◽  
...  

Abstract Accumulating evidence indicates that the human proprioception map appears subject-specific. However, whether this idiosyncratic pattern persists across time with good within-subject consistency has not been quantitatively examined. Here we measured proprioception with a hand visual-matching task in multiple sessions over two days. We found that people improved their proprioception when tested repetitively without performance feedback. Importantly, despite the reduction of average error, the spatial pattern of proprioception errors remained idiosyncratic. Based on individuals' proprioceptive performance, a standard convolutional neural network classifier could identify people with good accuracy. We also found that subjects' baseline proprioceptive performance could not predict their motor performance in a visual trajectory-matching task, even though both tasks require accurate mapping of hand position to visual targets in the same workspace. In a separate experiment, we not only replicated these findings but also ruled out the possibility that performance feedback during a few familiarization trials caused the observed improvement in proprioception. We conclude that the conventional proprioception test itself, even without feedback, can improve proprioception while leaving the idiosyncrasy of proprioception unchanged.

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Tianhe Wang ◽  
Ziyan Zhu ◽  
Inoue Kana ◽  
Yuanzheng Yu ◽  
Hao He ◽  
...  

Abstract Accumulating evidence indicates that the spatial error of humans' hand localization appears subject-specific. However, whether this idiosyncratic pattern persists across time with good within-subject consistency has not been adequately examined. Here we measured the hand localization map with a visual-matching task in multiple sessions over 2 days. Interestingly, we found that participants improved their hand localization accuracy when tested repetitively without performance feedback. Importantly, despite the reduction of average error, the spatial pattern of hand localization errors remained idiosyncratic. Based on individuals' hand localization performance, a standard convolutional neural network classifier could identify participants with good accuracy. Moreover, we did not find supporting evidence that participants' baseline hand localization performance could predict their motor performance in a visual trajectory-matching task, even though both tasks require accurate mapping of hand position to visual targets in the same workspace. In a separate experiment, we not only replicated these findings but also ruled out the possibility that performance feedback during a few familiarization trials caused the observed improvement in hand localization. We conclude that the conventional hand localization test itself, even without feedback, can improve hand localization while leaving the idiosyncrasy of the hand localization map unchanged.
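The abstract above reports that a convolutional neural network classifier could identify participants from their hand-localization error maps. The sketch below is not the authors' model; it is a minimal toy illustration of the underlying idea, using synthetic per-subject error maps, a fixed mean-filter convolution standing in for learned convolutional features, and nearest-centroid identification. All names, grid sizes, and noise levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sessions, grid = 5, 6, 8

# Hypothetical data: each subject has a stable idiosyncratic spatial error
# map (their pattern of hand-localization errors) plus per-session noise.
subject_bias = rng.normal(0.0, 1.0, size=(n_subjects, grid, grid))
sessions = subject_bias[:, None] + rng.normal(
    0.0, 0.4, size=(n_subjects, n_sessions, grid, grid)
)

def conv_features(img, k=3):
    """Smooth the error map with a k-by-k mean filter and flatten it.

    A fixed convolution here stands in for a trained CNN's feature maps.
    """
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out.ravel()

# "Train" on the first half of each subject's sessions, test on the rest.
train = sessions[:, : n_sessions // 2]
test = sessions[:, n_sessions // 2:]
centroids = np.stack(
    [np.mean([conv_features(m) for m in subj], axis=0) for subj in train]
)

correct = total = 0
for true_id, subj in enumerate(test):
    for m in subj:
        f = conv_features(m)
        pred = int(np.argmin(np.linalg.norm(centroids - f, axis=1)))
        correct += pred == true_id
        total += 1

accuracy = correct / total
print(f"identification accuracy: {accuracy:.2f}")
```

Because the simulated idiosyncratic bias is large relative to session noise, identification is well above the 1-in-5 chance level, which is the qualitative point the abstract makes: the spatial error pattern is stable enough to act as a subject fingerprint even as average error shrinks.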


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bo Dong ◽  
Airui Chen ◽  
Yuting Zhang ◽  
Yangyang Zhang ◽  
Ming Zhang ◽  
...  

Abstract Inaccurate egocentric distance and speed perception are two main explanations for the high accident rate associated with driving in foggy weather. The effect of foggy weather on speed perception has been well studied; its effect on egocentric distance perception is poorly understood. Previous studies measured perceived egocentric distance with verbal estimation rather than a nonverbal paradigm. In the current research, a nonverbal paradigm, the visual matching task, was used. Our results from the nonverbal task revealed a robust fog effect on egocentric distance: observers overestimated egocentric distance in foggy weather compared to clear weather, and the higher the concentration of fog, the greater the overestimation. This effect was not limited to a certain distance range but held in both action space and vista space. Our findings confirm the fog effect with a nonverbal paradigm and suggest that people may perceive egocentric distance more "accurately" in foggy weather than verbal estimation tasks indicate.


2019 ◽  
Vol 11 (4) ◽  
pp. 474-482
Author(s):  
Kristina Howansky ◽  
Analia Albuja ◽  
Shana Cole

In four studies, we explored perceptual representations of the gender-typicality of transgender individuals. In Studies 1a and 1b, participants (N = 237) created an avatar based on an image of an individual who disclosed being transgender or did not. Avatars generated in the transgender condition were less gender-typical (that is, transmen were less masculine and transwomen were less feminine) than those created in the control condition. In Study 2 (N = 368), using a unique visual matching task, participants represented a target labeled transgender as less gender-typical than the same target labeled cisgender. In Study 3 (N = 228), perceptual representations of transwomen as less gender-typical led to lower acceptability of feminine behavior and less endorsement that the target should be categorized as female. We discuss how biased perceptual representations may contribute to the stigmatization and marginalization of transgender individuals.


2019 ◽  
Vol 24 ◽  
pp. 102036 ◽  
Author(s):  
Mario Widmer ◽  
Kai Lutz ◽  
Andreas R. Luft

1981 ◽  
Vol 4 (1) ◽  
pp. 38-43 ◽  
Author(s):  
Jed P. Luchow ◽  
Margaret Jo Shepherd

The purpose of this study was to examine the effect of multisensory input on the performance of learning disabled boys on a visual matching task. A thirty-item multiple-choice visual dot-pattern matching task was given to 160 boys, ages 6 years through 8 years, 11 months, who were enrolled in special classes for children with learning problems. Of the four treatment groups (visual input only, visual plus tactile input, visual plus auditory input, visual plus auditory plus tactile input), the visual-only group differed significantly from the visual-auditory and visual-auditory-tactile groups (p < .05). The results suggest that on a perceptual task not related to reading or mathematics, the addition of input from tactile and auditory sensory modalities does not improve learning performance and, in certain combinations, actually interferes with it.


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Guilhem Ibos ◽  
David J Freedman

Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) with top-down cognitive inputs (what the monkeys were looking for).


1986 ◽  
Vol 38 (2) ◽  
pp. 229-247 ◽  
Author(s):  
Robert H. Logie

This paper reports four experiments designed to develop a simple technique for the study of visuo-spatial processing within the working memory framework (Baddeley and Hitch, 1974). Experiment 1 involved the matching of successively presented random matrix patterns, as a secondary visual suppression task. This was coupled with rote rehearsal or a visual imagery mnemonic for learning lists of concrete words presented auditorily. Although memory performance with matching dropped overall, the visual mnemonic was differentially affected. Experiment 2 removed the matching decision, with visual presentation of unattended patterns. There was no overall effect of the unattended material, but use of the visual mnemonic was significantly affected. Experiment 3 replicated this result with simpler plain coloured squares as the unattended material. In Experiment 4, for one group, the unattended material consisted of line drawings of common objects. For a second group, the lists of words for recall were presented visually, with or without unattended speech. The results suggested that unattended pictures disrupt use of a visual mnemonic, while unattended speech disrupts rote rehearsal. These results suggest that unattended visual material has privileged access to the mechanism(s) involved in short-term visuo-spatial processing and storage. They also suggest that use of a concurrent visual matching task or of unattended visual material may provide tractable techniques for investigating this aspect of cognitive function within the context of working memory.


1979 ◽  
Vol 135 (2) ◽  
pp. 165-174 ◽  
Author(s):  
Patricia A. Rupert ◽  
Raymond Baird
