Sensory feedback and coordinating asymmetrical landing in toads

2016 ◽  
Vol 12 (6) ◽  
pp. 20160196 ◽  
Author(s):  
S. M. Cox ◽  
Gary B. Gillis

Coordinated landing requires anticipating the timing and magnitude of impact, which in turn requires sensory input. To better understand how cane toads, well known for coordinated landing, prioritize visual versus vestibular feedback during hopping, we recorded forelimb joint angle patterns and electromyographic data from five animals hopping under two conditions that were designed to force animals to land with one forelimb well before the other. In one condition, landing asymmetry was due to mid-air rolling, created by an unstable takeoff surface. In this condition, visual, vestibular and proprioceptive information could be used to predict asymmetric landing. In the other, animals took off normally, but landed asymmetrically because of a sloped landing surface. In this condition, sensory feedback provided conflicting information, and only visual feedback could appropriately predict the asymmetrical landing. During the roll treatment, when all sensory feedback could be used to predict an asymmetrical landing, pre-landing forelimb muscle activity and movement began earlier in the limb that landed first. However, no such asymmetries in forelimb preparation were apparent during hops onto sloped landings when only visual information could be used to predict landing asymmetry. These data suggest that toads prioritize vestibular or proprioceptive information over visual feedback to coordinate landing.

2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality according to which different sources of sensory feedback are combined such as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested if visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed as compared with the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained based on differences in uncertainty about the movement goal. We conclude that the role played by visual feedback for movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control that are based entirely on data derived from target-directed paradigms have to be modified to accommodate performance in the allocentric tasks used in our experiments. 


Perception ◽  
2005 ◽  
Vol 34 (9) ◽  
pp. 1153-1155 ◽  
Author(s):  
Eric Lewin Altschuler

I have noticed a striking effect that vision can have on movement. When a person makes circular motions with both hands, clockwise with the left hand and counterclockwise with the right, while watching the reflection of one hand in a parasagittally placed mirror, a vertical excursion of one arm tends to be accompanied by the same vertical excursion of the other arm; this coupling does not typically occur when the excursing arm is viewed in plain vision. This observation may help in understanding how visual feedback via a mirror may be beneficial for rehabilitation of some patients with movement deficits secondary to certain neurologic conditions, and it illustrates that the traditional division of neural processes into sensory input and motor output is somewhat arbitrary.


1977 ◽  
Vol 29 (2) ◽  
pp. 237-244 ◽  
Author(s):  
Jennifer A. Mather ◽  
James R. Lackner

The relative contributions of proprioceptive and efferent information in eliciting adaptation to visual rearrangement were studied under two conditions of visual stimulation. Subjects permitted sight of their forearm under normal room illumination showed significant adaptation when the forearm was (a) moved up and down under the action of tonic vibration reflexes, (b) voluntarily moved through the same trajectory at the same pace, (c) viewed while still, and (d) viewed while the margins of the elbow were vibrated. The reflex movement condition elicited significantly greater adaptation than the other conditions. Subjects allowed only sight of a point source of light attached to their hand showed significant adaptation when the forearm was (a) reflexly moved, (b) voluntarily moved through the same trajectory at the same rate, (c) passively moved, (d) still, and (e) vibrated while still. Less adaptation occurred as the amount of proprioceptive information about limb position was decreased. The adaptation elicited by voluntary movements of the forearm and by reflex movements did not differ significantly. It is concluded that corollary-discharge signals may not be crucial in adaptation to visual rearrangement; a more important factor appears to be discordance between proprioceptive and visual information.


Perception ◽  
2018 ◽  
Vol 47 (8) ◽  
pp. 860-872 ◽  
Author(s):  
Mounia Ziat ◽  
Min Park ◽  
Brian Kakas ◽  
David A. Rosenbaum

Although people have made clay pots for millennia, little behavioral research has explored how they do so. We were specifically interested in potters’ use of auditory, haptic, and visual feedback. We asked what would happen if one or two of these sources of feedback were removed while potters tried to create pots of a given height, stopping when they thought they had reached that height. We asked students in a pottery class to build simple clay vessels either with full sensory feedback (the control condition for all participants) or with reduced input from one modality (in Experiment 1) or two modalities (in Experiment 2). Participants were asked to stop building the vessels when they thought the vessels were 5 in. high. We found that participants produced shorter vessels when one or more forms of sensory feedback were reduced. The degree of shortening did not depend on the type or number of reduced sensory channels. The results are consistent with a control hypothesis according to which potters have learned to use sensory feedback from these modalities to control their ceramic creations. The results highlight the intimate connections between perception and action.


1999 ◽  
Vol 81 (3) ◽  
pp. 1355-1364 ◽  
Author(s):  
Robert J. van Beers ◽  
Anne C. Sittig ◽  
Jan J. Denier van der Gon

Integration of proprioceptive and visual position-information: an experimentally supported model. To localize one’s hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information or visual information or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each of these series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model also can explain the unexpectedly small sizes of the variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. 
The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
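The weighting scheme the model describes is the standard minimum-variance (maximum-likelihood) combination of cues. A minimal sketch in Python, assuming hypothetical 2-D hand positions and direction-dependent covariances (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical unimodal estimates of hand position (cm) and their
# direction-dependent covariances: vision is assumed more precise in
# azimuth, proprioception more precise in depth. Values are illustrative.
x_vision  = np.array([10.0, 12.0])
x_proprio = np.array([12.0, 10.0])
cov_vision  = np.diag([0.25, 4.0])
cov_proprio = np.diag([4.0, 0.25])

# Minimum-variance combination: each cue is weighted by its inverse
# covariance, so the more precise direction of each cue dominates there.
W_v = np.linalg.inv(cov_vision)
W_p = np.linalg.inv(cov_proprio)
cov_combined = np.linalg.inv(W_v + W_p)
x_combined = cov_combined @ (W_v @ x_vision + W_p @ x_proprio)

# Because the weights differ per direction, the combined estimate lies
# off the straight line between the two unimodal means, as in the
# paper's matching data.
print(x_combined)
```

Points on the line through the two unimodal means satisfy x + y = 22 here, while the combined estimate's coordinates sum to about 20.24, so the combined estimate lies off that line, which is the model's signature prediction.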


2012 ◽  
Vol 108 (4) ◽  
pp. 1138-1148 ◽  
Author(s):  
J. H. Pasma ◽  
T. A. Boonstra ◽  
S. F. Campfens ◽  
A. C. Schouten ◽  
H. Van der Kooij

To keep balance, information from different sensory systems is integrated to generate corrective torques. Current literature suggests that this information is combined according to the sensory reweighting hypothesis, i.e., more reliable information is weighted more strongly than less reliable information. In this approach, no distinction has been made between the contributions of both legs. In this study, we investigated how proprioceptive information from both legs is combined to maintain upright stance. Healthy subjects maintained balance with eyes closed while proprioceptive information of each leg was perturbed independently by continuous rotations of the support surfaces (SS) and the human body by platform translation. Two conditions were tested: perturbation amplitude of one SS was increased over trials while the other SS 1) did not move or 2) was perturbed with constant amplitude. With the use of system identification techniques, the response of the ankle torques to the perturbation amplitudes (i.e., the torque sensitivity functions) was determined and how much each leg contributed to stabilize stance (i.e., stabilizing mechanisms) was estimated. Increased amplitude of one SS resulted in a decreased torque sensitivity. The torque sensitivity to the constant perturbed SS showed no significant differences. The properties of the stabilizing mechanisms remained constant during perturbations of each SS. This study demonstrates that proprioceptive information from each leg is weighted independently and that the weight decreases with perturbation amplitude. Weighting of proprioceptive information of one leg has no influence on the weight of the proprioceptive information of the other leg. According to the sensory reweighting hypothesis, vestibular information must be up-weighted, because closing the eyes eliminates visual information.
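The system-identification step can be illustrated with a non-parametric frequency-response estimate: divide the segment-averaged cross-spectrum between perturbation and torque by the perturbation's auto-spectrum. A toy Python sketch with a simulated plant; the gain, delay, and noise levels are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0          # sample rate (Hz)
n = 6000            # 60 s of data

# Hypothetical perturbation (e.g., support-surface rotation) and a
# simulated torque response: a scaled, delayed copy plus output noise.
pert = rng.standard_normal(n)
gain, delay = 0.5, 10                         # 10 samples = 100 ms delay
torque = gain * np.roll(pert, delay) + 0.1 * rng.standard_normal(n)

def frf(x, y, nper=512):
    """Segment-averaged frequency response from x to y (no overlap)."""
    nseg = len(x) // nper
    Sxy = np.zeros(nper // 2 + 1, dtype=complex)
    Sxx = np.zeros(nper // 2 + 1)
    for k in range(nseg):
        X = np.fft.rfft(x[k * nper:(k + 1) * nper])
        Y = np.fft.rfft(y[k * nper:(k + 1) * nper])
        Sxy += np.conj(X) * Y
        Sxx += np.abs(X) ** 2
    return Sxy / Sxx

# The "torque sensitivity function": |H| recovers the simulated gain
# of 0.5, and the slope of its phase reflects the 100-ms delay.
H = frf(pert, torque)
```

In the study, a drop in this sensitivity with increasing perturbation amplitude of one support surface, with the other surface's sensitivity unchanged, is what indicates independent down-weighting of that leg's proprioceptive input.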


Author(s):  
Lauren Swiney

Over the last thirty years the comparator hypothesis has emerged as a prominent account of inner speech pathology. This chapter discusses a number of cognitive accounts broadly derived from this approach, highlighting the existence of two importantly distinct notions of inner speech in the literature; one as a prediction in the absence of sensory input, the other as an act with sensory consequences that are themselves predicted. Under earlier frameworks in which inner speech is described in the context of classic models of motor control, I argue that these two notions may be compatible, providing two routes to inner speech pathology. Under more recent accounts grounded in the architecture of Bayesian predictive processing, I argue that “active inference” approaches to action generation pose serious challenges to the plausibility of the latter notion of inner speech, while providing the former notion with rich explanatory possibilities for inner speech pathology.


2000 ◽  
Vol 84 (4) ◽  
pp. 1708-1718 ◽  
Author(s):  
Andrew B. Slifkin ◽  
David E. Vaillancourt ◽  
Karl M. Newell

The purpose of the current investigation was to examine the influence of intermittency in visual information processing on intermittency in the control of continuous force production. Adult human participants were required to maintain force at, and minimize variability around, a force target over an extended duration (15 s), while the intermittency of on-line visual feedback presentation was varied across conditions. This was accomplished by varying the frequency of successive force-feedback deliveries presented on a video display. As a function of a 128-fold increase in feedback frequency (0.2 to 25.6 Hz), performance quality improved according to hyperbolic functions (e.g., force variability decayed), reaching asymptotic values near the 6.4-Hz feedback frequency level. Thus, the briefest interval over which visual information could be integrated and used to correct errors in motor output was approximately 150 ms. The observed reductions in force variability were correlated with parallel declines in spectral power at about 1 Hz in the frequency profile of force output. In contrast, power at higher frequencies in the force output spectrum was uncorrelated with increases in feedback frequency. Thus, there was a considerable lag between the generation of motor output corrections (1 Hz) and the processing of visual feedback information (6.4 Hz). To reconcile these differences in visual and motor processing times, we proposed a model in which error information is accumulated by visual information processes at a maximum frequency of 6.4 samples per second, and the motor system generates a correction, on the basis of the accumulated information, at the end of each 1-s interval.
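The proposed model can be caricatured in a few lines of Python: visual processes accumulate error samples at up to 6.4 per second, and the motor system issues one correction per 1-s interval based on the accumulated estimate. The noise magnitudes here are hypothetical illustration values, not fitted parameters:

```python
import random

random.seed(1)
target = 10.0        # target force level (arbitrary units)
force = 8.0          # initial force output
visual_rate = 6.4    # maximum visual sampling frequency (Hz)
duration = 15        # trial length (s), as in the task

trace = []
for second in range(duration):
    # Visual processes accumulate noisy error samples at up to 6.4/s...
    samples = [target - force + random.gauss(0, 0.2)
               for _ in range(round(visual_rate))]
    accumulated_error = sum(samples) / len(samples)
    # ...and one motor correction is generated at the end of each 1-s
    # interval, based on the accumulated information (plus motor noise).
    force += accumulated_error + random.gauss(0, 0.1)
    trace.append(force)
```

In this sketch, lowering the visual sampling rate below 6.4 Hz leaves fewer samples in each accumulation window, inflating the variance of `accumulated_error` and hence of the force output, which mimics the degradation observed at low feedback frequencies.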


Perception ◽  
1998 ◽  
Vol 27 (1) ◽  
pp. 69-86 ◽  
Author(s):  
Michel-Ange Amorim ◽  
Jack M Loomis ◽  
Sergio S Fukusima

An unfamiliar configuration lying in depth and viewed from a distance is typically seen as foreshortened. The hypothesis motivating this research was that a change in an observer's viewpoint even when the configuration is no longer visible induces an imaginal updating of the internal representation and thus reduces the degree of foreshortening. In experiment 1, observers attempted to reproduce configurations defined by three small glowing balls on a table 2 m distant under conditions of darkness following ‘viewpoint change’ instructions. In one condition, observers reproduced the continuously visible configuration using three other glowing balls on a nearer table while imagining standing at the distant table. In the other condition, observers viewed the configuration, it was then removed, and they walked in darkness to the far table and reproduced the configuration. Even though the observers received no additional information about the stimulus configuration in walking to the table, they were more accurate (less foreshortening) than in the other condition. In experiment 2, observers reproduced distant configurations on a nearer table more accurately when doing so from memory than when doing so while viewing the distant stimulus configuration. In experiment 3, observers performed both the real and imagined perspective change after memorizing the remote configuration. The results of the three experiments indicate that the continued visual presence of the target configuration impedes imaginary perspective-change performance and that an actual change in viewpoint does not increase reproduction accuracy substantially over that obtained with an imagined change in viewpoint.


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
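The core mechanism, a forward model driven by efference copies and corrected by sensory residuals, reduces in the linear-Gaussian case to a Kalman filter. A minimal 1-D sketch in Python, with all dynamics and noise values chosen purely for illustration:

```python
import random

random.seed(0)

pos = 0.0            # the "body": true (hidden) position
est, var = 0.0, 1.0  # the emulator's estimate and its uncertainty
q, r = 0.01, 0.25    # process and sensory noise variances

for step in range(50):
    u = 0.1                                  # motor command
    pos += u + random.gauss(0, q ** 0.5)     # body responds (with noise)
    obs = pos + random.gauss(0, r ** 0.5)    # noisy sensory feedback

    # Emulator runs in parallel: predict from the efference copy...
    est, var = est + u, var + q
    # ...then correct with the sensory residual (Kalman update).
    k = var / (var + r)
    est += k * (obs - est)
    var *= 1 - k
```

Setting `obs` aside and iterating only the prediction step corresponds to the off-line mode that, on this account, produces motor and visual imagery; running the update step on-line is what counteracts sensory noise and feedback delay during overt movement.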

