Motor learning reveals the existence of multiple codes for movement planning

2012, Vol. 108 (10), pp. 2708–2716
Author(s): Todd E. Hudson, Michael S. Landy

Coordinate systems for movement planning consist of an anchor point (e.g., retinocentric coordinates) and a representation (encoding) of the desired movement. One of two representations is often assumed: a final-position code describing the desired limb endpoint position, and a vector code describing movement direction and extent. Whether movement-planning systems use both representations is controversial. In our experiments, participants completed reaches grouped by target location (providing practice for a final-position code) and the same reaches grouped by movement vector (providing vector-code practice). Target-grouped reaches resulted in the isotropic (circular) distribution of errors predicted for position-coded reaches. The identical reaches grouped by vector resulted in error ellipses aligned with the reach direction, as predicted for vector-coded reaches. By manipulating only recent movement history to provide better learning for one or the other movement code, we provide definitive evidence that both movement representations are used in the identical task.
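The isotropy test described above can be sketched numerically: simulated endpoint scatters (the noise magnitudes here are invented for illustration) yield a near-circular error ellipse for a position code and an elongated, movement-aligned ellipse for a vector code.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_ellipse_aspect(endpoints):
    """Major-to-minor axis ratio of a 2-D endpoint scatter, from the
    eigenvalues of its covariance matrix (1 means perfectly circular)."""
    cov = np.cov(endpoints.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(eigvals[1] / eigvals[0])

# Position-coded reaches: isotropic noise around the target.
pos_coded = rng.normal(0.0, 1.0, size=(500, 2))

# Vector-coded reaches along the x axis: extent noise (along the movement)
# larger than direction noise (perpendicular to it).
vec_coded = np.column_stack([rng.normal(0.0, 2.0, 500),   # extent
                             rng.normal(0.0, 1.0, 500)])  # direction

print(error_ellipse_aspect(pos_coded))  # near 1: circular scatter
print(error_ellipse_aspect(vec_coded))  # clearly > 1: reach-aligned ellipse
```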

2006, Vol. 96 (1), pp. 352–362
Author(s): Sabine M. Beurze, Stan Van Pelt, W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
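The need for a common frame, and the gain elements mentioned at the end of the abstract, can be illustrated with a toy 2-D calculation (all positions and gain values below are invented, and the gaze-dependent rotation is omitted to keep the sketch short):

```python
import numpy as np

def to_eye_centered(point_body, eye_body):
    # Translate a body-centered location into eye-centered coordinates
    # (rotation by gaze direction omitted in this sketch).
    return point_body - eye_body

hand = np.array([10.0, -20.0])   # initial hand position, body frame (cm)
target = np.array([25.0, 5.0])   # target location, body frame (cm)
eye = np.array([0.0, 30.0])      # eye location in the body frame (cm)

# With accurate signals, the hand-to-target difference vector is the
# same in any single common frame:
vec_body = target - hand
vec_eye = to_eye_centered(target, eye) - to_eye_centered(hand, eye)
print(np.allclose(vec_body, vec_eye))  # True

# But unequal gains on the eye-centered target and hand signals (the
# gain elements discussed in the abstract; these values are made up)
# produce a planning error that depends on where the eyes are pointing:
g_target, g_hand = 0.9, 1.0
vec_planned = (g_target * to_eye_centered(target, eye)
               - g_hand * to_eye_centered(hand, eye))
print(vec_planned - vec_body)  # nonzero, and changes with `eye`
```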


1972, Vol. 94 (4), pp. 303–309
Author(s): D. E. Whitney

The problems of coordinated rate control and position control of multidegree-of-freedom arms are treated together in this paper. A mathematical formulation is presented which allows real time computer-assisted rate control under a variety of external coordinate systems. A new solution to the endpoint position control problem is given, allowing the arm to be driven to a final position specified in meaningful external coordinates without the corresponding final joint angles being known. Attention is given to redundant arms, to the possibility of singularities, and to the relation between this work and dynamic control of arms.
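Whitney's resolved-rate idea, driving joint rates from a desired endpoint velocity expressed in external coordinates without ever solving for the final joint angles, can be sketched with a hypothetical redundant planar three-link arm and a Jacobian pseudoinverse (the arm geometry, goal, and step size are illustrative, not from the paper):

```python
import numpy as np

def fk(q, L):
    """Forward kinematics of a planar serial arm: endpoint (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def jacobian(q, L):
    """d(endpoint)/d(q): joint j moves all links j..n."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for j in range(len(q)):
        J[0, j] = -np.sum(L[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(angles[j:]))
    return J

L = np.array([1.0, 1.0, 1.0])      # link lengths (redundant: 3 joints, 2-D goal)
q = np.array([0.3, 0.4, 0.5])      # initial joint angles (rad)
x_goal = np.array([1.5, 1.5])      # goal in external coordinates

# Resolved rate control: qdot = pinv(J) @ xdot, with xdot pointing
# from the current endpoint toward the goal. The pseudoinverse handles
# the redundant degree of freedom.
for _ in range(200):
    qdot = np.linalg.pinv(jacobian(q, L)) @ (x_goal - fk(q, L))
    q += 0.1 * qdot

print(np.round(fk(q, L), 3))  # endpoint converges to the goal
```

Note that the final joint angles were never specified in advance; they emerge from the rate-control iteration, which is the essence of the endpoint position solution described in the abstract.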


2016, Vol. 115 (6), pp. 3162–3173
Author(s): Valeria C. Caruso, Daniel S. Pages, Marc A. Sommer, Jennifer M. Groh

Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals in the FEF, facilitating access to a common motor output pathway.


2007, Vol. 98 (1), pp. 537–541
Author(s): Eliana M. Klier, Dora E. Angelaki, Bernhard J. M. Hess

As we move our bodies in space, we often undergo head and body rotations about different axes—yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
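The noncommutativity at the heart of this task is easy to verify with rotation matrices: applying the same yaw and roll rotations in opposite orders leaves the head in different final orientations, so the eye movement needed to reacquire a space-fixed target differs between sequences (the 45-degree angle below is an arbitrary illustrative value).

```python
import numpy as np

def yaw(a):
    """Rotation about the vertical (z) axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def roll(a):
    """Rotation about the forward (x) axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

a = np.radians(45)

# Rotations do not commute: yaw-then-roll differs from roll-then-yaw,
# so a commutative updating model would mispredict the required saccade.
print(np.allclose(yaw(a) @ roll(a), roll(a) @ yaw(a)))  # False
```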


2017, Vol. 372 (1714), pp. 20160106
Author(s): Anne P. Hillstrom, Joice D. Segabinazi, Hayward J. Godwin, Simon P. Liversedge, Valerie Benson

We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown, in sequence: either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target, and then the scene, now including the target at a likely location. During the participant's first search saccade, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue ‘Auditory and visual scene analysis’.


2017, Vol. 2017, pp. 1–7
Author(s): Vivian Farahte Giangiardi, Sandra Maria Sbeghen F. de Freitas, Flávia P. de Paiva Silva, Renata Morales Banjai, Sandra Regina Alouche

In simple daily activities carried out by the upper limbs, the cerebellum is responsible for the adaptations required for accurate movement based on previous experiences and external references. This paper aims to characterize the performance of the upper limbs after cerebellar disease. We evaluated the digital and handgrip strength, dexterity, and function of the upper limbs. The motor performance of the upper limbs was assessed through the use of a digitizing tablet by performing aiming movements with the upper limb most affected by cerebellar disease and the paired limb of the healthy group. The results showed differences between groups: the cerebellar group had higher latency to movement onset, was slower, and presented less smooth trajectories and higher initial direction errors. Moreover, the movement direction influenced the peak velocity and the smoothness for both groups (contralateral directions were slower and less smooth). We concluded that cerebellar disorder leads to movement-planning impairment, compromising the formulation of an internal model. Alterations in movement execution seem to be a consequence of disruptions in the anticipatory model, leading to more adaptations. These findings are compatible with the roles of the cerebellum in the control of voluntary movement.


Robotica, 2001, Vol. 19 (4), pp. 395–405
Author(s): Vadim Rogozin, Yael Edan, Tamar Flash

This paper presents a real-time algorithm for modifying the trajectory of a manipulator approaching a moving target. The algorithm is based on the superposition scheme, a model derived from human motion behavior. The algorithm generates a smooth trajectory toward the new target by calculating the vectorial sum of the first trajectory (from the initial position to the first target) and a second trajectory (from the first to the second target location). The algorithm searches for the switch time that will result in a minimum-time trajectory. It defines a domain in which the optimal switch time must lie, reduces this domain as much as possible to limit the number of candidate points to be checked, and then evaluates every remaining candidate numerically to find the best (optimal) switch time. The algorithm was implemented on an Adept-one robotic system taking velocity constraints into account. The actual velocity profile was found to be less smooth than specified by the mathematical model. When the switch occurs near the middle of the trajectory, where the speed is close to its maximum, the change in movement direction is performed more gently.
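The vectorial-summation idea can be sketched with minimum-jerk segments, a common model of point-to-point human trajectories (the targets, duration, and switch time below are invented; the paper's own switch-time search is not reproduced here):

```python
import numpy as np

def min_jerk(p0, p1, T, t):
    """Minimum-jerk position along a straight path from p0 to p1,
    over duration T, evaluated at time t (held at p1 after T)."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + s * (p1 - p0)

p0 = np.array([0.0, 0.0])
t1 = np.array([10.0, 0.0])   # first target
t2 = np.array([10.0, 5.0])   # second (displaced) target
T, t_switch = 1.0, 0.4       # segment duration and switch time (s)

def blended(t):
    # Superposition: the original trajectory toward t1 plus a second
    # minimum-jerk trajectory from t1 to t2, started at the switch time.
    pos = min_jerk(p0, t1, T, t)
    if t >= t_switch:
        pos = pos + min_jerk(np.zeros(2), t2 - t1, T, t - t_switch)
    return pos

print(np.round(blended(T + t_switch), 2))  # ends exactly at the second target
```

Because each segment starts and ends at rest, the summed trajectory is smooth across the switch, which is the property the real-time algorithm exploits.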


Motor Control, 1999, Vol. 3 (4), pp. 414–423
Author(s): Slobodan Jaric, Charli Tortoza, Ismael F.C. Fatarelli, Gil L. Almeida

A number of studies have analyzed various indices of final-position variability in order to provide insight into different levels of neuromotor processing during reaching movements. Yet the possible effects of movement kinematics on variability have often been neglected. The present study was designed to test the effects of movement direction and curvature on the pattern of movement variable errors. Subjects performed series of reaching movements over the same distance and into the same target. However, due either to changes in starting position or to applied obstacles, the movements were performed in different directions or along trajectories of different curvatures. The pattern of movement variable errors was assessed by means of principal component analysis applied to the 2-D scatter of movement final positions. The orientation of the resulting ellipses changed with both movement direction and curvature. However, neither movement direction nor movement curvature affected movement variable errors assessed by the area of the ellipses. It was therefore concluded that end-point variability depends partly, but not exclusively, on movement kinematics.
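The principal-component analysis of the 2-D endpoint scatter can be sketched as follows (the simulated scatter and its 30-degree orientation are invented for illustration):

```python
import numpy as np

def ellipse_orientation_and_area(endpoints, n_std=1.0):
    """Principal-component ellipse of a 2-D endpoint scatter:
    major-axis orientation (rad, in [0, pi)) and ellipse area."""
    cov = np.cov(endpoints.T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(major[1], major[0]) % np.pi
    area = np.pi * n_std**2 * np.sqrt(np.prod(eigvals))
    return angle, area

rng = np.random.default_rng(1)

# Hypothetical endpoint scatter elongated along a 30-degree direction.
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = rng.normal(0.0, [2.0, 0.5], size=(1000, 2)) @ R.T

angle, area = ellipse_orientation_and_area(pts)
print(np.degrees(angle))  # close to 30: recovered ellipse orientation
```

Comparing the recovered orientation across direction and curvature conditions, while checking the area separately, mirrors the analysis described in the abstract.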


2013
Author(s): Eva-Maria Kobak, Simone Cardoso de Oliveira

Based on psychophysical evidence about how learning of visuomotor transformations generalizes, it has been suggested that movements are planned on the basis of movement direction and magnitude, i.e., the vector connecting movement origin and target. This notion is also known as the "vectorial planning hypothesis". Previous psychophysical studies, however, have used separate areas of the workspace for training movements and testing the learning. This study eliminates this confounding factor by investigating the transfer of learning from forward to backward movements in a center-out-and-back task, in which the workspace for both movements is completely identical. Visual feedback allowed for learning only during movements towards the target (forward movements) and not while moving back to the origin (backward movements). When subjects learned the visuomotor rotation in forward movements, initial directional errors in backward movements also decreased to some degree. This learning effect in backward movements occurred predominantly when backward movements featured the same movement directions as the ones trained in forward movements (i.e., when opposite targets were presented). This suggests that learning was transferred in a direction-specific way, supporting the notion that movement direction is the most prominent parameter used for motor planning.
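Under the vectorial planning hypothesis, compensating a visuomotor rotation amounts to rotating the planned movement vector opposite to the perturbation, and the learned compensation attaches to the movement direction rather than to the target location. A minimal sketch of that arithmetic (the 30-degree perturbation is a typical but invented value):

```python
import numpy as np

def rotate(v, deg):
    """Rotate a 2-D vector counterclockwise by deg degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]]) @ v

perturbation = 30.0  # hypothetical visuomotor rotation (degrees)

def planned_hand_direction(desired_cursor_dir, learned_comp_deg):
    # Vectorial planning: the compensation rotates the planned
    # movement vector opposite to the perturbation.
    return rotate(desired_cursor_dir, -learned_comp_deg)

desired = np.array([1.0, 0.0])                       # desired cursor direction
hand = planned_hand_direction(desired, perturbation) # aimed hand direction
cursor = rotate(hand, perturbation)                  # cursor after perturbation
print(np.allclose(cursor, desired))  # True: full compensation
```

A backward movement toward the opposite target shares the trained direction in space, so the same direction-tagged compensation applies to it, which is the transfer pattern the study reports.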

