Eye Movements When Observing Predictable and Unpredictable Actions

2006 ◽ Vol 96 (3) ◽ pp. 1358-1369
Author(s): Gerben Rotman, Nikolaus F. Troje, Roland S. Johansson, J. Randall Flanagan

We previously showed that, when observers watch an actor performing a predictable block-stacking task, the coordination between the observer's gaze and the actor's hand is similar to the coordination between the actor's gaze and hand. Both the observer and the actor direct gaze to forthcoming grasp and block landing sites and shift their gaze to the next grasp or landing site at around the time the hand contacts the block or the block contacts the landing site. Here we compare observers' gaze behavior in a block manipulation task when the observers did and did not know, in advance, which of two blocks the actor would pick up first. In both cases, observers managed to fixate the target ahead of the actor's hand and showed proactive gaze behavior. However, these target fixations occurred later, relative to the actor's movement, when observers did not know the target block in advance. In perceptual tests, in which observers watched animations of the actor reaching partway to the target and had to guess which block was the target, we found that the time at which observers could correctly identify the target was very similar to the time at which they would make saccades to the target block. Overall, our results indicate that observers use gaze in a fashion that is appropriate for hand movement planning and control. This in turn suggests that they implement representations of the manual actions required in the task and that these representations direct task-specific eye movements.

2003 ◽ Vol 12 (4) ◽ pp. 387-410
Author(s): Douglas A. Reece

We have developed a movement behavior model for soldier agents who populate a virtual battlefield environment. Although many simulations have addressed human movement behavior, none has comprehensively addressed realistic military movement at both the individual and unit levels. To design an appropriate movement behavior model, we found it necessary to elaborate all of the movement requirements imposed by the military tasks of interest, define a behavior architecture that encompasses all required movement tasks, select appropriate movement planning and control approaches in light of those requirements, and implement the planning and control algorithms with novel enhancements to achieve satisfactory results. The breadth of requirements in this problem domain makes simple behavior architectures inadequate and prevents any single planning approach from easily accomplishing all tasks. In our behavior architecture, a hierarchy of tasks is distributed over unit leaders and unit members. For movement planning, we use an A* search algorithm on a hybrid search space comprising a two-dimensional regular grid and a topological map; the plan produced is a series of waypoints annotated with posture and speed changes. Individuals control movement with reactive steering behaviors. The result is a system that can realistically plan and execute a variety of unit and individual agent movement tasks on a virtual battlefield.
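The abstract does not give implementation details of the planner, but the grid half of the hybrid search space can be sketched with standard A*. The following is a minimal illustration (names, grid encoding, and 4-connectivity are assumptions, not taken from the paper) that returns a path as a list of waypoints, the same kind of output the planner is described as producing:

```python
import heapq

def astar_grid(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = blocked).
    Returns a list of (row, col) waypoints from start to goal,
    or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-connected unit-cost moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = [cell]
            while cell in came_from:       # walk parents back to start
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                        # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A 3x3 grid with a wall forcing a detour:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar_grid(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

In the system described, each such waypoint would additionally be annotated with posture and speed changes, and the grid search would be coupled with a topological map for long-range planning.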


2009 ◽ Vol 101 (2) ◽ pp. 1002-1015
Author(s): Uri Maoz, Alain Berthoz, Tamar Flash

One long-established simplifying principle behind the large repertoire and high versatility of human hand movements is the two-thirds power law—an empirical law stating a relationship between the local geometry and kinematics of human hand trajectories during planar curved movements. It was further generalized not only to various types of human movements, but also to motion perception and prediction, although it was unsuccessful in explaining unconstrained three-dimensional (3D) movements. Recently, movement obeying the power law was proved to be equivalent to moving with constant planar equi-affine speed. Generalizing such motion to 3D space—i.e., to movement at constant spatial equi-affine speed—predicts the emergence of a new power law, whose utility for describing spatial scribbling movements we have previously demonstrated. In this empirical investigation of the new power law, subjects repetitively traced six different 3D geometrical shapes with their hand. We show that the 3D power law explains the data consistently better than both the two-thirds power law and an additional power law that was previously suggested for spatial hand movements. We also found small yet systematic modifications of the power law's exponents across the various shapes, which further scrutiny suggested are correlated with global geometric factors of the traced shape. Nevertheless, averaging over all subjects and shapes, the power-law exponents are generally in accordance with constant spatial equi-affine speed. Taken together, our findings provide evidence for the potential role of non-Euclidean geometry in motion planning and control. Moreover, these results seem to imply a relationship between geometry and kinematics that is more complex than the simple local one stipulated by the two-thirds power law and similar models.
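In its planar form, the two-thirds power law relates tangential speed v to path curvature κ as v = K·κ^(−1/3) (equivalently, angular speed A = K·C^(2/3), hence the name). The exponent is typically estimated by linear regression in log-log coordinates, as in studies of this kind. A worked illustration on synthetic data (not data from this study; the values and gain K are made up for the example):

```python
import numpy as np

# Synthesize curvature samples along a hypothetical planar trajectory,
# impose the two-thirds power law v = K * kappa**(-1/3), and recover
# the exponent by ordinary least squares in log-log coordinates.
rng = np.random.default_rng(0)
kappa = rng.uniform(0.1, 5.0, size=500)   # curvature samples (1/m), assumed range
K = 2.0                                   # velocity gain factor (assumed)
v = K * kappa ** (-1.0 / 3.0)             # tangential speed per the law

# log v = log K - (1/3) log kappa, so the slope estimates -1/3
slope, intercept = np.polyfit(np.log(kappa), np.log(v), 1)
beta = -slope                             # estimated power-law exponent
print(round(beta, 3))                     # ≈ 0.333
print(round(np.exp(intercept), 3))        # ≈ 2.0, recovering the gain K
```

The study's question is whether the analogous spatial exponents, fit to traced 3D shapes, match the values predicted by constant spatial equi-affine speed; with real movement data the regression would include noise and the recovered exponent would deviate from the ideal value.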


2008 ◽ Vol 100 (3) ◽ pp. 1533-1543
Author(s): J. Randall Flanagan, Yasuo Terao, Roland S. Johansson

People naturally direct their gaze to visible hand movement goals. Doing so improves reach accuracy through use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor prevented eye movements. Instead, gaze behavior was erratic and comprised gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions, participants marked the visible targets with either free gaze or central cross fixation, or just looked at the targets. All encoding conditions yielded similar erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants, during recall, did not exploit sensorimotor memories related to hand and gaze movements during encoding. Finally, we recorded a similarly loose coupling between hand and eye movements during an object manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.

