Interceptive capturing in large-billed crows: Velocity-dependent weighing of prediction of future target location and visual feedback of current target location

2020 ◽  
Author(s):  
Yusuke Ujihara ◽  
Hiroshi Matsui ◽  
Ei-Ichi Izawa

Interception of a moving target is a fundamental behaviour of predators and requires tight coupling between the sensory and motor systems. In the foraging literature, feedback mechanisms based on the target's current position are frequently reported. However, there have also been recent reports of animals employing feedforward mechanisms, in which prediction of the target's future location plays an important role. In nature, coordination of these two mechanisms may contribute to intercepting evasive prey, yet how animals weigh the two mechanisms remains poorly understood. Here, we conducted a behavioural experiment in which crows (which show flexible sensorimotor coordination in various domains) captured a moving target. We changed the velocity of the target to examine how the crows utilised prediction of the target location. Analysis of moment-to-moment head movements and computational simulations revealed that the crows used prediction of the future target location when target velocity was high, whereas their interception depended on the current, momentary position of the target when target velocity was low. These results suggest that crows successfully intercept targets by weighing predictive and visual feedback mechanisms depending on target velocity.
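The velocity-dependent weighing described above can be caricatured as a blend of a feedback term (the target's current position) and a predictive term (its extrapolated future position), with a weight that grows with target speed. This is an illustrative sketch, not the authors' simulation; the logistic weight, the threshold, and the lead time are all assumed parameters.

```python
import math

def intercept_aim(current_pos, velocity, lead_time, v_threshold=1.0, steepness=4.0):
    """Aim point as a velocity-dependent blend of visual feedback
    (current position) and prediction (extrapolated position).
    All parameter names and values are illustrative."""
    # Logistic weight: fast targets favour prediction, slow targets feedback
    w_pred = 1.0 / (1.0 + math.exp(-steepness * (velocity - v_threshold)))
    predicted = current_pos + velocity * lead_time
    return (1.0 - w_pred) * current_pos + w_pred * predicted
```

For a slow target the aim point stays near the current position; for a fast target it approaches the extrapolated position.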

2004 ◽  
Vol 92 (1) ◽  
pp. 578-590 ◽  
Author(s):  
Simon J. Bennett ◽  
Graham R. Barnes

When a moving target disappears and visual feedback signals are completely absent, eye velocity decays rapidly but often recovers to previous levels if there is an expectation that the target will reappear further along its trajectory. Given that eye velocity cannot be maintained under such circumstances, the anticipatory recovery may function to minimize the developing velocity error. When target velocity changes during a transient, any recovery should ideally be scaled and hence predictive of the expected target velocity at reappearance. This study confirmed that subjects did not maintain eye velocity close to target velocity for the duration of the inter-stimulus interval (ISI). The majority of subjects exhibited an initial reduction in eye velocity followed by a scaled recovery prior to target reappearance. Eye velocity during the ISI was, therefore, predictive of the expected change in target velocity. These behavioral data were simulated using a model in which the gain applied to the visuomotor drive is reduced after the loss of visual feedback and then modulated depending on the subject's expectation regarding the target's future trajectory.
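A minimal version of such a gain-modulation scheme can be sketched as an exponential decay of eye velocity after target offset plus a predictive ramp toward the expected reappearance velocity. The time constant, ramp duration, and linear ramp shape are assumptions for illustration, not the paper's fitted model.

```python
import math

def eye_velocity(t, v0, v_expected, t_reappear, tau=0.15, t_recover=0.3):
    """Eye velocity at time t (s) into an occlusion: the drop in visuomotor
    gain makes velocity decay exponentially from v0, then a predictive drive
    ramps it toward the expected velocity at reappearance.
    Illustrative sketch; parameters are assumed, not fitted."""
    decayed = v0 * math.exp(-t / tau)
    if t < t_reappear - t_recover:
        return decayed
    frac = (t - (t_reappear - t_recover)) / t_recover  # 0 -> 1 over the ramp
    return decayed + frac * (v_expected - decayed)
```

Early in the occlusion the response is pure decay; by the expected reappearance time the output has been pulled to the anticipated target velocity, reproducing the "initial reduction followed by a scaled recovery" pattern.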


2019 ◽  
Vol 121 (1) ◽  
pp. 269-284 ◽  
Author(s):  
Florian Perdreau ◽  
James R. H. Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

The brain uses self-motion information to internally update egocentric representations of locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object’s location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view?
Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
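The core of a Bayesian causal inference model of this kind can be sketched in a few lines: compute the posterior probability that the two cues share a common cause, then weight the reliability-fused estimate against the segregated (internally updated) estimate by that probability. This is a generic one-dimensional sketch under a flat separate-cause likelihood, not the authors' fitted model; all parameter values are illustrative.

```python
import math

def causal_inference_estimate(x_updated, x_visual, sigma_u, sigma_v,
                              p_common=0.5, sigma_prior=10.0):
    """Infer whether the internally updated location and the visual flash
    share a common cause, then weight fused vs. segregated estimates by
    the posterior of a common cause. Illustrative sketch."""
    # Likelihood of the observed discrepancy under a common cause
    var_sum = sigma_u**2 + sigma_v**2
    like_c1 = (math.exp(-(x_updated - x_visual)**2 / (2 * var_sum))
               / math.sqrt(2 * math.pi * var_sum))
    # Under separate causes the discrepancy is unconstrained (flat density)
    like_c2 = 1.0 / (2 * sigma_prior)
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))
    # Reliability-weighted fusion if common cause; internal estimate otherwise
    fused = ((x_updated / sigma_u**2 + x_visual / sigma_v**2)
             / (1 / sigma_u**2 + 1 / sigma_v**2))
    return post_c1 * fused + (1 - post_c1) * x_updated
```

Small discrepancies yield a high common-cause posterior and a response biased toward the flash; large discrepancies drive the posterior toward zero and the response back to the internally updated location, matching the bias pattern reported above.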


2009 ◽  
Vol 102 (3) ◽  
pp. 1491-1502 ◽  
Author(s):  
John F. Soechting ◽  
John Z. Juveli ◽  
Hrishikesh M. Rao

Intercepting a moving target requires a prediction of the target's future motion. This extrapolation could be achieved using sensed parameters of the target motion, e.g., its position and velocity. However, the accuracy of the prediction would be improved if subjects were also able to incorporate the statistical properties of the target's motion, accumulated as they watched the target move. The present experiments were designed to test for this possibility. Subjects intercepted a target moving on the screen of a computer monitor by sliding their extended finger along the monitor's surface. Along any of the six possible target paths, target speed could be governed by one of three possible rules: constant speed, a power law relation between speed and curvature, or the trajectory resulting from a sum of sinusoids. A go signal was given to initiate interception and was always presented when the target had the same speed, irrespective of the law of motion. The dependence of the initial direction of finger motion on the target's law of motion was examined. This direction did not depend on the speed profile of the target, contrary to the hypothesis. However, finger direction could be well predicted by assuming that target location was extrapolated using target velocity and that the amount of extrapolation depended on the distance from the finger to the target. Subsequent analysis showed that the same model of target motion was also used for on-line, visually mediated corrections of finger movement when the motion was initially misdirected.
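The extrapolation model that predicted initial finger direction above (target position advanced along its velocity by an amount that grows with finger-target distance) can be sketched as follows. The gain k and the linear distance scaling are illustrative assumptions, not the fitted values from the study.

```python
import math

def finger_aim(target_pos, target_vel, finger_pos, k=0.5):
    """Aim point: extrapolate the target along its velocity by an amount
    proportional to the finger-target distance. k is an illustrative gain."""
    dx = target_pos[0] - finger_pos[0]
    dy = target_pos[1] - finger_pos[1]
    dist = math.hypot(dx, dy)  # current finger-target separation
    return (target_pos[0] + k * dist * target_vel[0],
            target_pos[1] + k * dist * target_vel[1])
```

Because the extrapolation shrinks as the finger closes on the target, the same rule can also serve the on-line corrections described in the abstract: late in the movement the aim point converges on the target's current position.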


1992 ◽  
Vol 2 (1) ◽  
pp. 71-88
Author(s):  
John A. Waterston ◽  
Graham R. Barnes

Recordings of head and eye movement were made during pursuit of mixed-frequency, pseudorandom target motion to study the mechanism of vestibulo-ocular reflex (VOR) suppression during head-free pursuit. When high velocity stimuli were used, slow-phase gaze velocity gains decreased significantly with increases in both absolute target velocity and the velocity ratio between the frequency components. These changes occurred independently of changes in the head displacement gain, which remained relatively constant at the lower frequency and were directly attributable to impaired suppression of the VOR. Similar effects were seen when visual feedback was degraded by tachistoscopic illumination of the target. The results indicate that visual feedback, rather than an efference copy of the head velocity signal, is essential for suppression of slow-phase vestibular eye movement during head-free pursuit. When head-free and head-fixed pursuit were compared, striking similarities were seen for both slow phase gaze velocity gain and phase, indicating that gaze control during smooth pursuit is largely independent of the degree of associated head movement. This suggests that the VOR is not switched off during head-free pursuit. An estimate of the underlying VOR gain was obtained by recording the vestibular response produced by active head movements in darkness. The rather higher estimates of VOR gain obtained using an imaginary earth-fixed target paradigm were found to predict head-free gains more closely than the gains obtained during imaginary pursuit of a moving target, suggesting that such measures may be more representative of the underlying VOR gain.


1996 ◽  
Vol 67 (4) ◽  
pp. 416-423 ◽  
Author(s):  
Heather Carnahan ◽  
Craig Hall ◽  
Timothy D. Lee

Author(s):  
Jessica Schnabel

Mind wandering, or “daydreaming,” is a shift in the contents of thought away from a task and/or event in the external environment to self-generated thoughts and feelings. This research seeks to test the reliability of eye tracking as an objective measure of mind wandering using the Wandering Eye Paradigm, as well as to examine the relationships between mind wandering and individual characteristics. Fifty participants will be recruited for two appointments a day apart, on each day completing two eye tracking sessions following a moving target. In this task, participants will be instructed to press the spacebar if they feel they are mind wandering, and then to answer three questions about their episode content. Questionnaires measuring mind wandering, procrastination, mindfulness, creativity, and personality (in particular conscientiousness) will be completed between eye tracking sessions. By comparing the eye tracking data in the period prior to the spacebar press, we can determine quantifiable indicators of the onset and duration of mind wandering episodes by analyzing gaze location in relation to the target location. It is hypothesized that the severity of task performance failures (losing track of the target) should correlate with the “depth” of the mind wandering episode content. Additionally, we expect the frequency of mind wandering episodes to correlate with individual characteristics, and that these measures will be consistent across trials. This research would provide a novel objective way to identify and measure mind wandering and would help further advance the understanding of its behavioral and subjective dimensions.
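A quantifiable onset indicator of the kind proposed above could be as simple as flagging the first frame of a sustained gaze-target divergence. This is a hypothetical sketch of such a detector; the error threshold, the minimum run length, and the function name are all assumptions, not part of the study's protocol.

```python
def flag_wandering(gaze, target, err_threshold=50.0, min_samples=3):
    """Return the index of the first frame of a candidate mind-wandering
    episode: gaze more than err_threshold pixels from the target for
    min_samples consecutive frames; None if tracking never lapses.
    Illustrative heuristic with assumed parameter values."""
    run = 0
    for i, (g, t) in enumerate(zip(gaze, target)):
        err = ((g[0] - t[0])**2 + (g[1] - t[1])**2) ** 0.5
        run = run + 1 if err > err_threshold else 0
        if run >= min_samples:
            return i - min_samples + 1  # back up to the start of the lapse
    return None
```

Comparing this index against the spacebar press time would give the episode's latency and duration.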


Author(s):  
Ling Guo

For the detection of a moving target's position in video monitoring images, existing locating and tracking systems mainly adopt binocular or structured-light stereoscopic technology, which has drawbacks such as system design complexity and slow detection speed. In light of these limitations, a tracking method for monocular-sequence moving targets is presented that introduces ground constraints into monocular visual monitoring; the principle and process of the method are introduced in detail in this paper. The method uses camera installation information and geometric imaging principles, combined with nonlinear compensation, to derive a formula for the actual position of a ground moving target in monocular asymmetric nonlinear imaging. The footprint location of a walker is searched for in sequence imaging from a monitoring test platform built indoors. Because of the walker's shadow in the image, a multi-threshold Otsu method based on test-target background subtraction is used to segment the images. The experimental results verify the effectiveness of the proposed method.
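The ground constraint at the heart of such a method is that a footprint pixel must lie on the floor plane, so a single calibrated camera suffices to back-project it. The sketch below shows the basic pinhole geometry for a camera at a known height and downward tilt, without the paper's nonlinear compensation; all parameters and names are illustrative.

```python
import math

def ground_point_from_pixel(u, v, cam_height, tilt_deg, focal_px, cx, cy):
    """Back-project an image pixel (u, v) of a ground contact point
    (e.g. a footprint) onto the floor plane, for a pinhole camera at
    height cam_height looking down at tilt_deg. Returns (forward,
    lateral) floor coordinates. Illustrative sketch only."""
    theta = math.radians(tilt_deg)
    # Ray through the pixel in camera coordinates (X right, Y down, Z forward)
    x = (u - cx) / focal_px
    y = (v - cy) / focal_px
    # Rotate the ray by the downward tilt about the camera's X axis
    down = math.sin(theta) + math.cos(theta) * y     # vertical component
    forward = math.cos(theta) - math.sin(theta) * y  # horizontal component
    if down <= 0:
        raise ValueError("ray does not intersect the ground plane")
    t = cam_height / down  # ray scale at which it reaches the floor
    return forward * t, x * t
```

The principal-point ray of a camera 2 m high tilted 45° down hits the floor 2 m ahead, which is a handy sanity check when calibrating such a setup.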


2006 ◽  
Vol 16 (1-2) ◽  
pp. 1-22 ◽  
Author(s):  
Junko Fukushima ◽  
Teppei Akao ◽  
Sergei Kurkin ◽  
Chris R.S. Kaneko ◽  
Kikuro Fukushima

In order to see clearly when a target is moving slowly, primates with high acuity foveae use smooth-pursuit and vergence eye movements. The former rotates both eyes in the same direction to track target motion in frontal planes, while the latter rotates the left and right eyes in opposite directions to track target motion in depth. Together, these two systems pursue targets precisely and maintain their images on the foveae of both eyes. During head movements, both systems must interact with the vestibular system to minimize slip of the retinal images. The primate frontal cortex contains two pursuit-related areas: the caudal part of the frontal eye fields (FEF) and the supplementary eye fields (SEF). Evoked potential studies have demonstrated vestibular projections to both areas, and pursuit neurons in both areas respond to vestibular stimulation. The majority of FEF pursuit neurons code parameters of pursuit such as pursuit and vergence eye velocity, gaze velocity, and retinal image motion for target velocity in frontal and depth planes. Moreover, vestibular inputs contribute to the predictive pursuit responses of FEF neurons. In contrast, the majority of SEF pursuit neurons do not code pursuit metrics, and many SEF neurons are reported to be active in more complex tasks. These results suggest that FEF and SEF pursuit neurons are involved in different aspects of vestibular-pursuit interactions and that eye velocity coding of SEF pursuit neurons is specialized for the task condition.


2020 ◽  
Author(s):  
Samuele Contemori ◽  
Gerald E. Loeb ◽  
Brian D. Corneil ◽  
Guy Wallis ◽  
Timothy J. Carroll

Volitional visuomotor responses in humans are generally thought to manifest 100 ms or more after stimulus onset. Under appropriate conditions, however, much faster target-directed responses can be produced at upper limb and neck muscles. These “express” responses have been termed stimulus-locked responses (SLRs) and are proposed to be modulated by visuomotor transformations performed subcortically via the superior colliculus. Unfortunately, for those interested in studying SLRs, these responses have proven difficult to detect consistently across individuals. The recent report of an effective paradigm for generating SLRs in 100% of participants appears to change this. The task required the interception of a moving target that emerged from behind a barrier at a time consistent with the target velocity. Here we aimed to reproduce the efficacy of this paradigm for eliciting SLRs and to test the hypothesis that its effectiveness derives from the predictability of target onset time as opposed to target motion per se. In one experiment, we recorded surface EMG from shoulder muscles as participants made reaches to intercept temporally predictable or unpredictable targets. Consistent with our hypothesis, predictably timed targets produced more frequent and stronger SLRs than unpredictably timed targets. In a second experiment, we compared different temporally predictable stimuli and observed that transiently presented targets produced larger and earlier SLRs than sustained moving targets. Our results suggest that target motion is not critical for facilitating the expression of an SLR and that timing predictability does not rely on extrapolation of a physically plausible motion trajectory. These findings provide support for a mechanism whereby an internal timer, probably located in cerebral cortex, primes the processing of both visual input and motor output within the superior colliculus to produce SLRs.
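The defining property of an SLR is that its latency is locked to stimulus onset rather than to movement onset across trials. As a simple illustration of that property (not the detection analysis used in this study, which relies on more rigorous trial-by-trial methods), one can compare latency variability in the two reference frames:

```python
import statistics

def is_stimulus_locked(emg_onsets, movement_onsets):
    """Heuristic sketch: a response is 'stimulus-locked' if per-trial EMG
    onset latencies (ms, relative to stimulus) vary less across trials
    than the same onsets expressed relative to movement onset.
    Illustrative only; inputs are hypothetical per-trial latencies."""
    rel_to_movement = [e - m for e, m in zip(emg_onsets, movement_onsets)]
    return statistics.stdev(emg_onsets) < statistics.stdev(rel_to_movement)
```

A tight cluster of EMG onsets near a fixed post-stimulus latency, despite widely varying reaction times, is the signature of an express response.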

