Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus

2013 ◽  
Vol 109 (8) ◽  
pp. 2029-2043 ◽  
Author(s):  
Rui Ma ◽  
He Cui ◽  
Sang-Hun Lee ◽  
Thomas J. Anastasio ◽  
Joseph G. Malpeli

Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets.
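The paper's Bayesian predictor-corrector model is not reproduced here, but the general idea it formalizes can be sketched with a simple one-dimensional Kalman-style filter: predict the target's position from an internal velocity estimate, correct against sensory input only while the target is visible, and let uncertainty accumulate during blanks. All parameter values below are illustrative assumptions, not the authors' fits:

```python
# Illustrative 1-D predictor-corrector for a target that blinks off.
# During OFF phases only the prediction step runs: the position estimate
# is extrapolated along the internal velocity estimate while its
# uncertainty (variance) grows, loosely mirroring the reduced dynamic
# range reported for virtual targets.
def track(measurements, visible, dt=0.01, q=0.5, r=0.05):
    x, v, p = measurements[0], 0.0, 1.0   # position, velocity, variance
    est, var = [], []
    for z, seen in zip(measurements, visible):
        x += v * dt                        # predict along current velocity
        p += q * dt                        # prediction uncertainty accrues
        if seen:                           # correct only when target is ON
            k = p / (p + r)                # Kalman gain for position
            v += 0.1 * k * (z - x) / dt    # simple velocity nudge (heuristic)
            x += k * (z - x)
            p *= 1.0 - k
        est.append(x)
        var.append(p)
    return est, var

# Target drifts at 10 deg/s; ON for the first 40 steps, OFF for the next 40.
true_pos = [10.0 * 0.01 * i for i in range(80)]
seen = [i < 40 for i in range(80)]
est, var = track(true_pos, seen)
```

During the OFF phase the estimate keeps moving forward along the learned velocity while its variance grows monotonically, which is the qualitative behavior the model uses to explain the shrinking dynamic range of virtual-target responses.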

2007 ◽  
Vol 19 (9) ◽  
pp. 2353-2386 ◽  
Author(s):  
Carlos R. Cassanello ◽  
Vincent P. Ferrera

Saccadic eye movements remain spatially accurate even when the target becomes invisible and the initial eye position is perturbed. The brain accomplishes this in part by remapping the remembered target location in retinal coordinates. The computation that underlies this visual remapping is approximated by vector subtraction: the original saccade vector is updated by subtracting the vector corresponding to the intervening eye movement. The neural mechanism by which vector subtraction is implemented is not fully understood. Here, we investigate vector subtraction within a framework in which eye position and retinal target position signals interact multiplicatively (gain field). When the eyes move, they induce a spatial modulation of the firing rates across a retinotopic map of neurons. The updated saccade metric can be read from the shift of the peak of the population activity across the map. This model uses a quasi-linear (half-rectified) dependence on the eye position and requires the slope of the eye position input to be negatively proportional to the preferred retinal position of each neuron. We derive analytically this constraint and study its range of validity. We discuss how this mechanism relates to experimental results reported in the frontal eye fields of macaque monkeys.
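The vector-subtraction update the gain-field mechanism implements can be stated compactly: if the remembered target had retinal vector t and the intervening eye movement had vector e, the updated saccade vector is t − e. A minimal sketch of the updating rule (the gain-field population readout itself is not reproduced here):

```python
# Vector subtraction for saccadic updating: the remembered retinal target
# vector is corrected by subtracting the intervening eye movement, so the
# planned saccade still lands on the (now invisible) target location.
def updated_saccade(target_retinal, eye_movement):
    """Both arguments are (horizontal, vertical) vectors in degrees."""
    return (target_retinal[0] - eye_movement[0],
            target_retinal[1] - eye_movement[1])

# A target flashed 10 deg right and 5 deg up; the eyes then move 4 deg
# right before the saccade, so the corrected vector is 6 deg right, 5 up.
print(updated_saccade((10.0, 5.0), (4.0, 0.0)))  # (6.0, 5.0)
```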


1980 ◽  
Vol 32 (2) ◽  
pp. 307-315 ◽  
Author(s):  
Jennifer A. Mather ◽  
James R. Lackner

Two experiments were performed to evaluate the influence of movement frequency and predictability on visual tracking of the actively and the passively moved hand. Four measures of tracking precision were employed: (a) saccades/cycle, (b) percent of pursuit movement, (c) eye amplitude/arm amplitude, (d) asynchrony of eye and hand at reversal. Active and passive limb movements were tracked with nearly identical accuracy and were always vastly superior to tracking an external visual target undergoing comparable motion. Proprioceptive information from the limb appears to provide velocity and position information about target location. Its presence permits the development of central eye-movement programmes that move the eyes in patterns that approximate but do not exactly match, temporally or spatially, the motion of the hand.


Science ◽  
1983 ◽  
Vol 221 (4616) ◽  
pp. 1193-1195 ◽  
Author(s):  
B. Guthrie ◽  
J. Porter ◽  
D. Sparks

Perception ◽  
10.1068/p3440 ◽  
2002 ◽  
Vol 31 (11) ◽  
pp. 1323-1333 ◽  
Author(s):  
Ellen M Berends ◽  
Raymond van Ee ◽  
Casper J Erkelens

It has been well established that vertical disparity is involved in perception of the three-dimensional layout of a visual scene. The goal of this paper was to examine whether vertical disparities can alter perceived direction. We dissociated the common relationship between vertical disparity and the stimulus direction by applying a vertical magnification to the image presented to one eye. We used a staircase paradigm to measure whether perceived straight-ahead depended on the amount of vertical magnification in the stimulus. Subjects judged whether a test dot was flashed to either the left or the right side of straight-ahead. We found that perceived straight-ahead did indeed depend on the amount of vertical magnification but only after subjects adapted (for 5 min) to vertical scale (and only in five out of nine subjects). We argue that vertical disparity is a factor in the calibration of the relationship between eye-position signals and perceived direction.
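The staircase procedure used to locate perceived straight-ahead can be sketched in miniature: the test dot steps toward the point where the observer's left/right judgments flip, and the flip point is estimated from the later reversals. The simulated observer below is deterministic and purely illustrative; step size, trial count, and the 2-degree shift are assumptions, not the study's parameters:

```python
# Minimal 1-up/1-down staircase converging on the level at which the
# observer's "right of straight-ahead" judgment reverses.
def staircase(judge_right_of_center, start=5.0, step=0.5, trials=40):
    level, direction, reversals = start, -1, []
    for _ in range(trials):
        prev = direction
        # Seen to the right -> step left (down); otherwise step right (up).
        direction = -1 if judge_right_of_center(level) else +1
        if direction != prev:
            reversals.append(level)
        level += direction * step
    # Estimate the flip point from the last few reversal levels.
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Simulated observer whose subjective straight-ahead is shifted 2 deg right:
estimate = staircase(lambda x: x > 2.0)
```

With a fixed step size the staircase oscillates around the true flip point, so the reversal average lands within one step of it.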


2018 ◽  
Vol 246 ◽  
pp. 03020 ◽
Author(s):  
Tan Wei ◽  
Xuan Liu ◽  
Chen Yi ◽  
Erfu Yang

With the development of industrial automation, location measurement of 3D objects is becoming increasingly important, especially because it provides the positional parameters a manipulator needs to grasp an object accurately. In the approach in widespread use today, an image of the stationary object is captured to obtain positional parameters, which are then transmitted to the manipulator. This process introduces delay that reduces the manipulator's working efficiency. A method for calculating the position of a target object in motion is therefore proposed. The method uses monocular vision to track 3D moving objects, applies contour sorting to extract the minimum constrained contour rectangle, and combines this with video alignment to realize tracking, thereby reducing measurement error. The experimental results and analysis show that the adopted measurement method is effective.
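The contour-rectangle extraction step can be illustrated in miniature. Given the pixel coordinates of a tracked object's contour, the snippet below computes the axis-aligned bounding rectangle and its center; the paper's actual contour-sorting method and any rotated-rectangle refinement are not reproduced, and the contour points are made up:

```python
# Axis-aligned bounding rectangle and center for a set of contour points,
# a toy stand-in for the contour-rectangle extraction step described
# above. Points are (x, y) pixel coordinates.
def bounding_rect(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return (x0, y0, x1 - x0, y1 - y0), center   # (x, y, w, h), center

contour = [(12, 7), (20, 9), (18, 15), (11, 13)]
rect, center = bounding_rect(contour)
print(rect, center)   # (11, 7, 9, 8) (15.5, 11.0)
```

The rectangle center is the positional parameter that would be handed to the manipulator at each frame.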


2010 ◽  
Vol 21 (01) ◽  
pp. 016-027 ◽  
Author(s):  
Eun Kyung Jeon ◽  
Carolyn J. Brown ◽  
Christine P. Etler ◽  
Sara O'Brien ◽  
Li-Kuei Chiou ◽  
...  

Background: In the mid-1990s, Cochlear Corporation introduced a cochlear implant (CI) to the market that was equipped with hardware that made it possible to record electrically evoked compound action potentials (ECAPs) from CI users of all ages. Over the course of the next decade, many studies were published that compared ECAP thresholds with levels used to program the speech processor of the Nucleus CI. In 2001 Advanced Bionics Corporation introduced the Clarion CII cochlear implant (the Clarion CII internal device is also known as the CII Bionic Ear). This cochlear implant was also equipped with a system that allowed measurement of the ECAP. While a great deal is known about how ECAP thresholds compare with the levels used to program the speech processor of the Nucleus CI, relatively few studies have reported comparisons between ECAP thresholds and the levels used to program the speech processor of the Advanced Bionics CI. Purpose: To explore the relationship between ECAP thresholds and behavioral measures of perceptual dynamic range for the range of stimuli commonly used to program the speech processor of the Advanced Bionics CI. Research Design: This prospective and experimental study uses correlational and descriptive statistics to define the relationship between ECAP thresholds and perceptual dynamic range measures. Study Sample: Twelve postlingually deafened adults participated in this study. All were experienced users of the Advanced Bionics CI system. Data Collection and Analysis: ECAP thresholds were recorded using the commercially available SoundWave software. Perceptual measures of threshold (T-level), most comfortable level (M-level), and maximum comfortable level (C-level) were obtained using both “tone bursts” and “speech bursts.” The relationship between these perceptual and electrophysiological variables was defined using paired t-tests as well as correlation and linear regression. 
Results: ECAP thresholds were significantly correlated with the perceptual dynamic range measures studied; however, correlations were not strong. Analysis of the individual data revealed considerable discrepancy between the contour of the ECAP-threshold-versus-electrode function and that of the behavioral loudness estimates used for programming. Conclusion: ECAP thresholds recorded from Advanced Bionics cochlear implant users always indicated levels where the programming stimulus was audible for the listener. However, the correlation between ECAP thresholds and M-levels (the primary metric used to program the speech processor of the Advanced Bionics CI), while statistically significant, was quite modest. If programming levels are to be determined on the basis of ECAP thresholds, care should be taken to ensure that stimulation is not uncomfortably loud, particularly on the basal electrodes in the array.
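The paired statistics described here, Pearson correlation and least-squares regression between ECAP thresholds and programming levels, follow directly from the standard formulas. The numbers below are fabricated purely to exercise those formulas and are not the study's measurements:

```python
# Pearson correlation and least-squares regression line between two
# paired measures, as used to relate ECAP thresholds to M-levels.
import statistics

def pearson_and_fit(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / (sxx * syy) ** 0.5         # Pearson correlation coefficient
    slope = sxy / sxx                    # least-squares slope
    intercept = my - slope * mx
    return r, slope, intercept

ecap = [110, 125, 140, 150, 170, 185]   # hypothetical ECAP thresholds (CU)
mlev = [180, 170, 210, 230, 235, 260]   # hypothetical M-levels (CU)
r, slope, intercept = pearson_and_fit(ecap, mlev)
```

A modest but positive r, as reported in the study, corresponds to a regression line with positive slope whose residuals remain large for individual subjects, which is why the authors caution against programming from ECAP thresholds alone.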


2021 ◽  
Author(s):  
Sophia Shatek ◽  
Amanda K Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson

The ability to perceive moving objects is crucial for survival and threat identification. The association between the ability to move and being alive is learned early in childhood, yet not all moving objects are alive. Natural, non-agentive movement (e.g., clouds, fire) causes confusion in children and adults under time pressure. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. Most prior work has used only moving stimuli that are also animate, so it is difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistaken to be alive, and often classified as not moving. Movement explained significant variance in the neural data, during both a classification task and passive viewing. These results show that capacity for movement is an important dimension in the structure of human visual object representations.


2015 ◽  
Vol 17 (3) ◽  
pp. 135 ◽
Author(s):  
Sudibyo Sudibyo

This study aims to predict shooting range from the damage to unjacketed lead .38 Special projectiles fired from a Smith & Wesson .38 Special revolver. The work is motivated by criminal cases involving the misuse of revolver-type handguns and by the finding that 8% of unjacketed lead projectiles recovered as forensic evidence are deformed to a moderate or severe degree.

The study was conducted at the Police Forensic Laboratory using an experimental method: test firing into a shooting box at short ranges of 0.5 to 6 meters, with the target bone repositioned every 0.5 meters. In total, 12 shots were fired at 12 target positions, yielding 12 variants of projectile deformation.

Test firing proceeded in three stages: 1) preparation of equipment and material samples (firearm, bullets, and target bone); 2) accurate firing at the target; 3) measurement and weighing of the deformed projectiles, with the data arranged in table form.

The bullet samples were unjacketed lead .38 Special rounds with the following technical specifications: projectile diameter 9.09 mm (measured 9.05 mm), projectile length 17.90 mm (measured 18.61 mm), lead-antimony projectile material, projectile weight 10.25 grams, muzzle (initial) velocity 265 m/s, round-nose shape, coefficient of form C = 2, ballistic coefficient i = 0.9, and an effective (accurate) range of 25 meters.

The bone samples used as targets were: 1694 SR veal ribs with a hardness of (87 ± 1.5) shore, used for calibration of the test firing; an adult (≥ 35 years) human skull with a hardness of (78 ± 6) shore, used as a research target; and adult (≥ 35 years) human ribs (costal C-3/C-6) with a hardness of (69 ± 19.5) shore, also used as a research target. Keywords: deformation; projectiles; bones
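Taking the stated bullet specifications at face value, the projectile's muzzle kinetic energy, the quantity that drives the deformation on impact, follows from E = ½mv². A quick check:

```python
# Muzzle kinetic energy of the .38 Special projectile described above,
# from E = 0.5 * m * v**2 (mass in kg, velocity in m/s).
mass_kg = 10.25 / 1000      # projectile weight: 10.25 g
velocity_ms = 265.0         # muzzle (initial) velocity: 265 m/s

energy_j = 0.5 * mass_kg * velocity_ms ** 2
print(f"muzzle kinetic energy = {energy_j:.1f} J")  # about 359.9 J
```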

