Visually guided stepping under conditions of step cycle-related denial of visual information

1996 ◽  
Vol 109 (2) ◽  
Author(s):  
M.A. Hollands ◽  
D.E. Marple-Horvat

2017 ◽  
Vol 372 (1717) ◽  
pp. 20160077 ◽  
Author(s):  
Anna Honkanen ◽  
Esa-Ville Immonen ◽  
Iikka Salmela ◽  
Kyösti Heimonen ◽  
Matti Weckström

Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’.


Author(s):  
Daniel Tomsic ◽  
Julieta Sztarker

Decapod crustaceans, in particular semiterrestrial crabs, are highly visual animals that greatly rely on visual information. Their responsiveness to visual moving stimuli, with behavioral displays that can be easily and reliably elicited in the laboratory, together with their sturdiness for experimental manipulation and the accessibility of their nervous system for intracellular electrophysiological recordings in the intact animal, make decapod crustaceans excellent experimental subjects for investigating the neurobiology of visually guided behaviors. Investigations of crustaceans have elucidated the general structure of their eyes and some of their specializations, the anatomical organization of the main brain areas involved in visual processing and their retinotopic mapping of visual space, and the morphology, physiology, and stimulus feature preferences of a number of well-identified classes of neurons, with emphasis on motion-sensitive elements. This anatomical and physiological knowledge, in connection with results of behavioral experiments in the laboratory and the field, are revealing the neural circuits and computations involved in important visual behaviors, as well as the substrate and mechanisms underlying visual memories in decapod crustaceans.


2013 ◽  
Vol 280 (1762) ◽  
pp. 20130700 ◽  
Author(s):  
Jonathan Samir Matthis ◽  
Brett R. Fajen

How do humans achieve such remarkable energetic efficiency when walking over complex terrain such as a rocky trail? Recent research in biomechanics suggests that the efficiency of human walking over flat, obstacle-free terrain derives from the ability to exploit the physical dynamics of our bodies. In this study, we investigated whether this principle also applies to visually guided walking over complex terrain. We found that when humans can see the immediate foreground as little as two step lengths ahead, they are able to choose footholds that allow them to exploit their biomechanical structure as efficiently as they can with unlimited visual information. We conclude that when humans walk over complex terrain, they use visual information from two step lengths ahead to choose footholds that allow them to approximate the energetic efficiency of walking in flat, obstacle-free environments.


2003 ◽  
Vol 90 (5) ◽  
pp. 3330-3340 ◽  
Author(s):  
David E. Vaillancourt ◽  
Keith R. Thulborn ◽  
Daniel M. Corcos

Despite an intricate understanding of the neural mechanisms underlying visual and motor systems, it is not completely understood in which brain regions humans transfer visual information into motor commands. Furthermore, in the absence of visual information, the retrieval process for motor memory information remains unclear. We report an investigation where visuomotor and motor memory processes were separated from only visual and only motor activation. Subjects produced precision grip force during a functional MRI (fMRI) study that included four conditions: rest, grip force with visual feedback, grip force without visual feedback, and visual feedback only. Statistical and subtractive logic analyses segregated the functional process maps. There were three important observations. First, along with the well-established parietal and premotor cortical network, the anterior prefrontal cortex, putamen, ventral thalamus, lateral cerebellum, intermediate cerebellum, and the dentate nucleus were directly involved in the visuomotor transformation process. This activation occurred despite controlling for the visual input and motor output. Second, a detailed topographic orientation of visuomotor to motor/sensory activity was mapped for the premotor cortex, parietal cortex, and the cerebellum. Third, the retrieval of motor memory information was isolated in the dorsolateral prefrontal cortex, ventral prefrontal cortex, and anterior cingulate. The motor memory process did not extend to the supplementary motor area (SMA) and the basal ganglia. These findings provide evidence in humans for a model where a distributed network extends over cortical and subcortical regions to control the visuomotor transformation process used during visually guided tasks. In contrast, a localized network in the prefrontal cortex retrieves force output from memory during internally guided actions.
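The subtractive logic the authors describe can be illustrated with a toy voxel map: activity present during grip-with-vision, beyond what the grip-only and vision-only conditions explain, is attributed to the visuomotor transformation. A minimal sketch, where the threshold, array shapes, and function name are illustrative assumptions rather than the study's actual analysis pipeline:

```python
import numpy as np

def visuomotor_map(grip_vision, grip_only, vision_only, rest, thresh=1.0):
    """Isolate voxels active during visually guided gripping beyond
    what visual input or motor output alone can explain.

    Each argument is an activation map of the same shape; a voxel is
    kept when the grip-with-vision contrast exceeds both single-condition
    contrasts by `thresh`.
    """
    gv = grip_vision - rest
    g = grip_only - rest
    v = vision_only - rest
    return (gv - np.maximum(g, v)) > thresh

# Toy 1-D "brain": voxel 0 is purely visual, voxel 1 purely motor,
# voxel 2 responds only when vision guides the grip.
gv = np.array([2.0, 2.0, 3.0])
g = np.array([0.0, 2.0, 0.5])
v = np.array([2.0, 0.0, 0.5])
rest = np.zeros(3)
mask = visuomotor_map(gv, g, v, rest)
# only voxel 2 survives the subtraction
```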


2010 ◽  
Vol 103 (2) ◽  
pp. 986-1006 ◽  
Author(s):  
Jacques-Étienne Andujar ◽  
Kim Lajoie ◽  
Trevor Drew

We tested the hypothesis that area 5 of the posterior parietal cortex (PPC) contributes to the planning of visually guided gait modifications. We recorded 121 neurons from the PPC of two cats during a task in which cats needed to process visual input to step over obstacles attached to a moving treadmill belt. During unobstructed locomotion, 64/121 (53%) of cells showed rhythmic activity. During steps over the obstacles, 102/121 (84%) of cells showed a significant change of their activity. Of these, 46/102 were unmodulated during the control task. We divided the 102 task-related cells into two groups on the basis of their discharge when the limb contralateral to the recording site was the first to pass over the obstacle. One group (41/102) was characterized by a brief, phasic discharge as the lead forelimb passed over the obstacle (Step-related cells). These cells were recorded primarily from area 5a. The other group (61/102) showed a progressive increase in activity prior to the onset of the swing phase in the modified limb and frequently diverged from control at least one step cycle before the gait modification (Step-advanced cells). Most of these cells were recorded in area 5b. In both groups, some cells maintained a fixed relationship to the activity of the contralateral forelimb regardless of which limb was the first to pass over the obstacle (limb-specific cells), whereas others changed their phase of activity so that they were always related to activity of the first limb to pass over the obstacle, either contralateral or ipsilateral (limb-independent cells). Limb-independent cells were more common among the Step-advanced cell population. We suggest that both populations of cells contribute to the gait modification and that the discharge characteristics of the Step-advanced cells are compatible with a contribution to the planning of the gait modification.


Vision ◽  
2020 ◽  
Vol 4 (4) ◽  
pp. 48
Author(s):  
Adam Reeves

In this paper, I discuss attention in terms of selecting visual information and acting on it. Selection has been taken as a bedrock concept in attention research since James (1890). Selective attention guides action by privileging some things at the expense of others. I formalize this notion with models which capture the relationship between input and output under the control of spatial and temporal attention, by attenuating or discarding certain inputs and by weighing energetic costs, speed, and accuracy in meeting pre-chosen goals. Examples are given from everyday visually guided actions, and from modeling data obtained from visual searches through temporal and spatial arrays and related research. The relation between selection, as defined here, and other forms of attention is discussed at the end.
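The attenuation models described here can be caricatured as a weighted gating of inputs followed by a cost-weighted readout. A minimal sketch, assuming a simple linear attenuation stage; the weights and cost term are illustrative, not Reeves's fitted parameters:

```python
import numpy as np

def select_and_act(inputs, attention, cost_per_unit=0.1):
    """Gate sensory inputs by spatial/temporal attention weights,
    then pick the action maximizing gated evidence minus energetic cost.

    inputs: evidence for each candidate location/action.
    attention: weights in [0, 1]; 0 discards an input, 1 passes it.
    cost_per_unit: toy energetic cost that grows with action index.
    """
    gated = np.asarray(inputs) * np.asarray(attention)
    utility = gated - cost_per_unit * np.arange(len(gated))
    return int(np.argmax(utility))

# Attending location 1 lets weaker evidence there win over a
# stronger but unattended input at location 2.
choice = select_and_act([0.2, 0.6, 0.9], attention=[1.0, 1.0, 0.1])
```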


2013 ◽  
Vol 373-375 ◽  
pp. 217-220
Author(s):  
Yacine Benbelkacem ◽  
Rosmiwati Mohd-Mokhtar

The rate of convergence to the desired pose for grasping an object using visual information can be critical in some applications, such as a pick-and-place routine in assembly where the time between two stops of the conveyor is very short. The visually guided robot must move fast if vision is to bring the sought benefits to industrial setups. In this paper, the three best-known approaches to visual servoing, namely image-based, position-based, and hybrid visual servoing, are evaluated in terms of their speed of convergence to the grasping pose in a pick-and-place task with a momentarily motionless target. An alternative open-loop near-minimum-time approach is also presented and tested on a 5DOF under-actuated robotic arm. The comparison shows that the proposed approach significantly reduces the time of convergence relative to the aforementioned techniques.
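The convergence-rate comparison above rests on the classical visual servoing control law v = -λ L⁺ e, where e is the feature error and L⁺ the pseudoinverse of the interaction matrix; image-based and position-based variants differ mainly in whether features are image-plane or Cartesian quantities. A minimal sketch of one servo iteration (the point-feature setup and identity interaction matrix are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def ibvs_velocity(features, desired, L, gain=0.5):
    """One image-based visual servoing step: v = -gain * pinv(L) @ error.

    features, desired: current and target image-feature vectors.
    L: interaction (image Jacobian) matrix mapping camera velocity
       to feature velocity. Returns a camera velocity command.
    """
    error = np.asarray(features) - np.asarray(desired)
    return -gain * np.linalg.pinv(L) @ error

# Toy example: two image-plane coordinates controlled by two camera
# translation components (identity interaction matrix). The error
# decays exponentially at a rate set by the gain.
L = np.eye(2)
v = ibvs_velocity([1.0, -2.0], [0.0, 0.0], L, gain=0.5)
```

The gain directly trades convergence speed against overshoot, which is why the paper's near-minimum-time alternative can outperform a fixed-gain servo loop.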


2018 ◽  
Vol 119 (5) ◽  
pp. 1947-1961 ◽  
Author(s):  
Abigail C. Gambrill ◽  
Regina L. Faulkner ◽  
Hollis T. Cline

The circuit controlling visually guided behavior in nonmammalian vertebrates, such as Xenopus tadpoles, includes retinal projections to the contralateral optic tectum, where visual information is processed, and tectal motor outputs projecting ipsilaterally to hindbrain and spinal cord. Tadpoles have an intertectal commissure whose function is unknown, but it might transfer information between the tectal lobes. Differences in visual experience between the two eyes have profound effects on the development and function of visual circuits in animals with binocular vision, but the effects on animals with fully crossed retinal projections are not clear. We tested the effect of monocular visual experience on the visuomotor circuit in Xenopus tadpoles. We show that cutting the intertectal commissure or providing visual experience to one eye (monocular visual experience) is sufficient to disrupt tectally mediated visual avoidance behavior. Monocular visual experience induces asymmetry in tectal circuit activity across the midline. Repeated exposure to monocular visual experience drives maturation of the stimulated retinotectal synapses, seen as increased AMPA-to-NMDA ratios, induces synaptic plasticity in intertectal synaptic connections, and induces bilaterally asymmetric changes in the tectal excitation-to-inhibition ratio (E/I). We show that unilateral expression of peptides that interfere with AMPA or GABAA receptor trafficking alters E/I in the transfected tectum and is sufficient to degrade visuomotor behavior. Our study demonstrates that monocular visual experience in animals with fully crossed visual systems produces asymmetric circuit function across the midline and degrades visuomotor behavior. The data further suggest that intertectal inputs are an integral component of a bilateral visuomotor circuit critical for behavior. 
NEW & NOTEWORTHY The developing optic tectum of Xenopus tadpoles represents a unique circuit in which laterally positioned eyes provide sensory input to a circuit that is transiently monocular, but which will be binocular in the animal’s adulthood. We challenge the idea that the two lobes of tadpole optic tectum function independently by testing the requirement of interhemispheric communication and demonstrate that unbalanced sensory input can induce structural and functional plasticity in the tectum sufficient to disrupt function.
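The increased AMPA-to-NMDA ratio reported above is conventionally measured in voltage clamp: the AMPA component as the peak inward current at a hyperpolarized potential where NMDA receptors are Mg²⁺-blocked, and the NMDA component at a depolarized potential, sampled late, after the fast AMPA current has decayed. A sketch under that standard convention (holding potentials, time point, and trace shapes are assumptions, not the authors' exact protocol):

```python
import numpy as np

def ampa_nmda_ratio(trace_neg70, trace_pos40, dt_ms=0.1, nmda_t_ms=50.0):
    """Estimate an AMPA-to-NMDA ratio from two evoked EPSC traces.

    trace_neg70: current at -70 mV (AMPA-dominated; NMDA is Mg2+-blocked).
    trace_pos40: current at +40 mV, sampled at a late time point where
    the fast AMPA component has decayed and NMDA current dominates.
    """
    ampa = np.min(trace_neg70)                 # peak inward (negative) current
    nmda = trace_pos40[int(nmda_t_ms / dt_ms)] # late outward current
    return abs(ampa) / nmda

# Synthetic traces: fast AMPA decay (tau ~5 ms), slow NMDA decay (~80 ms).
t = np.arange(0.0, 100.0, 0.1)
trace_neg70 = -100.0 * np.exp(-t / 5.0)
trace_pos40 = 40.0 * np.exp(-t / 80.0)
ratio = ampa_nmda_ratio(trace_neg70, trace_pos40)
```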


2019 ◽  
Author(s):  
Suryadeep Dash ◽  
Tyler R. Peel ◽  
Stephen G. Lomber ◽  
Brian D. Corneil

Abstract Express saccades (ESs) are a manifestation of a visual grasp reflex triggered when visual information arrives in the intermediate layers of the superior colliculus (SCi), which in turn orchestrates the lower-level brainstem saccade generator to evoke a saccade with a very short latency (∼100 ms). A prominent theory of ES generation is that ESs are facilitated by preparatory signals, presumably from cortical areas, which prime the SCi prior to the arrival of visual information. Here, we test this theory by reversibly inactivating a key cortical input to the SCi, the frontal eye fields (FEF), while monkeys perform an oculomotor task that promotes ES generation. Across three tasks with different combinations of potential target locations and uni- or bilateral FEF inactivation, we found that monkeys retained the ability to generate ESs, despite decreases in ES frequency during FEF inactivation. This result is consistent with the FEF having a facilitatory but not critical role in ES generation, likely because other cortical areas compensate for the loss of preparatory input to the SCi. However, we did find decreases in the accuracy and peak velocity of ESs generated during FEF inactivation, which argues for an influence of the FEF on the saccadic burst generator even during ESs. Overall, our results shed further light on the role of the FEF in the shortest-latency visually guided eye movements.
New & Noteworthy Express saccades (ESs) are the shortest-latency visually guided saccades. The frontal eye fields (FEF) are thought to promote ESs by establishing the necessary preconditions in the superior colliculus. Here, by reversibly inactivating the FEF either unilaterally or bilaterally, we support this view by showing that the FEF plays an assistive but not critical role in ES generation. We also found that FEF inactivation lowered ES peak velocity, emphasizing a contribution of the FEF to ES kinematics.
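Express saccades are conventionally separated from regular saccades by their reaction latency, with the express mode clustering near 100 ms. A toy classifier under that assumption; the 70–120 ms window is a common convention in the saccade literature, not the authors' exact criterion:

```python
def classify_saccade(latency_ms, window=(70, 120)):
    """Label a visually guided saccade by its reaction latency.

    Latencies inside the express window reflect the short-latency
    visual grasp reflex routed through the superior colliculus;
    longer latencies are regular saccades, and latencies below the
    window are too fast to be visually triggered.
    """
    lo, hi = window
    if latency_ms < lo:
        return "anticipatory"
    return "express" if latency_ms <= hi else "regular"

labels = [classify_saccade(t) for t in (60, 100, 180)]
```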


2007 ◽  
Vol 98 (1) ◽  
pp. 488-501 ◽  
Author(s):  
M. A. Umilta ◽  
T. Brochier ◽  
R. L. Spinks ◽  
R. N. Lemon

To understand the relative contributions of primary motor cortex (M1) and area F5 of the ventral premotor cortex (PMv) to visually guided grasp, we made simultaneous multiple electrode recordings from the hand representations of these two areas in two adult macaque monkeys. The monkeys were trained to fixate, reach out, and grasp one of six objects presented in a pseudorandom order. In M1, 326 task-related neurons (104 of which were identified as pyramidal tract neurons) and, in F5, 138 neurons were analyzed as separate populations. All three populations showed activity that distinguished the six objects grasped by the monkey, and they responded in a manner that generalized across different sets of objects. F5 neurons showed object/grasp-related tuning earlier than M1 neurons in the visual presentation and premovement periods. F5 neurons also generally showed a greater preference for particular objects/grasps than did M1 neurons. F5 neurons remained tuned to a particular grasp throughout both the premovement and reach-to-grasp phases of the task, whereas M1 neurons showed different selectivity during the different phases. We also found that different types of grasp appear to be represented by different overall levels of activity within the F5-M1 circuit. Altogether these properties are consistent with the notion that F5 grasping-related neurons play a role in translating visual information about the physical properties of an object into the motor commands that are appropriate for grasping, and which are elaborated within M1 for delivery to the appropriate spinal machinery controlling hand and digit muscles.

