movement detectors
Recently Published Documents

TOTAL DOCUMENTS: 38 (five years: 1)
H-INDEX: 17 (five years: 0)

2021 ◽ Author(s): Matthew R Whiteway ◽ Dan Biderman ◽ Yoni Friedman ◽ Mario Dipoppa ◽ E. Kelly Buchanan ◽ ...

Abstract: Recent neuroscience studies in awake and behaving animals demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected with video cameras, creating a growing need for computer vision algorithms that extract useful information from this video data. In this work we introduce a new semi-supervised framework that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this method, the Partitioned Subspace Variational Autoencoder (PS-VAE), on head-fixed mouse behavioral videos. In a close-up video of a mouse face, where we track pupil location and size, our method extracts unsupervised outputs that correspond to the eyelid and whisker pad positions, with no additional user annotations required. We use the resulting interpretable behavioral representation to construct saccade and whisking detectors, and quantify the accuracy with which these signals can be decoded from neural activity in visual cortex. In a two-camera mouse video, we show how our method separates movements of experimental equipment from animal behavior and extracts unsupervised features such as chest position, again with no additional user annotation needed. This allows us to construct paw and body movement detectors and to decode individual features of behavior from widefield calcium imaging data. Our results demonstrate how the interpretable partitioning of behavioral videos provided by the PS-VAE can facilitate downstream behavioral and neural analyses.
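The core idea — splitting a video's latent representation into a block pinned to tracked pose labels and a block left free to absorb the remaining variance — can be caricatured linearly. The toy data, variable names, and the regression-plus-PCA "encoder" below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "frames" of 50 pixels, driven by 2 tracked pose signals
# plus 1 hidden behavioral signal (think: eyelid position).
n, d = 200, 50
pose = rng.normal(size=(n, 2))       # supervised labels (tracked pose)
hidden = rng.normal(size=(n, 1))     # source no tracker was trained on
mix = rng.normal(size=(3, d))
frames = np.concatenate([pose, hidden], axis=1) @ mix \
    + 0.01 * rng.normal(size=(n, d))

def partition(frames, labels, n_unsup=1):
    """Linear caricature of the PS-VAE partition: the supervised subspace
    is the part of the frames predictable from the labels; unsupervised
    latents are principal components of the residual."""
    B, *_ = np.linalg.lstsq(labels, frames, rcond=None)  # labels -> pixels
    explained = labels @ B
    residual = frames - explained
    # PCA on the residual via SVD; top rows of Vt span the leftover variance
    _, _, Vt = np.linalg.svd(residual, full_matrices=False)
    unsup = residual @ Vt[:n_unsup].T
    return explained, unsup

explained, unsup = partition(frames, pose)
# The recovered unsupervised latent tracks the hidden signal
r = np.corrcoef(unsup[:, 0], hidden[:, 0])[0, 1]
print(round(abs(r), 2))
```

The printed correlation is high because, after regressing out the labeled pose, the dominant remaining structure is exactly the untracked signal — the same intuition behind the eyelid and whisker-pad latents described above, minus the nonlinearity and variational machinery of the actual model.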


2019 ◽ Vol 25 (3) ◽ pp. 263-311 ◽ Author(s): Qinbing Fu ◽ Hongxin Wang ◽ Cheng Hu ◽ Shigang Yue

Motion perception is a critical capability underpinning many aspects of insect life, including predator avoidance and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence but has also advanced our understanding of complex biological visual systems. These mechanisms, refined over millions of years of evolution, offer solid building blocks for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insect visual systems. These models and neural networks include the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review applications of these models to robots and vehicles. From these modeling studies, we summarize the methodologies that generate different direction and size selectivities in motion perception. Finally, we discuss multi-system integration and hardware realization of these bio-inspired motion perception models.
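The direction selectivity summarized here is classically modeled with a correlation-type elementary motion detector (the Hassenstein-Reichardt scheme, a standard building block in this literature rather than any one model from the review). A minimal discrete-time sketch:

```python
import numpy as np

def emd_response(signal_a, signal_b, delay=1):
    """Correlation-type elementary motion detector: each half-detector
    multiplies one photoreceptor signal with a delayed copy of its
    neighbor's; subtracting the two mirror-symmetric halves yields a
    direction-selective (signed) output."""
    a_del = np.roll(signal_a, delay)
    b_del = np.roll(signal_b, delay)
    a_del[:delay] = 0.0  # discard samples wrapped from the end
    b_del[:delay] = 0.0
    return a_del * signal_b - b_del * signal_a  # positive for A -> B motion

# A brightness pulse moving from photoreceptor A to B (preferred direction)
a = np.zeros(20); a[5] = 1.0
b = np.zeros(20); b[6] = 1.0          # arrives at B one time step later
pref = emd_response(a, b).sum()
null = emd_response(b, a).sum()       # same pulse, reversed direction
print(pref, null)
```

The delayed A signal coincides with the undelayed B signal only for motion in the preferred direction, so the summed output is positive for A-to-B motion and negative for the reverse — the basic mechanism behind the direction selectivity of DSN-style models.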


2008 ◽ Vol 100 (2) ◽ pp. 670-680 ◽ Author(s): F. Claire Rind ◽ Roger D. Santer ◽ Geraldine A. Wright

Locusts have two large collision-detecting neurons, the descending contralateral movement detectors (DCMDs), that signal object approach and trigger evasive glides during flight. We sought to investigate whether vision for action, when the locust is in an aroused state rather than a passive viewer, significantly alters visual processing in this collision-detecting pathway. To do this we used two different approaches to determine how the arousal state of a locust affects the prolonged periods of high-frequency spikes typical of the DCMD response to approaching objects that trigger evasive glides. First, we manipulated arousal state by applying a brief mechanical stimulation to the hind leg; this type of change of state occurs when gregarious locusts accumulate in high-density swarms. Second, we examined DCMD responses during flight, because flight produces a heightened physiological state of arousal in locusts. When arousal was induced by either method, we found that the DCMD response recovered from a previously habituated state; that it followed object motion throughout approach; and, most importantly, that it was significantly more likely to generate the maintained spike frequencies capable of evoking gliding dives, even with extremely short intervals (1.8 s) between approaches. Overall, tethered flying locusts responded to 41% of simulated approaching objects (sets of 6 with 1.8 s ISI). When we injected epinastine, the neuronal octopamine receptor antagonist, into the hemolymph, responsiveness declined to 12%, suggesting that octopamine plays a significant role in maintaining responsiveness of the DCMD and the locust to visual stimuli during flight.


1998 ◽ Vol 15 (1) ◽ pp. 113-122 ◽ Author(s): Anne-Kathrin Warzecha ◽ Martin Egelhaaf

It is often assumed that the ultimate goal of a motion-detection system is to faithfully represent the time-dependent velocity of a moving stimulus. This assumption, however, may be an arbitrary standard, since the requirements for a motion-detection system depend on the task to be solved. In the context of optomotor course stabilization, we compare the performance of a motion-sensitive neuron in the fly's optomotor pathway with that of a hypothetical velocity sensor, using stimuli characteristic of a normal behavioral situation in which the actions and reactions of the animal directly affect its visual input. On average, tethered flies flying in a flight simulator are able to compensate, to a large extent, the retinal image displacements induced by an external disturbance of their flight course. The retinal image motion experienced by the fly under these behavioral closed-loop conditions was replayed to the animal in subsequent electrophysiological experiments while the activity of an identified neuron in the motion pathway was recorded. The velocity fluctuations, as well as the corresponding neuronal signals, were analyzed with a statistical approach taken from signal-detection theory. An observer scrutinizing either signal performs almost equally well in detecting the external disturbance.
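The signal-detection comparison can be sketched with a standard two-alternative measure: the probability that a randomly drawn "disturbance present" sample exceeds a "disturbance absent" one (equivalently, the area under the ROC curve). The Gaussian response distributions and their parameters below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_prob(absent, present):
    """Probability that a random 'signal present' sample exceeds a random
    'signal absent' sample -- the area under the ROC curve; 0.5 = chance."""
    return np.mean(present[:, None] > absent[None, :])

# Hypothetical readouts with and without an external disturbance, for an
# ideal velocity sensor and a noisier neuronal signal (assumed spreads).
n = 2000
vel_absent = rng.normal(0.0, 1.0, n)
vel_present = rng.normal(1.5, 1.0, n)
neu_absent = rng.normal(0.0, 1.3, n)
neu_present = rng.normal(1.5, 1.3, n)

auc_vel = detection_prob(vel_absent, vel_present)
auc_neu = detection_prob(neu_absent, neu_present)
print(round(auc_vel, 2), round(auc_neu, 2))
```

With these assumed parameters, the noisier "neuronal" readout detects the disturbance nearly as well as the cleaner "velocity" readout — the same qualitative conclusion the abstract draws, since modest extra noise costs little when the disturbance shifts both distributions by the same amount.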


1996 ◽ Vol 351 (1347) ◽ pp. 1579-1591

Compensatory eye, head, or body movements are essential to stabilize gaze or the path of locomotion. Because such compensatory responses usually lag the sensory input by a time delay, the underlying control system is prone to instability, at least if it operates with a high gain in order to compensate for disturbances efficiently. Behavioural experiments show that the optomotor system of the fly does not become unstable even when its overall gain is so high that, on average, imposed disturbances are largely compensated. Fluctuations of the animal's torque signal do not build up. Rather, they are accompanied by only small-amplitude, jittery retinal image displacements that rarely slip over more than a few neighbouring photoreceptors. Combined electrophysiological experiments on a pair of neurons in the fly's optomotor pathway and model simulations of the optomotor control system suggest that this relative stability is a consequence of the specific velocity dependence of biological movement detectors: their response first increases with increasing velocity, reaches a maximum, and then decreases again. As a consequence, large-amplitude fluctuations in pattern velocity, as are generated when the optomotor system tends toward instability, are transmitted with a small gain, leading to only relatively small torque fluctuations and, thus, small-amplitude image displacements.
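The rise-peak-fall velocity dependence described above falls out of the correlation-type detector in closed form. For a drifting sinusoidal grating and a first-order low-pass delay filter with time constant tau, the mean detector response varies with temporal frequency as w/(1 + w^2) with w = 2*pi*f*tau (a textbook result for this detector class; the tau value below is an arbitrary choice):

```python
import numpy as np

def emd_tuning(freq_hz, tau=0.05):
    """Mean steady-state response of a correlation-type movement detector
    (first-order low-pass delay filter, time constant tau) to a drifting
    sinusoidal grating: bandpass in temporal frequency, peaking where
    2*pi*f*tau = 1."""
    w = 2.0 * np.pi * freq_hz * tau
    return w / (1.0 + w ** 2)

freqs = np.logspace(-1, 2, 200)      # 0.1 .. 100 Hz
resp = emd_tuning(freqs)
peak = freqs[np.argmax(resp)]
print(round(peak, 2))                # near 1/(2*pi*tau) ~ 3.2 Hz for tau=0.05
```

Past the peak the gain falls roughly as 1/w, so the large, fast velocity excursions that arise when the loop approaches instability are fed back only weakly — exactly the self-limiting property the abstract credits for the optomotor system's stability.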


1993 ◽ Vol 10 (4) ◽ pp. 643-652 ◽ Author(s): Roland Kern ◽ Hans-Ortwin Nalbach ◽ Dezsö Varjú

Abstract: Walking crabs move their eyes to compensate for retinal image motion only during rotation and not during translation, even when both components are superimposed. We tested in the rock crab, Pachygrapsus marmoratus, whether this ability to decompose optic flow may arise from topographical interactions of local movement detectors. We recorded the optokinetic eye movements of the rock crab in a sinusoidally oscillating drum which carried two 10-deg-wide black vertical stripes. Their azimuthal separation varied from 20 to 180 deg, and each two-stripe configuration was presented at different azimuthal positions around the crab. In general, the more widely the stripes are separated, the stronger the responses. Furthermore, the response amplitude also depends strongly on the azimuthal positions of the stripes. We propose a model with excitatory interactions between pairs of movement detectors that quantitatively accounts for the enhanced optokinetic responses to widely separated textured patches in the visual field that move in phase. The interactions take place both within one eye and, predominantly, between the two eyes. We conclude that these interactions aid in the detection of rotation.
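The pairwise-excitation idea can be sketched as follows. Each stripe drives a local movement detector, and every pair of detectors signaling in-phase motion adds an excitatory boost that grows with their angular separation; the sinusoidal boost profile and gain values here are simple stand-in assumptions, not the fitted interaction profile from the paper:

```python
import numpy as np

def optokinetic_response(azimuths_deg, gain_local=1.0, gain_pair=0.5):
    """Toy version of the proposed scheme: sum of local detector outputs
    plus a pairwise excitatory term that increases with the angular
    separation of each in-phase detector pair (assumed sin(sep/2) profile).
    Widely separated in-phase stimulation mimics rotation, so it is boosted."""
    total = gain_local * len(azimuths_deg)          # local detector outputs
    for i in range(len(azimuths_deg)):
        for j in range(i + 1, len(azimuths_deg)):
            sep = abs(azimuths_deg[i] - azimuths_deg[j])
            sep = min(sep, 360 - sep)               # wrap-around separation
            total += gain_pair * np.sin(np.radians(sep / 2.0))
    return total

narrow = optokinetic_response([0, 20])    # stripes 20 deg apart
wide = optokinetic_response([0, 180])     # stripes 180 deg apart
print(narrow < wide)
```

Because in-phase motion across widely separated patches is the signature of self-rotation (while translation produces oppositely directed flow on the two sides), weighting such pairs more heavily biases the eye reflex toward rotational flow, matching the paper's conclusion.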

