A Causal Role of Area hMST for Self-Motion Perception in Humans

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Constanze Schmitt ◽  
Bianca R Baltaretu ◽  
J Douglas Crawford ◽  
Frank Bremmer

Previous studies in the macaque monkey have provided clear causal evidence for an involvement of the medial superior temporal area (MST) in the perception of self-motion. These studies also revealed an overrepresentation of contraversive heading. Human imaging studies have identified a functional equivalent (hMST) of macaque area MST. Yet causal evidence for a role of hMST in heading perception is lacking. We employed neuronavigated transcranial magnetic stimulation (TMS) to test for such a causal relationship. We expected TMS over hMST to increase perceptual variance (i.e., impair precision) while leaving mean heading perception (accuracy) unaffected. We presented 8 human participants with an optic flow stimulus simulating forward self-motion across a ground plane in one of 3 directions. Participants indicated their perceived heading. In 57% of the trials, TMS pulses were applied, temporally centered on self-motion onset. The stimulation site was either right-hemisphere hMST, identified by a functional magnetic resonance imaging (fMRI) localizer, or a control area just outside the fMRI localizer activation. As predicted, TMS over area hMST, but not over the control area, increased response variance of perceived heading compared with no-TMS trials. As hypothesized, this effect was strongest for contraversive self-motion. These data provide the first causal evidence for a critical role of hMST in visually guided navigation.
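The accuracy-versus-precision distinction tested in this study can be made concrete with a small sketch (hypothetical numbers, not the study's data): the prediction is that TMS widens the spread of heading reports while leaving their mean intact.

```python
import numpy as np

rng = np.random.default_rng(1)

def heading_accuracy_precision(true_heading, responses):
    """Accuracy = mean signed error (bias); precision = SD of the reports."""
    responses = np.asarray(responses, dtype=float)
    bias = responses.mean() - true_heading
    spread = responses.std(ddof=1)
    return bias, spread

# Hypothetical heading reports for a -10 deg (leftward) heading:
no_tms = rng.normal(loc=-10.0, scale=2.0, size=200)    # baseline variability
with_tms = rng.normal(loc=-10.0, scale=4.0, size=200)  # same mean, wider spread

bias_no, sd_no = heading_accuracy_precision(-10.0, no_tms)
bias_tms, sd_tms = heading_accuracy_precision(-10.0, with_tms)
# Expected pattern: sd_tms > sd_no, while both biases stay near zero.
```
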

2021 ◽  
Vol 118 (32) ◽  
pp. e2106235118 ◽
Author(s):  
Reuben Rideaux ◽  
Katherine R. Storrs ◽  
Guido Maiello ◽  
Andrew E. Welchman

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
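The integrate-or-separate computation described above can be sketched with the standard reliability-weighted fusion rule; the disparity threshold below is a toy stand-in for the trained network's causal-inference behavior, not the authors' model.

```python
def integrate(cue_a, var_a, cue_b, var_b):
    """Reliability-weighted fusion under the single-cause hypothesis."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    estimate = w_a * cue_a + (1.0 - w_a) * cue_b
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # fusion improves precision
    return estimate, fused_var

def self_motion_estimate(visual, vestibular, var_vis, var_vest, threshold):
    """Toy causal-inference rule: fuse the cues when they roughly agree;
    otherwise attribute the visual motion to an external object (the
    adjacent train) and trust the vestibular cue alone."""
    if abs(visual - vestibular) <= threshold:
        estimate, _ = integrate(visual, var_vis, vestibular, var_vest)
        return estimate, "integrated"
    return vestibular, "segregated"
```

With equally reliable cues, agreeing signals are averaged (more precise); a large conflict, as in the railway-carriage illusion, should instead be segregated.
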


2014 ◽  
Vol 112 (10) ◽  
pp. 2470-2480 ◽  
Author(s):  
Andre Kaminiarz ◽  
Anja Schlack ◽  
Klaus-Peter Hoffmann ◽  
Markus Lappe ◽  
Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task, since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
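How eye movements distort the retinal flow pattern follows from the classical instantaneous-flow equations (Longuet-Higgins and Prazdny); a minimal sketch, with illustrative numbers:

```python
def retinal_flow(x, y, depth, translation, rotation):
    """Instantaneous image velocity at image point (x, y) for a scene
    point at the given depth, with observer translation (Tx, Ty, Tz)
    and eye rotation (wx, wy, wz); focal length normalized to 1."""
    tx, ty, tz = translation
    wx, wy, wz = rotation
    u = (-tx + x * tz) / depth + (x * y * wx - (1 + x ** 2) * wy + y * wz)
    v = (-ty + y * tz) / depth + ((1 + y ** 2) * wx - x * y * wy - x * wz)
    return u, v

# Pure forward translation yields radial flow expanding from the origin;
# adding a pursuit rotation superimposes a depth-independent component
# that shifts and distorts the pattern.
```

Because the rotational term does not depend on depth, it can in principle be discounted either from the flow itself or from an extraretinal eye-movement signal, as the VIP neurons described above appear to do.
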


2004 ◽  
Vol 91 (3) ◽  
pp. 1314-1326 ◽  
Author(s):  
Hilary W. Heuer ◽  
Kenneth H. Britten

The medial superior temporal area of extrastriate cortex (MST) contains signals selective for nonuniform patterns of motion often termed “optic flow.” The presence of such tuning, however, does not necessarily imply involvement in perception. To quantify the relationship between these selective neuronal signals and the perception of optic flow, we designed a discrimination task that allowed us to simultaneously record neuronal and behavioral sensitivities to near-threshold optic flow stimuli tailored to MST cells' preferences. In this two-alternative forced-choice task, we controlled the salience of globally opposite patterns (e.g., expansion and contraction) by varying the coherence of the motion. Using these stimuli, we could both relate the sensitivity of neuronal signals in MST to the animal's behavioral sensitivity and also measure trial-by-trial correlation between neuronal signals and behavioral choices. Neurons in MST showed a wide range of sensitivities to these complex motion stimuli. Many neurons had sensitivities equal to or better than the monkey's behavioral threshold. On the other hand, trial-by-trial correlation between neuronal discharge and choice (“choice probability”) was weak or nonexistent in our data. Together, these results lead us to conclude that MST contains sufficient information for threshold judgments of optic flow; however, the role of MST activity in optic flow discriminations may be less direct than in other visual motion tasks previously described by other laboratories.
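Choice probability, as used above, is the area under the ROC curve comparing a neuron's firing-rate distributions conditioned on the animal's two choices. It can be computed directly as a rank statistic; a minimal sketch with made-up spike counts:

```python
import numpy as np

def choice_probability(rates_pref_choice, rates_null_choice):
    """Area under the ROC comparing firing rates on trials where the
    animal chose the neuron's preferred alternative vs. the other one.
    Equivalent to P(r_pref > r_null) + 0.5 * P(tie) over all trial pairs."""
    rp = np.asarray(rates_pref_choice, dtype=float)
    rn = np.asarray(rates_null_choice, dtype=float)
    greater = (rp[:, None] > rn[None, :]).mean()
    ties = (rp[:, None] == rn[None, :]).mean()
    return greater + 0.5 * ties
```

A value of 0.5 indicates no trial-by-trial relationship between discharge and choice (the "weak or nonexistent" outcome reported above); values above 0.5 indicate that higher firing rates accompany preferred-direction choices.
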


2016 ◽  
Vol 115 (1) ◽  
pp. 286-300 ◽  
Author(s):  
Oliver W. Layton ◽  
Brett R. Fajen

Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.


2015 ◽  
Vol 27 (2) ◽  
pp. 266-279 ◽  
Author(s):  
Kamila Śmigasiewicz ◽  
Dariusz Asanowicz ◽  
Nicole Westphal ◽  
Rolf Verleger

Everyday experience suggests that people are equally aware of stimuli in both hemifields. However, when two streams of stimuli are rapidly presented left and right, the second target (T2) is better identified in the left hemifield than in the right hemifield. This left visual field (LVF) advantage may result from differences between hemifields in attracting attention. Therefore, we introduced a visual cue shortly before T2 onset to draw attention to one stream. Thus, to identify T2, attention was correctly positioned with valid cues but had to be redirected to the other stream with invalid ones. If the LVF advantage is caused by differences between hemifields in attracting attention, invalid cues should increase, and valid cues should reduce, the LVF advantage as compared with neutral cues. This prediction was confirmed. ERP analysis revealed that cues evoked an early posterior negativity, confirming that attention was attracted by the cue. This negativity occurred earlier with cues in the LVF, which suggests that responses to salient events are faster in the right hemisphere than in the left hemisphere. Valid cues speeded up, and invalid cues delayed, the T2-evoked N2pc; in addition, valid cues enlarged the T2-evoked P3. After the N2pc, right-side T2 evoked more sustained contralateral negativity than left T2, lasting least long after valid cues. Difficulties in identifying invalidly cued right T2 were reflected in prematurely ending P3 waveforms. Overall, these data provide evidence that the LVF advantage is due to different abilities of the hemispheres in shifting attention to relevant events in their contralateral hemifield.


2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.


2014 ◽  
Vol 111 (11) ◽  
pp. 2332-2342 ◽  
Author(s):  
Hong Xu ◽  
Pascal Wallisch ◽  
David C. Bradley

Self-motion generates patterns of optic flow on the retina. Neurons in the dorsal part of the medial superior temporal area (MSTd) are selective for these optic flow patterns. It has been shown that neurons in this area that are selective for expanding optic flow fields are involved in heading judgments. We wondered how subpopulations of MSTd neurons, those tuned for expansion, rotation or spiral motion, contribute to heading perception. To investigate this question, we recorded from neurons in area MSTd with diverse tuning properties, while the animals performed a heading-discrimination task. We found a significant trial-to-trial correlation (choice probability) between the MSTd neurons and the animals' decision. Neurons in different subpopulations did not differ significantly in terms of their choice probability. Instead, choice probability was strongly related to the sensitivity of the neuron in our sample, regardless of tuning preference. We conclude that a variety of subpopulations of MSTd neurons with different tuning properties contribute to heading judgments.


1989 ◽  
Vol 62 (3) ◽  
pp. 642-656 ◽  
Author(s):  
K. Tanaka ◽  
Y. Fukada ◽  
H. A. Saito

1. The dorsal part of the medial superior temporal area (MST) has two unique types of visually responsive cells: 1) expansion/contraction cells, which selectively respond to either an expansion or a contraction; and 2) rotation cells, which selectively respond to either a clockwise or a counterclockwise rotation. In addition to selectivity for the mode of motion, both types of cells respond preferentially to movements over a wide field rather than over a small field. With the aim of understanding the underlying mechanisms of these selectivities, we carried out experiments on immobilized monkeys anesthetized with N2O. 2. Expansion/contraction and rotation of a pattern extending over a wide field contain three stimulus factors: 1) the spatial arrangement of different directions of movement, 2) the gradient in the speed of regional movement from the center to the periphery of the stimulus, and 3) the size change of texture components of the pattern in the expansion/contraction and the acceleration of movement of texture components toward the center of the stimulus in the rotation. The contribution of each factor to the activation of the cells was evaluated by comparing the response before and after removing the factor from the stimulus. The moving stimuli that lacked one or two of the factors were produced by the use of a cinematographic animation technique. 3. Withdrawal of the first factor, the spatial arrangement of different directions of movement, reduced the response of both expansion/contraction and rotation cells much more severely than withdrawal of either of the other two factors. We concluded that the first factor is far more important for activation than the other two. 4. These results are consistent with the model that expansion/contraction and rotation cells receive converging inputs from many directional cells with relatively small receptive fields in different parts of the visual field. Because MST receives strong fiber projections from MT, MT cells are candidates for the input cells. According to the model, if the convergence is organized so that the preferred directions of the input cells are arranged radially, the target cell will be an expansion/contraction cell; if the input cells are arranged circularly, a rotation cell will result.
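The convergence model in the final paragraph can be sketched numerically: summing cosine-tuned, MT-like direction subunits whose preferred directions are arranged radially yields a template that responds to expansion but not to rotation, and a circular arrangement yields the converse (all numbers below are illustrative, not fit to data).

```python
import numpy as np

def direction_unit(u, v, preferred_direction):
    """Rectified, cosine-tuned response of an MT-like direction cell."""
    speed = np.hypot(u, v)
    direction = np.arctan2(v, u)
    return max(0.0, float(speed * np.cos(direction - preferred_direction)))

def template_response(points, flow, arrangement):
    """Sum direction subunits whose preferred directions are arranged
    'radial' (expansion template) or 'circular' (rotation template)."""
    total = 0.0
    for (x, y), (u, v) in zip(points, flow):
        angle = np.arctan2(y, x)
        pref = angle if arrangement == "radial" else angle + np.pi / 2
        total += direction_unit(u, v, pref)
    return total

# Sample points on a ring; expansion flow points radially outward.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
points = [(np.cos(a), np.sin(a)) for a in angles]
expansion_flow = [(x, y) for (x, y) in points]   # outward, radial flow
rotation_flow = [(-y, x) for (x, y) in points]   # counterclockwise flow

# The radial template fires for expansion but stays silent for rotation.
exp_on_exp = template_response(points, expansion_flow, "radial")
exp_on_rot = template_response(points, rotation_flow, "radial")
```
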


2011 ◽  
Vol 105 (1) ◽  
pp. 60-68 ◽  
Author(s):  
Brian Lee ◽  
Bijan Pesaran ◽  
Richard A. Andersen

Visual signals generated by self-motion are initially represented in retinal coordinates in the early parts of the visual system. Because this information can be used by an observer to navigate through the environment, it must be transformed into body or world coordinates at later stations of the visual-motor pathway. Neurons in the dorsal aspect of the medial superior temporal area (MSTd) are tuned to the focus of expansion (FOE) of the visual image. We performed experiments to determine whether focus tuning curves in area MSTd are represented in eye coordinates or in screen coordinates (which could be head, body, or world centered in the head-fixed paradigm used). Because MSTd neurons adjust their FOE tuning curves during pursuit eye movements to compensate for changes in pursuit and translation speed that distort the visual image, the coordinate frame was determined while the eyes were stationary (fixed gaze or simulated pursuit conditions) and while the eyes were moving (real pursuit condition). We recorded extracellular responses from 80 MSTd neurons in two rhesus monkeys (Macaca mulatta). We found that the FOE tuning curves of the overwhelming majority of neurons were aligned in an eye-centered coordinate frame in each of the experimental conditions [fixed gaze: 77/80 (96%); real pursuit: 77/80 (96%); simulated pursuit: 74/80 (93%); t-test, P < 0.05]. These results indicate that MSTd neurons represent heading in an eye-centered coordinate frame both when the eyes are stationary and when they are moving. We also found that area MSTd demonstrates significant eye-position gain modulation of response fields, much like its posterior parietal neighbors.
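An eye-centered representation predicts that a cell's FOE tuning curve, plotted in screen coordinates, shifts with gaze by exactly the eye-position offset, whereas a screen-centered one would not move. A toy Gaussian tuning model (hypothetical parameters) makes the prediction concrete:

```python
import numpy as np

def foe_response(foe_screen, gaze, preferred_foe_eye, width=10.0, gain=1.0):
    """Gaussian FOE tuning fixed in eye coordinates: the preferred FOE
    moves with gaze on the screen. 'gain' stands in for eye-position
    gain modulation of response amplitude."""
    foe_eye = foe_screen - gaze
    return gain * np.exp(-0.5 * ((foe_eye - preferred_foe_eye) / width) ** 2)

foes = np.linspace(-40.0, 40.0, 81)  # FOE positions on the screen (deg)
curve_center = foe_response(foes, gaze=0.0, preferred_foe_eye=5.0)
curve_right = foe_response(foes, gaze=10.0, preferred_foe_eye=5.0)

peak_center = foes[np.argmax(curve_center)]
peak_right = foes[np.argmax(curve_right)]
# Eye-centered coding: the screen-coordinate peak shifts by the gaze offset.
```
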

