Dynamic brain interactions during picture naming

2018
Author(s): Aram Giahi Saravani, Kiefer J. Forseth, Nitin Tandon, Xaq Pitkow

Abstract
Brain computations involve multiple processes by which sensory information is encoded and transformed to drive behavior. These computations are thought to be mediated by dynamic interactions between populations of neurons. Here we demonstrate that human brains exhibit a reliable sequence of neural interactions during speech production. We use an autoregressive hidden Markov model to identify dynamical network states exhibited by electrocorticographic signals recorded from human neurosurgical patients. Our method resolves dynamic latent network states on a trial-by-trial basis. We characterize individual network states according to the patterns of directional information flow between cortical regions of interest. These network states occur consistently and in a specific, interpretable sequence across trials and subjects: a fixed-length visual processing state is followed by a variable-length language state, and then by a terminal articulation state. This empirical evidence validates classical psycholinguistic theories that have posited such intermediate states during speaking. It further reveals that these state dynamics are not localized to one brain area or one sequence of areas, but are instead a network phenomenon.
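The autoregressive hidden Markov idea can be illustrated with a minimal sketch: each latent state is an AR(1) model of the signal, and each time point is assigned to the state whose dynamics best predict it. The coefficients, noise level, and pointwise decoding rule below are illustrative assumptions, not the authors' fitted model (which also infers state-transition structure).

```python
import random

random.seed(0)

# Two latent network states, each defined by a hypothetical AR(1) coefficient.
A = {0: 0.9, 1: -0.7}
SIGMA = 0.3  # assumed emission noise standard deviation

# Simulate a trial: first half in state 0, second half in state 1.
true_states = [0] * 100 + [1] * 100
x = [0.0]
for s in true_states:
    x.append(A[s] * x[-1] + random.gauss(0.0, SIGMA))

def loglik(residual):
    # Gaussian log-likelihood of the one-step prediction error (up to a constant).
    return -0.5 * (residual / SIGMA) ** 2

# Pointwise decode: pick the state whose AR model best predicts each step.
decoded = []
for t in range(1, len(x)):
    scores = {k: loglik(x[t] - a * x[t - 1]) for k, a in A.items()}
    decoded.append(max(scores, key=scores.get))

accuracy = sum(d == s for d, s in zip(decoded, true_states)) / len(true_states)
```

A full HMM would smooth these pointwise assignments with learned transition probabilities, which is what makes trial-by-trial state sequences reliable.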

2019
Author(s): N. Alex Cayco Gajic, Séverine Durand, Michael Buice, Ramakrishnan Iyer, Clay Reid, ...

Summary
How neural populations represent sensory information, and how that representation is transformed from one brain area to another, are fundamental questions of neuroscience. The dorsolateral geniculate nucleus (dLGN) and primary visual cortex (V1) represent two distinct stages of early visual processing. Classic sparse coding theories propose that V1 neurons represent local features of images. More recent theories have argued that the visual pathway transforms visual representations to become increasingly linearly separable. To test these ideas, we simultaneously recorded the spiking activity of mouse dLGN and V1 in vivo. We find strong evidence for both sparse coding and linear separability theories. Surprisingly, the correlations between neurons in V1 (but not dLGN) were shaped so as to be irrelevant for stimulus decoding, a feature which we show enables linear separability. Therefore, our results suggest that the dLGN-V1 transformation reshapes correlated variability in a manner that facilitates linear decoding while producing a sparse code.
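The idea of decoding-irrelevant correlations can be sketched in a toy model: shared (correlated) noise is added equally to all neurons, which lies orthogonal to the direction separating the two stimulus means, so a simple linear decoder still separates the classes. The population size, noise levels, and the perceptron decoder are illustrative assumptions, not the paper's analysis.

```python
import random

random.seed(1)

def pop_response(mean):
    # 10-neuron response: shared (correlated) noise plus private noise.
    shared = random.gauss(0.0, 0.5)
    return [mean[i] + shared + random.gauss(0.0, 0.3) for i in range(10)]

# Two stimulus classes with complementary mean response profiles; the shared
# noise direction (all-ones) is orthogonal to their difference.
mean_a = [1.0 if i < 5 else 0.0 for i in range(10)]
mean_b = [0.0 if i < 5 else 1.0 for i in range(10)]

data = [(pop_response(mean_a), 1) for _ in range(100)] + \
       [(pop_response(mean_b), -1) for _ in range(100)]
random.shuffle(data)

# Perceptron: find a hyperplane that linearly separates the two classes.
w, b = [0.0] * 10, 0.0
for _ in range(20):
    for xvec, y in data:
        if y * (sum(wi * xi for wi, xi in zip(w, xvec)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, xvec)]
            b += y

correct = sum(
    (sum(wi * xi for wi, xi in zip(w, xvec)) + b) * y > 0 for xvec, y in data
)
accuracy = correct / len(data)
```

Because the shared fluctuation projects to zero on the coding direction, the decoder's accuracy is limited only by the private noise.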


2000, Vol 84 (6), pp. 2984-2997
Author(s): Per Jenmalm, Seth Dahlstedt, Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects, during a prototypical manipulatory task, use visual and tactile sensory information for adapting fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus, subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.


Author(s): Ronald H Stevens, Trysha L Galloway

Uncertainty is a fundamental property of neural computation that becomes amplified when sensory information does not match a person’s expectations of the world. Uncertainty and hesitation are often early indicators of potential disruption, and the ability to rapidly measure uncertainty would have implications for future educational and training efforts by targeting reflective discussions about past actions, supporting in-progress corrections, and generating forecasts about future disruptions. An approach is described combining neurodynamics and machine learning to provide quantitative measures of uncertainty. Models of neurodynamic information derived from electroencephalogram (EEG) brainwaves have provided detailed neurodynamic histories of US Navy submarine navigation team members. Persistent periods (25–30 s) of neurodynamic information were seen as discrete peaks when establishing the submarine’s position and were identified as periods of uncertainty by an artificial intelligence (AI) system previously trained to recognize the frequency, magnitude, and duration of different patterns of uncertainty in healthcare and student teams. Transition matrices of neural network states closely predicted the future uncertainty of the navigation team during the three minutes prior to a grounding event. These studies suggest that the dynamics of uncertainty may have common characteristics across teams and tasks and that forecasts of their short-term evolution can be estimated.
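A transition-matrix forecast of the kind described can be sketched in a few lines: given a matrix of probabilities for moving between a nominal and an uncertain neurodynamic state (the states, probabilities, and epoch length below are illustrative assumptions), propagating the state distribution forward yields the short-term forecast.

```python
# Hypothetical labels: state 0 = nominal, state 1 = uncertain.
T = [[0.9, 0.1],   # assumed transition probabilities, estimated in
     [0.4, 0.6]]   # practice from the observed state sequences

def step(p, T):
    # One Markov update: p_next[j] = sum_i p[i] * T[i][j]
    return [sum(p[i] * T[i][j] for i in range(len(p))) for j in range(len(p))]

p = [1.0, 0.0]       # start fully in the nominal state
for _ in range(12):  # e.g. twelve 15-second epochs = a 3-minute horizon
    p = step(p, T)

forecast_uncertainty = p[1]  # forecast probability of the uncertain state
```

With these assumed probabilities the forecast settles near the chain's stationary level of uncertainty; a rising forecast would flag an approaching disruption.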


2018, Vol 7 (3.12), pp. 116
Author(s): N Vignesh, Meghachandra Srinivas Reddy.P, Nirmal Raja.G, Elamaram E, B Sudhakar

Eyes play an important role in our day-to-day lives and are perhaps the most valuable gift we have. This world is visible to us because we are blessed with eyesight, but some people lack this ability to visualize things. As a result, they have a lot of trouble moving comfortably in public places. Hence, a wearable device should be designed for such visually impaired people. A smart shoe is a wearable system designed to provide directional information to visually impaired people. The system has great potential to provide smart and sensible navigation guidance, especially when integrated with visual processing units. During operation, the user wears the shoes; when the sensors detect an obstacle, the user is informed through the Android device they carry. The smart shoes, together with the application on the Android device, help the user move around independently.
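The obstacle-alert logic can be sketched as follows; the distance threshold, function names, and notification callback are hypothetical, standing in for the shoe's ultrasonic sensor reading and the Android notification path.

```python
# Hypothetical alert logic: a sensor reading below the threshold triggers
# a notification to the paired Android device.
ALERT_DISTANCE_CM = 80  # assumed warning range

def check_obstacle(distance_cm, notify):
    """Send an alert via `notify` if an obstacle is within range."""
    if distance_cm < ALERT_DISTANCE_CM:
        notify(f"Obstacle {distance_cm} cm ahead")
        return True
    return False

# Simulated readings: a clear path produces no alert, a close obstacle does.
messages = []
check_obstacle(150, messages.append)
check_obstacle(45, messages.append)
```

In the real device the callback would hand the message to the Bluetooth link rather than a list, but the threshold comparison is the core of the behaviour described.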


2015, Vol 27 (4), pp. 832-841
Author(s): Amanda K. Robinson, Judith Reinhard, Jason B. Mattingley

Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors and that they have a higher gray matter concentration in olfactory areas of the OFC. We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.


2019, Vol 16 (157), pp. 20190181
Author(s): Lana Khaldy, Orit Peleg, Claudia Tocco, L. Mahadevan, Marcus Byrne, ...

Moving along a straight path is a surprisingly difficult task. This is because, with each ensuing step, noise is generated in the motor and sensory systems, causing the animal to deviate from its intended route. When relying solely on internal sensory information to correct for this noise, the directional error generated with each stride accumulates, ultimately leading to a curved path. In contrast, external compass cues effectively allow the animal to correct for errors in its bearing. Here, we studied straight-line orientation in two differently sized dung beetles. This allowed us to characterize and model the size of the directional error generated with each step, in the absence of external visual compass cues (motor error) as well as in the presence of these cues (compass and motor errors). In addition, we model how dung beetles balance the influence of internal and external orientation cues as they orient along straight paths under the open sky. We conclude that the directional error that unavoidably accumulates as the beetle travels is inversely proportional to the step size of the insect, and that both beetle species weigh the two sources of directional information in a similar fashion.
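The inverse relationship between step size and accumulated directional error can be illustrated with a simple simulation: with compass cues absent, heading noise is injected at every step, so covering a fixed distance in fewer, larger steps accumulates less error and yields a straighter path. The step sizes and noise level are illustrative, not the fitted parameters from the study.

```python
import math
import random

random.seed(2)

def path_straightness(n_steps, step_size, heading_noise_sd):
    # Heading error accumulates step by step (no compass correction).
    x = y = heading = 0.0
    for _ in range(n_steps):
        heading += random.gauss(0.0, heading_noise_sd)
        x += step_size * math.cos(heading)
        y += step_size * math.sin(heading)
    # Straightness index: net displacement / total path length (1.0 = straight).
    return math.hypot(x, y) / (n_steps * step_size)

distance, noise, trials = 100.0, 0.05, 200
small_steps = sum(path_straightness(int(distance / 0.5), 0.5, noise)
                  for _ in range(trials)) / trials
large_steps = sum(path_straightness(int(distance / 2.0), 2.0, noise)
                  for _ in range(trials)) / trials
```

Averaged over trials, the large-step walker ends up with a straighter path over the same travel distance, mirroring the paper's conclusion about step size.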


2014, Vol 26 (3), pp. 621-634
Author(s): Vaia Lestou, Judith Mi Lin Lam, Katie Humphreys, Zoe Kourtzi, Glyn W. Humphreys

Hierarchical models of visual processing assume that global pattern recognition is contingent on the progressive integration of local elements across larger spatial regions, operating from early through intermediate to higher-level cortical regions. Here, we present results from neuropsychological fMRI that refute such models. We report two patients, one with lesions to intermediate ventral regions and the other with damage around the intraparietal sulcus (IPS). The patient with ventral damage showed normal behavioral and BOLD responses to global Glass patterns. The patient with IPS damage was impaired in discriminating global patterns and showed a lack of significant responses to these patterns in intermediate visual regions spared by the lesion. However, this patient did show BOLD activity to translational patterns, where local element relations are important. These results suggest that activation of intermediate ventral regions is not necessary to code global patterns; instead global patterns are coded in a heterarchical fashion. High-level regions of dorsal cortex are necessary to generate global pattern coding in intermediate ventral regions; in contrast, local integration processes are not sufficient.


2017, Vol 117 (2), pp. 492-508
Author(s): James E. Niemeyer, Michael A. Paradiso

Contrast sensitivity is fundamental to natural visual processing and an important tool for characterizing both visual function and clinical disorders. We simultaneously measured contrast sensitivity and neural contrast response functions and compared measurements in common laboratory conditions with naturalistic conditions. In typical experiments, a subject holds fixation and a stimulus is flashed on, whereas in natural vision, saccades bring stimuli into view. Motivated by our previous V1 findings, we tested the hypothesis that perceptual contrast sensitivity is lower in natural vision and that this effect is associated with corresponding changes in V1 activity. We found that contrast sensitivity and V1 activity are correlated and that the relationship is similar in laboratory and naturalistic paradigms. However, in the more natural situation, contrast sensitivity is reduced up to 25% compared with that in a standard fixation paradigm, particularly at lower spatial frequencies, and this effect correlates with significant reductions in V1 responses. Our data suggest that these reductions in natural vision result from fast adaptation on one fixation that lowers the response on a subsequent fixation. This is the first demonstration of rapid, natural-image adaptation that carries across saccades, a process that appears to constantly influence visual sensitivity in natural vision. NEW & NOTEWORTHY Visual sensitivity and activity in brain area V1 were studied in a paradigm that included saccadic eye movements and natural visual input. V1 responses and contrast sensitivity were significantly reduced compared with results in common laboratory paradigms. The parallel neural and perceptual effects of eye movements and stimulus complexity appear to be due to a form of rapid adaptation that carries across saccades.
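One way to picture a contrast-sensitivity reduction of this size is with a Naka-Rushton contrast response function in which adaptation shifts the semi-saturation contrast rightward. The parameter values, and the modeling of cross-saccade adaptation as a c50 shift, are illustrative assumptions, not the authors' fitted model.

```python
def response(c, r_max=1.0, c50=0.2, n=2.0):
    # Naka-Rushton contrast response function (parameters assumed).
    return r_max * c**n / (c**n + c50**n)

contrast = 0.3
fixation = response(contrast)            # standard fixation paradigm
natural = response(contrast, c50=0.28)   # adapted: higher c50 after a saccade
reduction = 1.0 - natural / fixation     # fractional response reduction
```

With these assumed parameters the shifted curve produces a response reduction in the ~20-25% range at this contrast, of the same order as the perceptual effect reported.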


1999, Vol 09 (05), pp. 397-403
Author(s): TIM CHAPMAN, BARBARA WEBB

Crickets are able to extract directional information about a wind stimulus through the filiform hairs located on their cerci. This paper describes the design and testing of a neuromorphic sensor that aims to achieve a close correlation with both the physical and functional properties of these hairs. An integrate and fire neural network is used to process the sensory information in real time. The resulting system is shown to be capable of extracting directional information from a wind stimulus and producing an appropriate motor control pattern.
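A minimal leaky integrate-and-fire unit, the building block of such a spiking network, can be sketched as follows (the time constant, threshold, and input currents are illustrative, not the sensor's):

```python
def lif_spikes(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire: integrate input, fire and reset at threshold."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)  # leaky integration of input current
        if v >= threshold:           # fire a spike and reset the membrane
            spikes.append(t)
            v = 0.0
    return spikes

# A stronger input (e.g. a hair deflected by wind from its preferred
# direction) drives more spikes; a weak input never reaches threshold.
weak = lif_spikes([0.05] * 100)
strong = lif_spikes([0.3] * 100)
```

In the sensor, the spike rates of direction-tuned units like this one are compared downstream to produce the motor control pattern.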


2016
Author(s): Alla Brodski-Guerniero, Georg-Friedrich Paasch, Patricia Wollstadt, Ipek Özdemir, Joseph T. Lizier, ...

Abstract
Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre-)activated prior knowledge serving these predictions are still unknown. Based on the idea that such pre-activated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time-courses from magnetoencephalography (MEG) recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Pre-activation of prior knowledge for faces showed as alpha- and beta-band related AIS increases in content-specific areas; these AIS increases were behaviourally relevant in brain area FFA. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Moreover, top-down transfer of predictions estimated by transfer entropy was associated with beta frequencies. Our results support accounts that activated prior knowledge and the corresponding predictions are signalled in low-frequency activity (<30 Hz).
Significance statement
Our perception is not only determined by the information our eyes/retina and other sensory organs receive from the outside world, but also depends strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare – mostly because this kind of evidence requires making strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented and transferred in low-frequency brain activity.
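Active information storage is, at its simplest, the mutual information between a signal's past and its present. A lag-1, discrete-valued plug-in estimate can be sketched as follows (a simplification of the AIS estimator used on continuous MEG sources, which conditions on a longer embedded past):

```python
import math
import random
from collections import Counter

random.seed(3)

def ais_lag1(series):
    # Plug-in estimate of I(X_{t-1}; X_t) for a discrete series, in bits.
    pairs = list(zip(series[:-1], series[1:]))
    joint, past, pres = Counter(pairs), Counter(series[:-1]), Counter(series[1:])
    n = len(pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((past[a] / n) * (pres[b] / n)))
    return mi

# A process that tends to repeat its last value stores information...
sticky = [0]
for _ in range(5000):
    sticky.append(sticky[-1] if random.random() < 0.9 else 1 - sticky[-1])

# ...while an i.i.d. process stores (almost) none.
iid = [random.randint(0, 1) for _ in range(5001)]
```

High AIS in a region's baseline activity, in this picture, means its present state is predictable from its own past, consistent with maintained (pre-activated) content.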

