Temporal synchrony is an effective cue for grouping and segmentation in the absence of form cues

2016 · Vol 16 (11) · pp. 23 · Author(s): Reuben Rideaux, David R. Badcock, Alan Johnston, Mark Edwards

2002 · Vol 282 (1) · pp. H372-H379 · Author(s): Bradley T. Wyman, William C. Hunter, Frits W. Prinzen, Owen P. Faris, Elliot R. McVeigh

Resynchronization is frequently used for the treatment of heart failure, but the mechanism for improvement is not entirely clear. In the present study, the temporal synchrony and spatiotemporal distribution of left ventricular (LV) contraction were investigated in eight dogs during right atrial (RA), right ventricular apex (RVa), and biventricular (BiV) pacing using tagged magnetic resonance imaging. Mechanical activation (MA; the onset of circumferential shortening) was calculated from the images throughout the left ventricle for each pacing protocol. MA width (the time for 20–90% of the left ventricle to contract) was significantly shorter during RA pacing (43.6 ± 17.1 ms) than during BiV and RVa pacing (67.4 ± 15.2 and 77.6 ± 16.4 ms, respectively). The activation delay vector (the net delay in MA from one side of the left ventricle to the other) was significantly shorter during RA (18.9 ± 8.1 ms) and BiV (34.2 ± 18.3 ms) pacing than during RVa pacing (73.8 ± 16.3 ms). The rate of LV pressure increase was significantly lower during RVa than during RA pacing (1,070 ± 370 vs. 1,560 ± 300 mmHg/s), with intermediate values for BiV pacing (1,310 ± 220 mmHg/s). BiV pacing thus has a greater impact on correcting the spatial distribution of LV contraction than on improving the temporal synchronization of contraction. The spatiotemporal distribution of contraction may be an important determinant of ventricular function.
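The two timing indices in this abstract can be illustrated with a short sketch. The percentile reading of "MA width" and the angular-projection construction of the activation delay vector are assumptions made here for illustration, not the authors' exact tagged-MRI definitions; the function names and the 12-sector short-axis geometry are likewise hypothetical.

```python
import numpy as np

def ma_width(activation_times):
    """Time for 20-90% of sampled LV regions to begin shortening.

    One plausible reading: the spread between the 20th and 90th
    percentiles of the regional mechanical-activation times (ms).
    """
    t = np.asarray(activation_times, dtype=float)
    return np.percentile(t, 90) - np.percentile(t, 20)

def activation_delay_vector(activation_times, angles):
    """Net delay in mechanical activation across the LV short axis.

    Illustrative construction: project mean-removed activation times
    onto each sector's direction and sum, giving a 2-D vector whose
    magnitude is the side-to-side delay (ms) and whose direction
    points toward the latest-activating wall.
    """
    t = np.asarray(activation_times, dtype=float)
    theta = np.asarray(angles, dtype=float)
    dt = t - t.mean()
    # factor 4/n makes a cosine delay pattern with early-to-late
    # span S recover a vector of magnitude S
    vx = 4.0 / len(t) * np.sum(dt * np.cos(theta))
    vy = 4.0 / len(t) * np.sum(dt * np.sin(theta))
    return np.array([vx, vy])

# Example: RVa-like pacing, septum (theta=0) early, lateral wall late
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
times = 40.0 - 35.0 * np.cos(theta)  # septum ~5 ms, lateral ~75 ms
vec = activation_delay_vector(times, theta)
print(ma_width(times), np.linalg.norm(vec))
```

With this construction, synchronous activation gives a zero delay vector, and the RVa-like cosine pattern above recovers the 70 ms septum-to-lateral span.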


2019 · Author(s): Adrien Doerig, Lynn Schmittwilken, Bilge Sayim, Mauro Manassi, Michael H. Herzog

Abstract

Classically, visual processing is described as a cascade of local feedforward computations. Feedforward Convolutional Neural Networks (ffCNNs) have shown how powerful such models can be. However, using visual crowding as a well-controlled challenge, we previously showed that no classic model of vision, including ffCNNs, can explain human global shape processing (1). Here, we show that Capsule Neural Networks (CapsNets; 2), combining ffCNNs with recurrent grouping and segmentation, solve this challenge. We also show that ffCNNs and standard recurrent CNNs do not, suggesting that the grouping and segmentation capabilities of CapsNets are crucial. Furthermore, we provide psychophysical evidence that grouping and segmentation are implemented recurrently in humans, and show that CapsNets reproduce these results well. We discuss why recurrence seems needed to implement grouping and segmentation efficiently. Together, we provide mutually reinforcing psychophysical and computational evidence that a recurrent grouping and segmentation process is essential to understand the visual system and to create better models that harness global shape computations.

Author Summary

Feedforward Convolutional Neural Networks (ffCNNs) have revolutionized computer vision and are deeply transforming neuroscience. However, ffCNNs only roughly mimic human vision. There is a rapidly expanding body of literature investigating differences between humans and ffCNNs. Several findings suggest that, unlike humans, ffCNNs rely mostly on local visual features. Furthermore, ffCNNs lack recurrent connections, which abound in the brain. Here, we use visual crowding, a well-known psychophysical phenomenon, to investigate recurrent computations in global shape processing. Previously, we showed that no model based on the classic feedforward framework of vision can explain global effects in crowding. Here, we show that Capsule Neural Networks (CapsNets), which combine ffCNNs with recurrent grouping and segmentation, solve this challenge. ffCNNs and recurrent CNNs with lateral and top-down recurrent connections do not, suggesting that grouping and segmentation are crucial for human-like global computations. Based on these results, we hypothesize that one computational function of recurrence is to implement grouping and segmentation efficiently. We provide psychophysical evidence that grouping and segmentation are indeed based on time-consuming recurrent processes in the human brain, and CapsNets reproduce these results as well. Together, we provide mutually reinforcing computational and psychophysical evidence that a recurrent grouping and segmentation process is essential to understand the visual system and to create better models that harness global shape computations.
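The abstract does not spell out the CapsNet architecture, but the recurrent grouping step that CapsNets rely on is typically implemented as iterative routing-by-agreement between capsule layers. A minimal NumPy sketch of that iteration follows; the array shapes and iteration count are illustrative, not the networks actually trained in the study.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Shrinks short vectors toward 0 and caps long ones just below
    unit length, so a capsule's norm encodes its confidence."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def routing_by_agreement(u_hat, n_iter=3):
    """Recurrent routing between two capsule layers.

    u_hat : (n_in, n_out, dim) prediction vectors, one from each
            input capsule for each output capsule.
    Returns output capsule vectors (n_out, dim) and the coupling
    coefficients (n_in, n_out). Input capsules that agree on an
    output strengthen their coupling to it over the iterations,
    which acts as a soft grouping/segmentation of the inputs.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # coupling-weighted vote
        v = squash(s)                                         # (n_out, dim)
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # agreement update
    return v, c

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 4, 16))   # 8 input capsules, 4 outputs, 16-D poses
v, c = routing_by_agreement(u_hat)
```

The loop is the recurrent component: unlike a single feedforward pass, each iteration re-weights how strongly every input capsule feeds every output based on the current agreement.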


2017 · Vol 124 (4) · pp. 483-504 · Author(s): Gregory Francis, Mauro Manassi, Michael H. Herzog

2011 · Vol 105 (2) · pp. 582-600 · Author(s): Pingbo Yin, Jeffrey S. Johnson, Kevin N. O'Connor, Mitchell L. Sutter

Conflicting results have led to different views about how temporal modulation is encoded in primary auditory cortex (A1). Some studies find a substantial population of neurons that change firing rate without synchronizing to temporal modulation, whereas other studies fail to see these nonsynchronized neurons. As a result, the role and scope of synchronized temporal and nonsynchronized rate codes in amplitude modulation (AM) processing in A1 remain unresolved. We recorded A1 neurons' responses in awake macaques to sinusoidal AM noise. We find that most (37–78%) neurons synchronize to at least one modulation frequency (MF) without exhibiting nonsynchronized responses. However, we find both exclusively nonsynchronized neurons (7–29%) and "mixed-mode" neurons (13–40%) that synchronize to at least one MF and fire nonsynchronously to at least one other. We introduce new measures for modulation encoding and temporal synchrony that can improve the analysis of how neurons encode temporal modulation. These include comparing AM responses to responses to unmodulated sounds, and a vector strength measure that is suitable for single-trial analysis. Our data support a transformation from a temporally based population code of AM to a rate-based code as information ascends the auditory pathway. The number of mixed-mode neurons found in A1 indicates that this transformation is not yet complete and that A1 neurons may carry multiplexed temporal and rate codes.
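Vector strength is the standard statistic for measuring spike synchrony to a modulation frequency; the single-trial variant the authors introduce is not reproduced here. A minimal sketch of the classic measure, with hypothetical example spike trains:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase locking of spikes to a modulation frequency (Hz).

    Each spike becomes a unit vector at its phase within the AM
    cycle; the length of the mean resultant is 1 for perfect
    locking and near 0 for spikes spread uniformly over the cycle.
    """
    t = np.asarray(spike_times, dtype=float)
    phases = 2.0 * np.pi * mod_freq * t
    return np.abs(np.mean(np.exp(1j * phases)))

# Perfectly locked: one spike per cycle, always at the same phase
locked = np.arange(10) / 5.0        # 10 spikes, 5 Hz modulation
# Nonsynchronized-like: 8 spikes evenly tiling one 5 Hz cycle
uniform = np.arange(8) / (8 * 5.0)
print(vector_strength(locked, 5.0), vector_strength(uniform, 5.0))
```

A synchronized neuron in this scheme yields a vector strength near 1 at its preferred MF, while a purely rate-coding (nonsynchronized) neuron yields a value near 0 even when its firing rate changes with modulation.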


2016 · Vol 20 (3) · pp. e12381 · Author(s): Anne Hillairet de Boisferon, Amy H. Tift, Nicholas J. Minar, David J. Lewkowicz

2012 · Vol 53 (13) · pp. 8325 · Author(s): Pi-Chun Huang, Jinrong Li, Daming Deng, Minbin Yu, Robert F. Hess
