The role of early visual input in the development of contour interpolation: the case of subjective contours

2016
Vol 20 (3)
pp. e12379
Author(s):
Bat-Sheva Hadad
Daphne Maurer
Terri L. Lewis
2017
Vol 7 (2)
pp. 177-202
Author(s):
James A. Clinton
Stephen W. Briner
Andrew M. Sherrill
Thomas Ackerman
Joseph P. Magliano

Abstract Filmmakers must rely on cinematic devices of perspective (close-ups and point-of-view shot sequencing) to emphasize facial expressions associated with affective states. This study explored the extent to which differences in the use of these devices across two films with the same content lead to differences in the understanding of characters' affective states. Participants viewed one of two versions of the films and made affective judgments about how characters felt about one another with respect to sadness and anger. The extent to which the auditory and visual contexts were present when making the judgments was varied across four experiments. The results showed that judgments about sadness differed across the two films, but only when the entire context (sound and visual input) was present. The results are discussed in the context of the role of facial expressions and context in inferring basic emotions.


1973
Vol 12 (5)
pp. 407-416
Author(s):
W.D. Winters
M. Alcaraz
M.Y. Cervantes
C. Guzman-Flores

Author(s):  
Kimron Shapiro ◽  
Simon Hanslmayr

Attention is the ubiquitous construct referring to the brain's ability to focus resources on a subset of perceptual input that it is trying to process for a response. Attention has long been studied with reference to its distribution across space, where, for example, visual input from an attentionally monitored location is given preference over non-monitored (i.e. unattended) locations. More recently, attention has been studied for its ability to select targets from among rapidly, sequentially presented non-targets at a fixed location, e.g. in visual space. The present chapter explores this latter function of attention for its relevance to behaviour. In so doing, it highlights what is becoming one of the most popular approaches to studying communication across the brain—oscillations—at various frequency ranges. In particular, the authors discuss the alpha frequency band (8–12 Hz), where recent evidence points to an important role in switching between the processing of external vs. internal events.


2010
Vol 9 (8)
pp. 923-923
Author(s):
H.-P. Frey
M. Naber
W. Einhauser
J. Foxe

Author(s):  
SUNIL RAO ◽  
IGOR ALEKSANDER

We discuss the role of function application in neural models of visual awareness with reference to the handling of language understanding. The problem of modeling functional relations that map a set of given input variables to a defined output arises in the context of enabling a visual awareness model to handle verbal and visual input for a variety of tasks. The inadequacy of previously presented approaches is placed in context, and a novel technical solution is presented, derived from the demonstration of an adjective recognition scheme implemented for the handling of shape-color depiction. The extension of this approach to motion verb recognition, by means of path labeling and position identification in a depictive model of visual awareness, is discussed. The practical effectiveness and utility of the presented scheme for a wider range of function-applicative contexts is accordingly examined.


2021
pp. 026765832110157
Author(s):
Carmen Muñoz
Geòrgia Pujadas
Anastasiia Pattemore

This article addresses the benefits of audio-visual input for learning second language (L2) vocabulary and grammatical constructions. Specifically, it explores the role of frequency, the effects of subtitles and captions, and the mediating role of learner proficiency on language gains in two longitudinal studies. Study 1 targets vocabulary acquisition in two groups of adolescents with an elementary L2 proficiency level who viewed 24 episodes of a TV series spread weekly over a whole academic year, one group with subtitles (first language) and one with captions (second language). Study 2 targets grammar acquisition in two groups of university students with an intermediate proficiency level who viewed 10 episodes over five weeks, one group with captions and one without. Results of both studies show significant correlations between language gains and frequency in the input, but the size of the frequency effect appears to depend on the type of support provided by the on-screen text. The analyses also show no significant advantage of captions or subtitles for vocabulary learning at this proficiency level, a significant advantage of captions over no captions for learning grammatical constructions, and a significant role of proficiency. It is concluded that viewing audio-visual material leads to L2 learning and can support learners in preparing for study abroad and maximizing their learning experience during their sojourn.


2006
Vol 26 (41)
pp. 10368-10371
Author(s):
E. Gruberg
E. Dudkin
Y. Wang
G. Marin
C. Salas
...
