The effects of magnitude on visually guided action and perception.

2016 ◽  
Vol 16 (12) ◽  
pp. 453 ◽  
Author(s):  
Gal Namdar ◽  
Tzvi Ganel

2019 ◽  
Vol 27 (6) ◽  
pp. 423-437 ◽  
Author(s):  
Luis H Favela

Cognitive systems are highly adaptable and flexible, such that action and perception capabilities can be achieved with the body in various ways and can incorporate features of the environment and nonbiological tools. Perceptual learning refers to enduring changes to a system’s ability to perceive and respond to environmental stimuli. Here I present an integrative framework for understanding how such capabilities occur in human–machine systems comprising brain–body–tool–environment interactions. Central to this work is the claim that the capacity for high degrees of adaptation, flexibility, and learning is possible because human–machine systems are soft-assembled systems, that is, systems whose material constitution is not rigidly constrained, so that goals can be achieved via a variety of configurations. I begin by presenting the foundations of the framework on offer: the concepts, methods, and theories of ecological psychology; embodied cognition; dynamical systems theory; and machine intelligence. Next, I apply the framework to the case of visually guided action. I conclude by explaining how this framework provides the explanatory and investigative tools to understand human–machine perceptual systems as soft-assembled systems that span brains–bodies–tools–environments.


2005 ◽  
Vol 43 (2) ◽  
pp. 216-226 ◽  
Author(s):  
Jonathan S. Cant ◽  
David A. Westwood ◽  
Kenneth F. Valyear ◽  
Melvyn A. Goodale

1997 ◽  
Vol 8 (3) ◽  
pp. 224-230 ◽  
Author(s):  
Rick O. Gilmore ◽  
Mark H. Johnson

The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.


1994 ◽  
Vol 17 (2) ◽  
pp. 213-214 ◽  
Author(s):  
A. David Milner ◽  
David P. Carey ◽  
Monika Harvey

Neurocase ◽  
2006 ◽  
Vol 12 (5) ◽  
pp. 263-279 ◽  
Author(s):  
Ileana Amicuzi ◽  
Massimo Stortini ◽  
Maurizio Petrarca ◽  
Paola Di Giulio ◽  
Giuseppe Di Rosa ◽  
...  
