Brake lamp detection in complex and dynamic environments: Recognizing limitations of visual attention and perception

2012, Vol 45, pp. 588-599. Author(s): Scott McIntyre, Leo Gugerty, Andrew Duchowski
1995, Vol 17 (5), pp. 654-667. Author(s): J. Vincent Filoteo, Dean C. Delis, Mary J. Roman, Theresa Demadura, Emily Ford, ...

Author(s): Catherine Thompson, Sharon Coen

In this chapter, psychological theories of visual perception and attention are considered in relation to journalism. First, the chapter discusses so-called limited-capacity processing: humans are limited in the amount of information they can process at any one time. Next, journalists' use of visual images is discussed. Although a picture 'may be worth a thousand words' (or more), journalists also need to take account of 'wishful seeing', the tendency for people to see only what they want to see. The chapter then considers the phenomenon of 'priming' in relation to the way a story is framed, which may trigger particular concepts or stereotypes (positive or negative). Finally, the chapter considers emotional processing within journalism: how an individual's emotional state may affect their perception of a story, and how journalists may use emotion to influence audience engagement and comprehension.


2001, Vol 20 (1), pp. 39-65. Author(s): Hector Yee, Sumanta Pattanaik, Donald P. Greenberg

2021. Author(s): Arindam Bhakta

Humans and many animals can selectively sample important parts of their visual surroundings to carry out daily activities such as foraging or finding prey or mates. Selective attention allows them to use the brain's limited resources efficiently, deploying the sensory apparatus to collect data believed to be pertinent to the organism's current task at hand. Robots and other computational agents operating in dynamic environments are similarly exposed to a wide variety of stimuli, which they must process with limited sensory and computational resources. Developing computational models of visual attention has long been of interest because such models enable artificial systems to select the necessary information from complex and cluttered visual environments, reducing the data-processing burden.

Biologically inspired computational saliency models have previously been used to selectively sample a visual scene, but they have limited capacity to deal with dynamic environments and no capacity to reason about uncertainty when planning their sampling strategy. These models typically treat contrast in colour, shape, or orientation as salient and sample locations of a visual scene in descending order of salience. After each observation, the area around the sampled location is suppressed by an inhibition-of-return mechanism to keep it from being revisited.

This thesis generalises the traditional saliency model by using an adaptive Kalman filter estimator to model an agent's understanding of the world, and a utility-function-based approach to describe what the agent cares about in the visual scene. This allows agents to adopt a richer set of perceptual strategies than is possible with the classical winner-take-all mechanism of the traditional saliency model. In contrast with the traditional approach, inhibition of return is achieved without implementing an extra mechanism on top of the underlying structure.

The thesis demonstrates five utility functions that encapsulate the perceptual state valued by the agent. Each utility function produces a distinct perceptual behaviour matched to particular scenarios. The resulting visual attention distributions of the five proposed utility functions are demonstrated on five real-life videos. In most of the experiments, pixel intensity has been used as the source of the saliency map; as the proposed approach is independent of the saliency map used, it can be combined with other, more complex saliency-map-building models. Moreover, the underlying structure of the model is sufficiently general and flexible that it can serve as the base of a new range of more sophisticated gaze-control systems.
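The traditional winner-take-all saliency sampling the abstract describes can be sketched in a few lines: repeatedly pick the most salient location, then suppress a window around it (inhibition of return) so the next pick goes elsewhere. This is a minimal illustration, not the thesis's implementation; the window size and the use of a plain 2-D array are assumptions for the sketch.

```python
import numpy as np

def sample_fixations(saliency, n_fixations=5, inhibition_radius=2):
    """Sample scene locations in descending order of salience,
    suppressing a square region around each visited location
    (inhibition of return) so it is not revisited."""
    s = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        # Winner-take-all: attend to the currently most salient location.
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((y, x))
        # Inhibition of return: block out a window around the winner.
        y0, y1 = max(0, y - inhibition_radius), y + inhibition_radius + 1
        x0, x1 = max(0, x - inhibition_radius), x + inhibition_radius + 1
        s[y0:y1, x0:x1] = -np.inf
    return fixations
```

Note that the inhibition step is a separate mechanism bolted onto the selection loop; this is exactly the extra machinery the thesis's estimator-based formulation claims to avoid.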
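The estimator-plus-utility idea can also be illustrated with a toy example. The sketch below (an assumption-laden simplification, not the thesis's model) keeps one scalar Kalman filter per scene location and uses posterior variance as the utility, so the agent always looks where it is most uncertain. Because observing a location shrinks its variance, attention naturally moves on without a separate inhibition-of-return mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility_driven_gaze(true_scene, n_steps=20, process_var=0.01, obs_var=0.05):
    """Toy gaze controller: one scalar Kalman filter per location tracks
    the agent's belief (mean, variance). Utility = posterior variance,
    i.e. the agent values reducing its own uncertainty."""
    n = true_scene.size
    mean = np.zeros(n)           # belief means
    var = np.ones(n)             # belief variances (high = uncertain)
    trace = []
    for _ in range(n_steps):
        var += process_var       # predict: the world may have changed
        i = int(np.argmax(var))  # act: attend to the most uncertain location
        trace.append(i)
        z = true_scene[i] + rng.normal(0.0, np.sqrt(obs_var))  # noisy look
        k = var[i] / (var[i] + obs_var)   # Kalman gain
        mean[i] += k * (z - mean[i])      # update belief mean
        var[i] *= (1.0 - k)               # update belief variance
    return mean, var, trace
```

Because the variance update does not depend on the observed values, the gaze sequence here is driven purely by the agent's uncertainty, and revisiting a just-observed location is unrewarding by construction; swapping in a different utility function changes the perceptual behaviour without touching the estimator.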


Leonardo, 2020, pp. 1-11. Author(s): Xinran Hu, Dinko Bačić

In this study, we use a novel eye-tracking technology to determine how viewing behavior complies with Wertheimer's descriptions of the Gestalt principles of similarity, proximity, continuation, and closure. Our results show that viewers respond predictably to most Gestalt principles, while revealing important nuances in the role of visual attention in the closure principle and in situations where principles compete. In addition, our results reveal a fundamental distinction between visual attention and visual perception. By grasping this critical difference between attention and perception, designers may become more successful in applying Gestalt principles to their designs.


2019, Vol 21 (2), pp. 96-102. Author(s): Halley Darrach, Lisa E. Ishii, David Liao, Jason C. Nellis, Kristin Bater, ...
