A motion pooling model of visually guided navigation explains human behavior in the presence of independently moving objects

2012 ◽  
Vol 12 (1) ◽  
pp. 20-20 ◽  
Author(s):  
O. W. Layton ◽  
E. Mingolla ◽  
N. A. Browning


Author(s):  
Akira Miyahara ◽  
Itaru Nagayama

In this paper, we propose an automated video surveillance system for kidnapping detection using feature-based characteristics. The localization of moving objects in a video stream and the estimation of human behavior are the key techniques adopted by the proposed system. Motion characteristics are extracted from video streams and combined into a feature vector, from which the system automatically classifies the video streams into criminal and non-criminal scenes. The proposed system is called an intelligent security camera. We consider many types of scenarios for the training data set. After constructing the classifier, we use test sequences that are continuous video streams of human behavior consisting of several actions in succession. The experimental results show that the system can effectively detect criminal scenes, such as a kidnapping, by distinguishing human behavior.
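
The abstract does not specify the classifier or feature set; the sketch below is only an illustration of the kind of feature-vector classification it describes, with made-up motion features and scikit-learn's SVC as a stand-in classifier.

    # Minimal sketch (illustrative, not the authors' implementation) of classifying
    # per-clip motion feature vectors into "criminal" vs. "non-criminal" scenes.
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical per-clip motion features:
    # [mean speed, speed variance, direction-change rate, inter-person distance change]
    X_train = np.array([
        [0.8, 0.05, 0.1, -0.02],   # walking together (non-criminal)
        [2.5, 1.20, 0.9, -0.80],   # sudden grab and drag (criminal)
        [1.0, 0.10, 0.2,  0.01],   # passing by (non-criminal)
        [3.0, 1.50, 1.1, -0.90],   # forced pull toward a vehicle (criminal)
    ])
    y_train = np.array([0, 1, 0, 1])  # 0 = non-criminal, 1 = criminal

    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X_train, y_train)

    # Classify a feature vector extracted from a new test clip
    x_test = np.array([[2.7, 1.3, 1.0, -0.85]])
    print("criminal scene" if clf.predict(x_test)[0] == 1 else "non-criminal scene")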


1992 ◽  
Vol 337 (1281) ◽  
pp. 305-313 ◽  

I present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon the work of Rieger & Lawton (1985) and Longuet-Higgins & Prazdny (1981), whose approach uses velocity differences computed in regions of high depth variation to locate the focus of expansion that indicates the observer's heading direction. I present the results of computer simulations with natural and artificial images and relate the behaviour of the model to psychophysical observations regarding heading judgements.
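
As an illustration of the differential-motion idea the model builds on (a sketch under my own assumptions, not the paper's implementation): flow differences taken across depth discontinuities cancel the rotational component of the optic flow, and the residual difference vectors point radially away from the focus of expansion (FOE), which can then be recovered as the least-squares intersection of the lines they define.

    # Least-squares estimate of the FOE from local flow-difference vectors.
    import numpy as np

    def estimate_foe(points, diff_vectors):
        """points: (N, 2) image locations; diff_vectors: (N, 2) local flow differences."""
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, d in zip(points, diff_vectors):
            n = np.array([-d[1], d[0]])           # normal to the difference vector
            n = n / (np.linalg.norm(n) + 1e-12)
            A += np.outer(n, n)                   # accumulate normal projections
            b += np.outer(n, n) @ p               # FOE must lie on each line p + t*d
        return np.linalg.solve(A, b)

    # Synthetic check: purely radial difference vectors around a true FOE at (20, -5)
    rng = np.random.default_rng(0)
    true_foe = np.array([20.0, -5.0])
    pts = rng.uniform(-100, 100, size=(50, 2))
    diffs = pts - true_foe
    print(estimate_foe(pts, diffs))               # ~ [20, -5]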


2021 ◽  
Author(s):  
Philipp Kreyenmeier ◽  
Luca Kaemmer ◽  
Jolande Fooken ◽  
Miriam Spering

Objects in our visual environment often move unpredictably and can suddenly speed up or slow down. The ability to account for acceleration when interacting with moving objects can be critical for survival. Here, we investigate how human observers track an accelerating target with their eyes and predict its time of reappearance after a temporal occlusion by making an interceptive hand movement. The target was initially visible and accelerated for a brief period before being occluded. We tested how observers integrated target motion information by comparing three alternative models that predicted time-to-contact (TTC) based on (1) the final target velocity sample before occlusion, (2) the average target velocity before occlusion, or (3) target acceleration. We show that visually guided smooth pursuit eye movements reliably reflect target acceleration prior to occlusion. However, systematic saccade and manual interception timing errors reveal an inability to take acceleration into account when predicting TTC. Interception timing is best described by the final velocity model, which extrapolates the last available velocity sample before occlusion. These findings provide compelling evidence for differential acceleration integration mechanisms in vision-guided eye movements and prediction-guided interception, and a mechanistic explanation for the function and failure of interactions with accelerating objects.
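
The three candidate TTC predictions can be written down directly; the sketch below uses illustrative numbers rather than the study's stimulus parameters.

    # Three ways to predict time-to-contact (TTC) after occlusion, given the
    # remaining distance D, the initial velocity v0, acceleration a, and the
    # visible duration t_visible. Values in the example are made up.
    import math

    def ttc_final_velocity(D, v_final):
        """Extrapolate the last visible velocity sample."""
        return D / v_final

    def ttc_average_velocity(D, v0, a, t_visible):
        """Extrapolate the mean velocity over the visible interval."""
        return D / (v0 + 0.5 * a * t_visible)

    def ttc_acceleration(D, v_final, a):
        """Assume the acceleration continues during occlusion:
        solve D = v_final*t + 0.5*a*t^2 for t."""
        if abs(a) < 1e-9:
            return D / v_final
        return (-v_final + math.sqrt(v_final**2 + 2 * a * D)) / a

    # Example: target starts at 10 deg/s, accelerates at 8 deg/s^2 for 0.5 s,
    # then is occluded with 12 deg left to travel to the interception point.
    v0, a, t_vis, D = 10.0, 8.0, 0.5, 12.0
    v_f = v0 + a * t_vis
    print(ttc_final_velocity(D, v_f))             # ~0.86 s
    print(ttc_average_velocity(D, v0, a, t_vis))  # ~1.00 s
    print(ttc_acceleration(D, v_f, a))            # ~0.71 s (shortest, accounts for a)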


Perception ◽  
10.1068/p5405 ◽  
2005 ◽  
Vol 34 (6) ◽  
pp. 717-740 ◽  
Author(s):  
Brett R Fajen

Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, which are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe, for each task, how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it cannot capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.
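
As a concrete illustration of such an action boundary (my own example for the braking case, not a model taken from the paper): stopping remains possible only while the deceleration currently required stays below the maximum deceleration the actor or vehicle can produce, and calibration amounts to learning how optical variables map onto that boundary.

    # Illustrative action boundary for braking (assumed example, not the paper's model):
    # stopping is possible only while v^2 / (2*z) <= maximum available deceleration.
    def required_deceleration(speed, distance_to_obstacle):
        """Constant deceleration needed to stop exactly at the obstacle."""
        return speed**2 / (2.0 * distance_to_obstacle)

    def stopping_is_possible(speed, distance_to_obstacle, max_deceleration):
        return required_deceleration(speed, distance_to_obstacle) <= max_deceleration

    # Example: approaching at 15 m/s with a 7 m/s^2 braking limit
    print(required_deceleration(15.0, 20.0))      # 5.625 m/s^2 (still possible)
    print(stopping_is_possible(15.0, 20.0, 7.0))  # True
    print(stopping_is_possible(15.0, 12.0, 7.0))  # False: the action boundary is crossed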

