The Flash-Lag Effect during Illusory Chopstick Rotation

Perception ◽  
10.1068/p5653 ◽  
2007 ◽  
Vol 36 (7) ◽  
pp. 1043-1048 ◽  
Author(s):  
Stuart Anstis

In the ‘flash-lag’ effect, a static object that is briefly flashed next to a moving object appears to lag behind the moving object. A flash was put up next to an intersection that appeared to be moving clockwise along a circular path but was actually moving counterclockwise [the chopstick illusion; Anstis, 1990, in AI and the Eye Eds A Blake, T Troscianko (London: John Wiley) pp 105–117; 2003, in Levels of Perception Eds L Harris, M Jenkin (New York: Springer) pp 90–93]. As a result, the flash appeared displaced clockwise. This was appropriate to the physical, not the subjective, direction of rotation, and it suggests that the flash-lag illusion occurs early in the visual system, before motion signals are parsed into moving objects.

Perception ◽  
10.1068/p3066 ◽  
2000 ◽  
Vol 29 (6) ◽  
pp. 675-692 ◽  
Author(s):  
Beena Khurana ◽  
Katsumi Watanabe ◽  
Romi Nijhawan

Objects flashed in alignment with moving objects appear to lag behind [Nijhawan, 1994 Nature (London) 370 256–257]. Could this ‘flash-lag’ effect be due to attentional delays in bringing flashed items to perceptual awareness [Titchener, 1908/1973 Lectures on the Elementary Psychology of Feeling and Attention first published 1908 (New York: Macmillan); reprinted 1973 (New York: Arno Press)]? We overtly manipulated attentional allocation in three experiments to address the following questions: Is the flash-lag effect affected when attention is (a) focused on a single event in the presence of multiple events, (b) distributed over multiple events, and (c) diverted from the flashed object? To address the first two questions, five rings, moving along a circular path, were presented while observers attentively tracked one or multiple rings under four conditions: the ring in which the disk was flashed was (i) known or (ii) unknown (randomly selected from the set of five); the location of the flashed disk was (i) known or (ii) unknown (randomly selected from ten locations). The third question was investigated by using two moving objects in a cost-benefit cueing paradigm. An arrow cued, with 70% or 80% validity, the position of the flashed object. Observers performed two tasks: (a) they reacted as quickly as possible to flash onset; (b) they reported the flash-lag effect. We obtained a significant and unaltered flash-lag effect under all the attentional conditions we employed. Furthermore, though reaction times were significantly shorter for validly cued flashes, the flash-lag effect remained uninfluenced by cue validity, indicating that quicker responses to validly cued locations may be due to the shortening of post-perceptual delays in motor responses rather than perceptual facilitation. We conclude that the computations that give rise to the flash-lag effect are independent of attentional deployment.


2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, it is hypothesized that moving objects are processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible (Eagleman & Sejnowski, 2000a, 2000b, 2000c); those authors hypothesize instead that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
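The contrast between causal filtering and smoothing can be illustrated with a toy estimator. This is a deliberately minimal sketch, not the authors' statistical model: a causal filter averages only past samples, while a smoother's symmetric window also draws on samples from the immediate future.

```python
def filtered_estimate(positions, t, window=3):
    """Causal estimate: average of the current and preceding samples."""
    lo = max(0, t - window + 1)
    samples = positions[lo:t + 1]
    return sum(samples) / len(samples)

def smoothed_estimate(positions, t, window=2):
    """Non-causal estimate: average over a symmetric window, so samples
    from the immediate future of time t also contribute."""
    lo = max(0, t - window)
    hi = min(len(positions), t + window + 1)
    samples = positions[lo:hi]
    return sum(samples) / len(samples)

traj = [float(i) for i in range(10)]     # constant motion, +1 unit/frame
t_flash = 5                              # frame at which the flash occurs

f = filtered_estimate(traj, t_flash)     # 4.0: lags the true position 5.0
s = smoothed_estimate(traj, t_flash)     # 5.0: future samples cancel the lag

rev = traj[:6] + [4.0, 3.0, 2.0, 1.0]    # motion reverses right after the flash
s_rev = smoothed_estimate(rev, t_flash)  # 3.8: pulled toward the future path
```

With constant motion the symmetric smoother recovers the true position exactly while the causal filter lags; after a reversal the smoothed estimate is pulled toward the post-flash trajectory, which is the qualitative signature the smoothing account uses to explain the reversal data.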


2019 ◽  
Author(s):  
Samson Chota ◽  
Rufin VanRullen

It has long been debated whether visual processing is, at least partially, a discrete process. Although vision appears to be a continuous stream of sensory information, sophisticated experiments reveal periodic modulations of perception and behavior. Previous work has demonstrated that the phase of endogenous neural oscillations in the 10 Hz range predicts the “lag” of the flash lag effect, a temporal visual illusion in which a static object is perceived to be lagging in time behind a moving object. Consequently, it has been proposed that the flash lag illusion could be a manifestation of a periodic, discrete sampling mechanism in the visual system. In this experiment we set out to causally test this hypothesis by entraining the visual system to a periodic 10 Hz stimulus and probing the flash lag effect (FLE) at different time points during entrainment. We hypothesized that the perceived FLE would be modulated over time, at the same frequency as the entrainer (10 Hz). A frequency analysis of the average FLE time-course indeed reveals a significant peak at 10 Hz as well as a strong phase consistency between subjects (N=26). Our findings provide evidence for a causal relationship between alpha oscillations and fluctuations in temporal perception.
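A frequency analysis of this kind can be reproduced in miniature with a plain DFT. This is a sketch on synthetic data, not the authors' pipeline; the 60 Hz sampling rate, modulation depth, and mean lag are invented for illustration.

```python
import cmath
import math

def dft_magnitudes(x):
    """Magnitude spectrum of a real signal via a direct DFT (first half only)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# Synthetic FLE time-course: a mean lag of 50 ms plus a 10 Hz modulation,
# sampled at 60 Hz for 1 s, so the DFT bin spacing is exactly 1 Hz.
fs, n = 60.0, 60
fle = [50.0 + 5.0 * math.sin(2 * math.pi * 10.0 * t / fs) for t in range(n)]

mags = dft_magnitudes(fle)
peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])  # skip the DC bin
peak_hz = peak_bin * fs / n                                  # -> 10.0
```

With the modulation frequency falling exactly on a DFT bin, the spectrum shows a single dominant non-DC peak at 10 Hz, mirroring the peak reported for the averaged FLE time-course.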


2017 ◽  
Author(s):  
Keith Schneider

The Fröhlich effect and flash-lag effect, in which moving objects appear advanced along their trajectories compared to their actual positions, have defied a simple and consistent explanation. Here, we show that these illusions can be understood as a natural consequence of temporal compression in the human visual system. Discrete sampling at some stage of sensory perception has long been considered, and if it were true, it would necessarily lead to these illusions of motion. We show that the discrete perception hypothesis, with a single free parameter, the perceptual moment or sampling rate, can quantitatively explain all of the scenarios of the Fröhlich and flash-lag effects. We interpret discrete perception as the implementation of data compression in the brain, and our conscious perception as the reconstruction of the compressed input.
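The single-free-parameter idea can be sketched as follows. This is one illustrative reading of discrete perception, in which every event inside a perceptual moment is perceived at the moment's end; the function name and all numbers are invented, and the paper's full quantitative treatment is not reproduced here.

```python
import math

def flash_lag(speed, moment, t_flash):
    """Discrete-perception sketch: events inside one perceptual moment of
    length `moment` are compressed to the moment's end, so a moving object
    appears ahead of a flash by speed * (frame_end - t_flash)."""
    frame_end = math.ceil(t_flash / moment) * moment
    if frame_end == t_flash:        # flash exactly on a frame boundary
        frame_end += moment
    return speed * (frame_end - t_flash)

# One free parameter: the perceptual moment (50 ms, i.e. a 20 Hz sample rate).
lag_a = flash_lag(speed=10.0, moment=0.05, t_flash=0.012)  # deg/s, s, s
lag_b = flash_lag(speed=20.0, moment=0.05, t_flash=0.012)  # doubles with speed
```

The predicted offset scales linearly with speed and is bounded by speed times the perceptual moment, so a single sampling-rate parameter fixes the magnitude of the illusion across stimulus conditions.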


2020 ◽  
Vol 13 (1) ◽  
pp. 60
Author(s):  
Chenjie Wang ◽  
Chengyuan Li ◽  
Jun Liu ◽  
Bin Luo ◽  
Xin Su ◽  
...  

Most scenes in practical applications are dynamic scenes containing moving objects, so accurately segmenting moving objects is crucial for many computer vision applications. To efficiently segment all the moving objects in a scene, regardless of whether an object has a predefined semantic label, we propose a two-level nested octave U-structure network with a multi-scale attention mechanism, called U2-ONet. U2-ONet takes two RGB frames, the optical flow between them, and the instance segmentation of the frames as inputs. Each stage of U2-ONet is filled with the newly designed octave residual U-block (ORSU block), which enhances the ability to obtain contextual information at different scales while reducing the spatial redundancy of the feature maps. To efficiently train this multi-scale deep network, we introduce a hierarchical training supervision strategy that calculates the loss at each level while adding a knowledge-matching loss to keep the optimization consistent. Experimental results show that the proposed U2-ONet achieves state-of-the-art performance on several general moving object segmentation datasets.
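The hierarchical supervision strategy (a loss computed at every level and summed) can be sketched generically. The function names and the toy per-level MSE are illustrative, not the paper's API, and the knowledge-matching term is omitted here.

```python
def hierarchical_loss(level_preds, target, per_level_loss, weights=None):
    """Sum a supervision loss over every side output of a multi-level
    network, so each scale receives a gradient signal directly."""
    weights = weights or [1.0] * len(level_preds)
    return sum(w * per_level_loss(pred, target)
               for w, pred in zip(weights, level_preds))

# Toy per-level loss: mean squared error on a 2-element mask.
def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

side_outputs = [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]]       # one prediction per level
total = hierarchical_loss(side_outputs, [1.0, 0.0], mse)  # 0.01 + 0.04 + 0.0
```

Supervising every level directly is what keeps the optimization consistent across scales: no level can drift arbitrarily as long as each contributes its own term to the total loss.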


1979 ◽  
Vol 49 (2) ◽  
pp. 343-346 ◽  
Author(s):  
Marcella V. Ridenour

30 boys and 30 girls, 6 yr. old, participated in a study assessing the influence of the visual patterns of moving objects and their respective backgrounds on the prediction of objects' directionality. An apparatus was designed to permit modified spherical objects with interchangeable covers and backgrounds to move in three-dimensional space in three directions at selected speeds. The subject's task was to predict one of three possible directions of an object: the object either moved toward the subject's midline or toward a point 18 in. to the left or right of the midline. The movements of all objects started at the same place which was 19.5 ft. in front of the subject. Prediction time was recorded on 15 trials. Analysis of variance indicated that visual patterns of the moving object did not influence the prediction of the object's directionality. Visual patterns of the background behind the moving object did not influence the prediction of the object's directionality except during the conditions of a light nonpatterned moving object. It was concluded that the visual patterns of the background and of the moving object have a very limited influence on the prediction of direction.


With advances in technology, security and authentication have become central concerns in computer vision. Moving object detection aims to preserve the perceptible and principal sources in a scene, and surveillance, which monitors many kinds of activity, is one of its most important applications. The detection and tracking of moving objects are fundamental to surveillance systems, and moving object recognition remains a challenging problem in digital image processing. Its applications include human-machine interaction (HMI), safety and video surveillance, augmented reality, road traffic monitoring, and medical imaging. The main goal of this research is the detection and tracking of moving objects. The proposed approach begins with a pre-processing stage that extracts the video frames and reduces their dimensionality. Morphological operations then clean the foreground mask of the moving objects, and texture-based features are extracted using component analysis. Finally, a novel, optimized multilayer perceptron neural network is designed: its layers are optimized from the Pbest and Gbest particle positions of a swarm, with fitness computed from the binary position updates (x_update, y_update) of the swarm or object positions, and the final frames of moving objects in the video are assembled with a blob analyser. The application was implemented in MATLAB R2016a; the activation function re-filters the given input, and the final output is calculated with a predefined sigmoid. The proposed method is evaluated on the MOT, FOOTBALL, INDOOR, and OUTDOOR datasets, with the goals of improving the detection accuracy and recall rates, reducing the false positive and false negative rates, and comparing against classifiers such as KNN, MLPNN, and the J48 decision tree.
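The Pbest/Gbest update the abstract alludes to is the canonical particle swarm step. The sketch below shows that step on a toy objective; it is not the paper's MATLAB implementation, and its fitness function for scoring detections is not reproduced.

```python
import random

def pso_step(positions, velocities, pbest, gbest, rng,
             w=0.7, c1=1.4, c2=1.4):
    """One canonical PSO update: each particle is pulled toward its own
    best position (Pbest) and the swarm's best position (Gbest)."""
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        v_new = (w * v
                 + c1 * rng.random() * (pb - x)
                 + c2 * rng.random() * (gbest - x))
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel

# Minimise a toy fitness f(x) = x^2 with a three-particle swarm.
f = lambda x: x * x
rng = random.Random(0)
pos, vel = [3.0, -2.0, 0.5], [0.0, 0.0, 0.0]
pbest = list(pos)
gbest = min(pbest, key=f)
for _ in range(50):
    pos, vel = pso_step(pos, vel, pbest, gbest, rng)
    pbest = [p if f(p) < f(pb) else pb for p, pb in zip(pos, pbest)]
    gbest = min(pbest + [gbest], key=f)
```

In the paper's setting the particles would encode network parameters and the fitness would score detection quality; by construction, Gbest's fitness is non-increasing across iterations.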


2015 ◽  
Author(s):  
Manivannan Subramaniyan ◽  
Alexander S. Ecker ◽  
Saumil S. Patel ◽  
R. James Cotton ◽  
Matthias Bethge ◽  
...  

When the brain has determined the position of a moving object, due to anatomical and processing delays, the object will have already moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and the real position of a moving object. A well-known visual illusion — the flash lag effect — points towards such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species of animal known to perceive the illusion. Towards this, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction and luminance was shorter than that to flashes, in a manner that is consistent with psychophysical results. At the level of V1, our results support the differential latency model positing that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of flash — as required by the postdiction/motion-biasing model — may not be necessary for observing a neural correlate of the illusion. Our results also suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time compared with briefly appearing stationary stimuli.

New and Noteworthy: We report several observations in awake macaque V1 that provide support for the differential latency model of the flash lag illusion. We find that the equal latency of flash and moving stimuli as assumed by motion integration/postdiction models does not hold in V1. We show that in macaque V1, motion processing latency depends on stimulus luminance, speed and motion direction in a manner consistent with several psychophysical properties of the flash lag illusion.


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Yizhong Yang ◽  
Qiang Zhang ◽  
Pengfei Wang ◽  
Xionglou Hu ◽  
Nengju Wu

Moving object detection in video streams is the first step of many computer vision applications. Background modeling and subtraction is the most common technique for detecting moving objects, yet detecting them correctly remains a challenge. Some methods initialize the background model at each pixel from the first N frames; however, such models perform poorly in dynamic background scenes because they contain only temporal features. Herein, a novel pixelwise, nonparametric moving object detection method is proposed that incorporates both spatial and temporal features, allowing it to accurately handle dynamic backgrounds. Several new mechanisms are also proposed to maintain and update the background model. Experimental results on image sequences from public datasets show that the proposed method is robust and effective in dynamic background scenes compared with existing methods.
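A pixelwise, nonparametric background test with spatial propagation can be sketched in the spirit of sample-based models such as ViBe. This is not the paper's algorithm; the thresholds and the tiny 1-D "image" are invented for illustration.

```python
import random

def is_background(pixel, samples, radius=20, min_matches=2):
    """Nonparametric test: a pixel is background if it lies within `radius`
    of at least `min_matches` of its stored background samples."""
    return sum(abs(pixel - s) <= radius for s in samples) >= min_matches

def update_model(model, frame, rng, subsample=4):
    """Conservative random update with spatial propagation: a background
    pixel occasionally replaces one of its own samples and one of a
    neighbor's, injecting spatial as well as temporal information."""
    for i, pixel in enumerate(frame):
        if not is_background(pixel, model[i]):
            continue                                   # never absorb foreground
        if rng.randrange(subsample) == 0:
            model[i][rng.randrange(len(model[i]))] = pixel
        if rng.randrange(subsample) == 0:              # propagate to a neighbor
            j = max(0, min(len(model) - 1, i + rng.choice([-1, 1])))
            model[j][rng.randrange(len(model[j]))] = pixel

rng = random.Random(0)
frame0 = [100, 102, 99, 150, 101]            # a 1-D "image" row
model = [[p] * 5 for p in frame0]            # per-pixel sample sets from frame 0
frame1 = [100, 103, 98, 210, 100]            # pixel 3 now holds a moving object
mask = [not is_background(p, model[i]) for i, p in enumerate(frame1)]
update_model(model, frame1, rng)             # background pixels refresh the model
```

The neighbor update is what gives the model its spatial component: background evidence diffuses sideways, which helps absorb dynamic backgrounds that a purely temporal per-pixel model cannot.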

