Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

Author(s):  
Claudia S. Huebner
2013 ◽  
pp. 43-58
Author(s):  
Marcelo Saval-Calvo ◽  
Jorge Azorín-López ◽  
Andrés Fuster-Guilló

In this chapter, a comparative analysis of basic segmentation methods for video sequences, and of their combinations, is carried out. The algorithms are compared on efficiency (true positive and false positive rates) and on the temporal cost of providing regions in the scene, two of the most important design requirements for supplying a tracker with segmentations in an efficient and timely manner under the constraints of the application. Specifically, methods that use temporal information, namely Background Subtraction, Temporal Differencing, and Optical Flow, together with the four possible combinations of them, have been analyzed. Experimentation was carried out on image sequences from the CAVIAR project database. The efficiency results show that Background Subtraction achieves the best individual result, whereas the combination of all three basic methods gives the best result overall. However, combinations involving Optical Flow should be weighed against the application's requirements, because its temporal cost is high relative to the efficiency it contributes to the combination.
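The two cheapest methods named above, and their OR-combination, can be sketched in a few lines of NumPy. This is a minimal illustration, not the chapter's implementation: the function names and the fixed threshold of 25 are assumptions, and a single static frame stands in for a proper background model.

```python
import numpy as np

def background_subtraction(frame, background, thresh=25):
    # Foreground mask: pixels that differ enough from a static background model.
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh

def temporal_differencing(frame, prev_frame, thresh=25):
    # Motion mask: pixels that changed between two consecutive frames.
    return np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh

def combined_mask(frame, prev_frame, background, thresh=25):
    # OR-combination: keep a pixel if either cue flags it as moving.
    return (background_subtraction(frame, background, thresh)
            | temporal_differencing(frame, prev_frame, thresh))
```

Combining masks with OR trades a higher false positive rate for fewer missed detections, which is one axis the chapter's efficiency comparison measures.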


Author(s):  
NAOYA OHNISHI ◽  
ATSUSHI IMIYA

In this paper, we present an algorithm for the hierarchical recognition of an environment using independent components of optical flow fields for the visual navigation of a mobile robot. For the computation of optical flow, the pyramid transform of an image sequence is used to analyze global and local motion. Our algorithm detects the planar region and obstacles in the image from the optical flow fields at each layer of the pyramid, and therefore achieves both global and local perception for robot vision. We show experimental results for both test image sequences and real image sequences captured by a mobile robot. Furthermore, we discuss some aspects of this work from the viewpoint of information theory.
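The pyramid transform mentioned above can be illustrated with a simple NumPy sketch: each layer halves the resolution, so flow computed on coarse layers captures global motion while fine layers capture local motion. The 2x2 block averaging here is an assumed simplification of the usual Gaussian downsampling, not the paper's exact transform.

```python
import numpy as np

def build_pyramid(image, levels=3):
    # Image pyramid via 2x2 block averaging (a crude stand-in for
    # Gaussian smoothing followed by subsampling).
    pyramid = [image.astype(np.float64)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
        down = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid
```

Optical flow estimated at the coarsest layer can then be upsampled and refined layer by layer, which is the standard coarse-to-fine strategy the hierarchical analysis relies on.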


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3722
Author(s):  
Byeongkeun Kang ◽  
Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences, caused by the relative movement between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, while driver attention prediction models based on scene appearance have been well studied, the role of motion as a crucial factor in attention estimation has not been thoroughly investigated in the literature. Therefore, in this work, we investigate the usefulness of motion information for estimating a driver’s visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing its performance against that of current state-of-the-art prediction models using RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that motion features leave a margin for further accuracy improvement.
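A common way to feed a dense optical flow field into a network, as the abstract describes, is to convert the raw (dx, dy) displacements into magnitude and angle maps. The sketch below is an assumed preprocessing step, not the paper's architecture.

```python
import numpy as np

def flow_to_features(flow):
    # flow: array of shape (H, W, 2) holding per-pixel (dx, dy) displacements.
    # Returns two (H, W) maps: motion magnitude and motion direction.
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.hypot(dx, dy)        # speed of apparent motion per pixel
    angle = np.arctan2(dy, dx)          # direction in radians, range (-pi, pi]
    return magnitude, angle
```

Magnitude/angle maps are convenient because they separate how fast content moves from where it moves, and both can be stacked as input channels alongside RGB frames for comparison experiments.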

