Analysis of the optical flow model applied to the motion estimation of sea ice from ERS-1 SAR image sequences

Author(s): Aisheng Li, Jan Askne

2018, Vol 2018, pp. 1-6
Author(s): Bin Zhu, Lianfang Tian, Qiliang Du, Qiuxia Wu, Lixin Shi

The Horn and Schunck (HS) optical flow model cannot preserve motion discontinuities and has low accuracy, especially for image sequences containing complex texture. To address this problem, an improved fractional-order optical flow model is proposed. In particular, a fractional-order Taylor series expansion is applied in the brightness constancy equation of the HS model, and a fractional-order derivative of the flow field is used in the smoothness constraint. The Euler-Lagrange equation is utilized to minimize the energy functional of the fractional-order optical flow model, and two-dimensional fractional differential masks are proposed and applied to simplify its computation. Owing to the spatiotemporal memory property of fractional-order derivatives, the algorithm preserves the edge discontinuities of the optical flow field while improving the accuracy of dense optical flow estimation. Experiments on the Middlebury datasets demonstrate the superiority of the proposed algorithm.
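The fractional differential masks mentioned above are commonly built from Grünwald–Letnikov coefficients. A minimal NumPy sketch of such a mask applied along both image axes (truncated mask length, zero handling at the border; the function names are illustrative, not the paper's exact masks):

```python
import numpy as np

def gl_mask(alpha, size):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed
    with the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(size)
    w[0] = 1.0
    for k in range(1, size):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_gradient(image, alpha, size=3):
    """Alpha-order backward differences along x and y via a truncated
    G-L mask -- a sketch of a 2-D fractional differential mask."""
    w = gl_mask(alpha, size)
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    for k in range(size):
        gx[:, k:] += w[k] * image[:, :image.shape[1] - k]
        gy[k:, :] += w[k] * image[:image.shape[0] - k, :]
    return gx, gy
```

For alpha = 1 this reduces to the ordinary backward difference used by the HS model; a fractional alpha spreads nonzero weights over several neighboring pixels, which is the "memory" the abstract refers to.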


2019, Vol 13 (3), pp. 277-284
Author(s): Bin Zhu, Lian‐Fang Tian, Qi‐Liang Du, Qiu‐Xia Wu, Farisi Zeyad Sahl, ...

Author(s): Dali Chen, Hu Sheng, YangQuan Chen, Dingyü Xue

A new class of fractional-order variational optical flow models, which generalizes the differential order of optical flow from integer to fractional, is proposed for motion estimation in this paper. The corresponding Euler–Lagrange equations are derived by solving a typical fractional variational problem, and a numerical implementation based on the Grünwald–Letnikov fractional derivative definition is proposed to solve these complicated fractional partial differential equations. Theoretical analysis reveals that the proposed fractional-order variational optical flow model generalizes both the classical Horn and Schunck (first-order) variational model and the second-order variational model, which offers a new perspective for studying the optical flow model and has important theoretical implications for optical flow research. The experiments demonstrate the validity of generalizing the differential order.
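The generalization claim can be checked numerically: the Grünwald–Letnikov weights evaluated at integer orders reduce exactly to the classical finite-difference stencils. A small sketch (truncated weight sequence; illustrative, not the paper's code):

```python
import numpy as np

def gl_weights(alpha, n):
    """First n Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

# Integer orders recover the classical difference stencils:
print(gl_weights(1.0, 4))  # -> [ 1. -1.  0.  0.], the first-order stencil
print(gl_weights(2.0, 4))  # -> [ 1. -2.  1.  0.], the second-order stencil
# A fractional order interpolates between them with slowly decaying,
# long-memory weights:
print(gl_weights(1.5, 4))
```

This is why the fractional model contains both the Horn–Schunck (first-order) and the second-order variational models as special cases: setting alpha to 1 or 2 truncates the weight sequence to the familiar stencils.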


Sensors, 2021, Vol 21 (11), pp. 3722
Author(s): Byeongkeun Kang, Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences caused by the relative movement between a camera and a scene. Motion, as well as scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, although driver attention prediction models focusing on scene appearance have been well studied, the fact that motion can be a crucial factor in estimating a driver’s attention has not been thoroughly studied in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver’s visual attention. To analyze the effectiveness of motion information, we develop a deep neural network framework that provides attention locations and attention levels using optical flow maps, which represent the movements of contents in videos. We validate the performance of the proposed motion-based prediction model by comparing it to the performance of current state-of-the-art prediction models that use RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to improving prediction accuracy and that motion features leave a margin for further improvement.
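As an illustration of the kind of motion input such a model consumes, a dense optical flow field can be converted into per-pixel magnitude and orientation maps. The function name and normalization below are assumptions for the sketch, not the paper's preprocessing:

```python
import numpy as np

def flow_to_motion_features(flow):
    """Convert a dense optical flow field of shape (H, W, 2) into
    per-pixel magnitude and orientation maps -- a simple motion
    representation a network could consume alongside RGB frames."""
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)                 # flow speed per pixel
    ang = np.arctan2(v, u)               # flow direction, radians in (-pi, pi]
    mag_norm = mag / (mag.max() + 1e-8)  # normalise magnitude to [0, 1]
    return mag_norm, ang
```

In practice the flow field itself would come from an optical flow estimator run on consecutive frames; here the focus is only on turning it into feature maps.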


Electronics, 2021, Vol 10 (3), pp. 222
Author(s): Baigan Zhao, Yingping Huang, Hongjian Wei, Xing Hu

Visual odometry (VO) refers to the incremental estimation of the motion state of an agent (e.g., a vehicle or robot) from image information, and is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimating camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that the motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in a self-encoding manner. A recurrent neural network is then applied to examine the OF changes, i.e., to conduct sequential learning. The extracted sequential OF subspace is used to regress the 6-dimensional pose vector. We derive three models with different network structures and training schemes: LS-CNN-VO, LS-AE-VO, and LS-RCNN-VO. In particular, we train the encoder separately in an unsupervised manner; this avoids non-convergence when training the whole network and allows a more generalized and effective feature representation. Extensive experiments have been conducted on the KITTI and Malaga datasets, and the results demonstrate that our LS-RCNN-VO outperforms existing learning-based VO approaches.
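The "incremental estimation" step of VO chains per-frame 6-dimensional pose predictions into an absolute trajectory by composing SE(3) transforms. A minimal NumPy sketch (the ZYX Euler-angle convention and pose layout are assumptions; datasets define their own conventions):

```python
import numpy as np

def pose_to_T(pose):
    """6-D pose vector [tx, ty, tz, roll, pitch, yaw] -> 4x4 SE(3) matrix."""
    tx, ty, tz, r, p, y = pose
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # ZYX Euler convention (an assumption)
    T[:3, 3] = [tx, ty, tz]
    return T

def chain_poses(relative_poses):
    """Accumulate frame-to-frame poses into absolute camera poses."""
    T = np.eye(4)
    trajectory = [T.copy()]
    for pose in relative_poses:
        T = T @ pose_to_T(pose)   # compose each relative motion
        trajectory.append(T.copy())
    return trajectory
```

A network like the ones described would supply the `relative_poses`; the chaining itself is what makes the estimation "incremental", and it is also why per-frame errors accumulate into drift.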

