Fast Tracking of the Left Ventricle Using Global Anatomical Affine Optical Flow and Local Recursive Block Matching

2014
Author(s):
Daniel Barbosa,
Denis Friboulet,
Jan D'hooge,
Olivier Bernard

We present a novel method for segmentation and tracking of the left ventricle (LV) in 4D ultrasound sequences, combining automatic segmentation at the end-diastolic frame with tracking by both a global optical-flow-based tracker and local block matching. The core novelty of the proposed algorithm lies in the recursive formulation of the block-matching problem, which introduces temporal consistency on the patterns being tracked. The proposed method offers a competitive solution, with average segmentation errors of 2.29 mm and 2.26 mm in the training (n = 15) and testing (n = 15) datasets, respectively.
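The recursive block-matching idea can be illustrated with a minimal sketch (not the authors' implementation): a small template is matched against each new frame by sum-of-squared-differences, and instead of always matching against the first frame, the template is recursively blended with each matched patch, so the tracked pattern stays temporally consistent. The window sizes, the SSD criterion, and the blending weight `alpha` are illustrative assumptions.

```python
import numpy as np

def block_match(template, frame, center, search=4):
    """Find the (dy, dx) offset around `center` minimizing SSD to `template`."""
    h, w = template.shape
    best, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y = center[0] + dy - h // 2
            x = center[1] + dx - w // 2
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue  # skip windows that fall outside the frame
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

def recursive_track(frames, center, size=8, alpha=0.5):
    """Track a point across frames, recursively blending the template with
    each matched patch (running average) to enforce temporal consistency."""
    y, x = center
    template = frames[0][y - size // 2:y + size // 2,
                         x - size // 2:x + size // 2].astype(float)
    path = [center]
    for frame in frames[1:]:
        dy, dx = block_match(template, frame.astype(float), (y, x))
        y, x = y + dy, x + dx
        patch = frame[y - size // 2:y + size // 2,
                      x - size // 2:x + size // 2].astype(float)
        template = alpha * template + (1 - alpha) * patch  # recursive update
        path.append((y, x))
    return path
```

On a synthetic sequence where a bright block drifts one pixel per frame, the tracker follows the block exactly; the recursive template update is what distinguishes this from plain frame-to-frame block matching.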

Author(s):
Xueqing Liu,
Xinghao Jiang,
Tanfeng Sun,
Ajing Xu

2020
Vol 34 (07)
pp. 10713-10720
Author(s):
Mingyu Ding,
Zhe Wang,
Bolei Zhou,
Jianping Shi,
Zhiwu Lu,
...

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, which prevents most supervised methods from exploiting the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flow, which encodes temporal consistency to improve video segmentation. However, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation provides semantic information for handling occlusion, yielding more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can exploit both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference. Extensive experiments show that the proposed model lets video semantic segmentation and optical flow estimation benefit from each other and outperforms existing methods in both tasks under the same settings.
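The coupling described above — non-occluded flow supplying pixel-level temporal correspondences for the segmentation — can be sketched in a minimal numpy form. This is an illustration, not the paper's loss: it assumes a forward-backward flow check as the occlusion criterion (a common choice) and nearest-neighbor warping, and it penalizes segmentation disagreement only on pixels the check marks as non-occluded.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp `img` by `flow` (nearest-neighbor, clipped at borders).
    out[y, x] = img[y + flow[y, x, 1], x + flow[y, x, 0]]."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def occlusion_mask(flow_fw, flow_bw, tol=0.5):
    """Forward-backward check: a pixel is non-occluded when the backward
    flow, warped by the forward flow, roughly cancels the forward flow."""
    residual = flow_fw + warp(flow_bw, flow_fw)
    return np.linalg.norm(residual, axis=-1) < tol

def temporal_consistency_loss(seg_t, seg_t1, flow_fw, flow_bw):
    """Mean segmentation disagreement between frame t and the warped
    frame t+1, measured only on non-occluded pixels."""
    mask = occlusion_mask(flow_fw, flow_bw)
    diff = np.abs(seg_t - warp(seg_t1, flow_fw))
    return (diff * mask).sum() / max(mask.sum(), 1)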


2008
Vol 05 (03)
pp. 223-233
Author(s):
Rong Liu,
Max Q. H. Meng

Time-to-contact (TTC) provides vital information for obstacle avoidance and for the visual navigation of a robot. In this paper, we present a novel method to estimate the TTC of a moving object for monocular mobile robots. Specifically, the contour of the moving object is first extracted using an active contour model; the height of the motion contour and its temporal derivative are then evaluated to generate the desired TTC estimates. Compared with conventional techniques employing the first-order derivatives of optical flow, the proposed estimator is less prone to optical-flow errors. Experiments on real-world images demonstrate that the developed method can estimate TTC with an average relative error (ARVE) of 0.039 using a single calibrated camera.
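The height-based estimator rests on the standard projective relation: for an object approaching at constant speed, its image height h grows such that TTC = h / (dh/dt), with no need for camera-to-object distance. A minimal sketch, using a finite difference for the temporal derivative (the discretization and the receding-object fallback are illustrative assumptions):

```python
def ttc_from_height(heights, dt=1.0):
    """Time-to-contact from the image height of a tracked contour:
    TTC = h / (dh/dt), with dh/dt approximated by a backward difference
    over the last two height measurements, spaced `dt` seconds apart."""
    h_prev, h_curr = heights[-2], heights[-1]
    dh_dt = (h_curr - h_prev) / dt
    if dh_dt <= 0:
        return float("inf")  # object receding or stationary: no contact
    return h_curr / dh_dt
```

For example, an object at 10 m closing at 1 m/s projects to heights proportional to 1/Z, and the estimator recovers a TTC of about 10 s from two consecutive height measurements alone.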

