Effect of Depth Order on Linear Vection with Optical Flows

i-Perception ◽  
10.1068/i0671 ◽  
2014 ◽  
Vol 5 (7) ◽  
pp. 630-640 ◽  
Author(s):  
Yasuhiro Seya ◽  
Takayuki Tsuji ◽  
Hiroyuki Shinoda

2020 ◽
Vol 34 (07) ◽  
pp. 10713-10720
Author(s):  
Mingyu Ding ◽  
Zhe Wang ◽  
Bolei Zhou ◽  
Jianping Shi ◽  
Zhiwu Lu ◽  
...  

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, which prevents most supervised methods from exploiting information in the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flows, which encode temporal consistency, to improve video segmentation. However, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation provides semantic information that helps handle occlusion for more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can exploit both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference time. Extensive experiments show that the proposed model allows video semantic segmentation and optical flow estimation to benefit from each other and outperforms existing methods under the same settings on both tasks.
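
The coupling described above, in which non-occluded flow supplies pixel-level correspondences that keep segmentation temporally consistent, can be illustrated with a short sketch. The following Python/PyTorch code is a minimal, assumed illustration rather than the authors' implementation: it backward-warps the previous frame's segmentation logits with the estimated flow and penalizes disagreement only on non-occluded pixels. All function and tensor names are hypothetical.

# A minimal sketch (not the authors' implementation) of enforcing temporal
# consistency between segmentation predictions of consecutive frames using
# non-occluded optical flow.
import torch
import torch.nn.functional as F

def warp_with_flow(features, flow):
    """Backward-warp `features` (N, C, H, W) with `flow` (N, 2, H, W) in pixels."""
    n, _, h, w = features.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=features.device, dtype=features.dtype),
        torch.arange(w, device=features.device, dtype=features.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # x + u
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # y + v
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(features, grid, align_corners=True)

def temporal_consistency_loss(logits_t, logits_tm1, flow_t_to_tm1, occlusion_mask):
    """Penalize segmentation disagreement only on non-occluded pixels.

    occlusion_mask: (N, 1, H, W), 1 where a pixel is visible in both frames.
    """
    warped_prev = warp_with_flow(logits_tm1, flow_t_to_tm1)
    diff = (F.softmax(logits_t, dim=1) - F.softmax(warped_prev, dim=1)).abs()
    return (diff * occlusion_mask).sum() / occlusion_mask.sum().clamp(min=1.0)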


2004 ◽  
Vol 4 (10) ◽  
pp. 1 ◽  
Author(s):  
Jay Hegdé ◽  
Thomas D. Albright ◽  
Gene R. Stoner

Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2523 ◽  
Author(s):  
Gangik Cho ◽  
Jongyun Kim ◽  
Hyondong Oh

Owing to the payload restrictions of micro aerial vehicles (MAVs), vision-based approaches have been widely studied for their light weight and cost effectiveness. In particular, optical-flow-based obstacle avoidance has proven to be one of the most efficient methods in terms of avoidance capability and computational load; however, existing approaches do not consider complex 3-D environments. In addition, most approaches are unable to deal with situations where there are wall-like frontal obstacles. Although some algorithms consider wall-like frontal obstacles, they cause jitter or unnecessary motion. To address these limitations, this paper proposes a vision-based obstacle avoidance algorithm for MAVs using optical flow in 3-D textured environments. The image obtained from a monocular camera is first split into horizontal and vertical half-planes. The desired heading direction and climb rate are then determined by comparing the sums of optical flow between the horizontal and vertical half-planes, respectively, enabling obstacle avoidance in 3-D environments. In addition, the proposed approach avoids wall-like frontal obstacles by considering the divergence of the optical flow at the focus of expansion, and it navigates to the goal position using a sigmoid weighting function. The performance of the proposed algorithm was validated through numerical simulations and indoor flight experiments in various situations.
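
The balance-of-flow steering rule and the frontal-obstacle check described above can be sketched as follows. This Python snippet is an illustrative approximation, not the authors' code: the gains, the use of the image center as a proxy for the focus of expansion, and the sigmoid constant are all assumptions.

# A minimal sketch of flow-balance obstacle avoidance: compare summed
# optical-flow magnitudes between left/right and top/bottom half-planes to
# steer away from the side with more flow (closer obstacles), and use flow
# divergence near the image center as a proxy for a wall-like frontal obstacle.
import numpy as np

def avoidance_commands(flow, k_yaw=1.0, k_climb=1.0, k_frontal=1.0):
    """flow: (H, W, 2) optical-flow field in pixels/frame.

    Returns (yaw_rate, climb_rate, frontal_alert) steering suggestions.
    """
    h, w, _ = flow.shape
    mag = np.linalg.norm(flow, axis=2)

    # Horizontal balance: more flow on the left -> obstacles closer on the left.
    left, right = mag[:, : w // 2].sum(), mag[:, w // 2 :].sum()
    yaw_rate = k_yaw * (left - right) / (left + right + 1e-6)

    # Vertical balance: more flow below -> climb; more flow above -> descend.
    top, bottom = mag[: h // 2, :].sum(), mag[h // 2 :, :].sum()
    climb_rate = k_climb * (bottom - top) / (top + bottom + 1e-6)

    # Flow divergence around the image center (approximate focus of expansion)
    # grows quickly when approaching a wall-like frontal obstacle.
    cy, cx = h // 2, w // 2
    patch_u = flow[cy - 10 : cy + 10, cx - 10 : cx + 10, 0]
    patch_v = flow[cy - 10 : cy + 10, cx - 10 : cx + 10, 1]
    divergence = np.gradient(patch_u, axis=1).mean() + np.gradient(patch_v, axis=0).mean()
    frontal_alert = 1.0 / (1.0 + np.exp(-k_frontal * divergence))  # sigmoid weighting

    return yaw_rate, climb_rate, frontal_alert

In practice the avoidance commands would be blended with the goal-seeking heading, weighting avoidance more strongly as frontal_alert approaches one.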


2020 ◽  
Vol 34 (07) ◽  
pp. 12233-12240
Author(s):  
Wenjing Wang ◽  
Jizheng Xu ◽  
Li Zhang ◽  
Yue Wang ◽  
Jiaying Liu

Recently, neural style transfer has drawn much attention and significant progress has been made, especially for image style transfer. However, flexible and consistent style transfer for videos remains a challenging problem. Existing training strategies, which either use large amounts of video data with optical flows or introduce single-frame regularizers, have limited performance on real videos. In this paper, we propose a novel interpretation of temporal consistency, based on which we analyze the drawbacks of existing training strategies and then derive a new compound regularization. Experimental results show that the proposed regularization better balances spatial and temporal performance, which supports our modeling. Combined with the new cost formula, we design a zero-shot video style transfer framework. Moreover, for better feature migration, we introduce a new module that dynamically adjusts inter-channel distributions. Quantitative and qualitative results demonstrate the superiority of our method over other state-of-the-art style transfer methods. Our project is publicly available at: https://daooshee.github.io/CompoundVST/.
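
The abstract does not detail the proposed inter-channel adjustment module, but the general idea of migrating features by matching channel-wise statistics can be illustrated with a standard AdaIN-style operation. The Python sketch below is only an analogy to that idea and is not the module from the paper.

# Channel-wise feature-statistics alignment (AdaIN-style), shown only to
# illustrate the general notion of adjusting feature distributions for style
# migration; the paper's dynamic inter-channel module is not specified here.
import torch

def align_channel_statistics(content_feat, style_feat, eps=1e-5):
    """Shift each channel of `content_feat` to match the per-channel mean and
    standard deviation of `style_feat`. Both tensors are (N, C, H, W)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean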

