A Novel Framework for Frame Rate Up Conversion by Predictive Variable Block-Size Motion Estimated Optical Flow

Author(s):  
Shing-Fat Tu ◽  
Oscar C. Au ◽  
Yannan Wu ◽  
Enming Luo ◽  
Chi-Ho Yeung
2016 ◽  
Vol 850 ◽  
pp. 121-128
Author(s):  
Şükrü Görgülü ◽  
Ömer Nezih Gerek

This study introduces a frame-rate up-conversion method that applies a temporal wavelet zerotree-based shrinkage algorithm along the motion trajectories of a video obtained by optical flow. The method first performs optical flow estimation to predict initial estimates of the inserted frame's pixels. The predicted pixels are then denoised with a wavelet-based algorithm in which each pixel location is examined independently along its own temporal motion path. Denoising is performed by shrinking zerotree footprints to remove temporal oddities. The resulting video was observed to have a more fluent temporal flow than optical-flow-only interpolation.
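The pipeline the abstract describes (optical-flow interpolation of the inserted frame, then wavelet shrinkage along each pixel's temporal trajectory) can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a one-level Haar transform with soft thresholding for the zerotree-footprint shrinkage, and the function names, the threshold `t`, and the symmetric-motion midpoint blend are all assumptions.

```python
# Minimal sketch: temporal Haar shrinkage along one pixel's motion
# trajectory, standing in for the paper's zerotree-footprint shrinkage.

def haar_1d(x):
    """One-level Haar transform of an even-length sequence."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    """Inverse of haar_1d: rebuild the sequence pairwise."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x

def soft_threshold(c, t):
    """Shrink a coefficient toward zero by t (soft thresholding)."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

def denoise_trajectory(traj, t=2.0):
    """Suppress temporal oddities along one pixel's motion trajectory
    by shrinking its detail coefficients (threshold t is illustrative)."""
    approx, detail = haar_1d(traj)
    detail = [soft_threshold(d, t) for d in detail]
    return inv_haar_1d(approx, detail)

def interpolate_midframe(prev_val, next_val):
    """Initial estimate of the inserted frame's pixel: linear blend
    along the optical-flow trajectory (assumed symmetric motion)."""
    return 0.5 * (prev_val + next_val)
```

For example, a trajectory with a one-frame spike, such as `[100, 101, 160, 102, 101, 100]`, has its spike attenuated by `denoise_trajectory`, while a threshold of zero reconstructs the trajectory exactly.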


2010 ◽  
Vol 130 (8) ◽  
pp. 1431-1439 ◽  
Author(s):  
Hiroki Matsumoto ◽  
Fumito Kichikawa ◽  
Kazuya Sasazaki ◽  
Junji Maeda ◽  
Yukinori Suzuki

Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 991
Author(s):  
Yuta Nakahara ◽  
Toshiyasu Matsushima

In information theory, lossless compression of general data rests on an explicit assumption of a stochastic generative model for the target data. In lossless image compression, however, researchers have mainly focused on the coding procedure that maps the input image to the coded sequence, leaving the stochastic generative model implicit. This makes it difficult to discuss the gap between the expected code length and the entropy of the stochastic generative model. We resolve this difficulty for a class of images that exhibit non-stationarity among segments. In this paper, we propose a novel stochastic generative model of images by making explicit the implicit model underlying a previous coding procedure. Our model is based on the quadtree, so it effectively represents variable block-size segmentation of images. We then construct the Bayes code that is optimal for the proposed model. It requires a summation over all possible quadtrees weighted by their posteriors; in general, its computational cost grows exponentially with the image size. However, we introduce an efficient algorithm that computes it in time polynomial in the image size without loss of optimality. The derived algorithm achieves a better average coding rate than that of JBIG.
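The two key ideas in the abstract, quadtree segmentation into variable-size blocks and a recursive mixture over all quadtrees, can be sketched as below. This is an assumed illustration, not the paper's construction: the leaf model is a standard Krichevsky-Trofimov (KT) estimator for binary pixels, and the 1/2-1/2 split weights in `weighted_prob` are a conventional context-tree-weighting-style choice; the paper's exact posterior weights may differ.

```python
# Sketch 1: quadtree segmentation — split a square block until uniform.
def quadtree_segment(img, x, y, size):
    """Return a leaf value for a uniform block, or a 4-tuple of
    sub-blocks (NW, NE, SW, SE) when the block is mixed."""
    first = img[y][x]
    uniform = all(img[y + dy][x + dx] == first
                  for dy in range(size) for dx in range(size))
    if uniform or size == 1:
        return first
    h = size // 2
    return (quadtree_segment(img, x, y, h),
            quadtree_segment(img, x + h, y, h),
            quadtree_segment(img, x, y + h, h),
            quadtree_segment(img, x + h, y + h, h))

# Sketch 2: mixture over all quadtrees in polynomial time.
def kt_prob(img, x, y, size):
    """KT sequential probability of the block's binary pixels."""
    p, zeros, ones = 1.0, 0, 0
    for dy in range(size):
        for dx in range(size):
            bit = img[y + dy][x + dx]
            p *= ((ones if bit else zeros) + 0.5) / (zeros + ones + 1)
            if bit:
                ones += 1
            else:
                zeros += 1
    return p

def weighted_prob(img, x, y, size, leaf_prob):
    """Mix 'stop here' and 'split into four' with weight 1/2 each.
    One recursion per block gives the sum over all quadtrees in time
    polynomial in the image size (assumed weights; see lead-in)."""
    p_leaf = leaf_prob(img, x, y, size)
    if size == 1:
        return p_leaf
    h = size // 2
    p_split = (weighted_prob(img, x, y, h, leaf_prob) *
               weighted_prob(img, x + h, y, h, leaf_prob) *
               weighted_prob(img, x, y + h, h, leaf_prob) *
               weighted_prob(img, x + h, y + h, h, leaf_prob))
    return 0.5 * p_leaf + 0.5 * p_split
```

For instance, a 4x4 image whose top-left 2x2 quadrant is all ones and whose remaining quadrants are all zeros segments to the tree `(1, 0, 0, 0)`; the recursion in `weighted_prob` touches each block once per level, which is why the exponential sum over quadtrees collapses to polynomial cost.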

