The Art of Video Encoding: Optimizing MPEG Video Compression through Human-Assist Methods

Author(s):  
Mauro Bonomi
Author(s):  
Hanadi Yahya Darwisho

Given the increasing use of mobile devices and the high demand for the applications they provide, including video streaming, many companies have turned their attention to mobile ad hoc networks and to solutions for the problems that hinder video transmission over this type of network. One class of solutions operates at the level of video compression technology: several standards used for video encoding provide good video quality with few bits, offer acceptable bandwidth to the user, and handle errors flexibly; the principal such standard is H.264/MPEG-4 Part 10. Another class of solutions operates at the routing level during real-time video transmission over mobile ad hoc networks. This paper therefore studies the OLSR routing protocol, which supports video transmission, in terms of delay, network load, and throughput, and evaluates its performance under different node placement models in both a large and a small network, as well as for different video resolutions.


Video compression is a complex and time-consuming task that generally pursues high performance. The Motion Estimation (ME) process in any video encoder is primarily responsible for this performance, contributing significant compression gain. The Sum of Absolute Differences (SAD) is widely applied as the distortion metric for ME. The increase in block size to 64×64 for real-time applications, along with the introduction of Asymmetric Motion Partitioning (AMP) in High Efficiency Video Coding (HEVC), makes variable-block-size motion estimation very convoluted, increasing computational time and demanding significant hardware resources. In this paper, a parallel SAD hardware circuit for the ME process in HEVC is proposed, with parallelism exploited at several levels. The proposed circuit has been implemented on a Xilinx Virtex-5 FPGA (XC5VLX20T family). Synthesis results show that the proposed circuit provides a significant reduction in delay and an increase in operating frequency compared with other parallel architectures.
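The SAD metric that the circuit parallelizes is simple to state in software. A minimal reference sketch (Python/NumPy, illustrative only; it does not reflect the proposed parallel hardware architecture) might look like:

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of Absolute Differences between two same-size pixel blocks."""
    diff = block_a.astype(np.int32) - block_b.astype(np.int32)
    return int(np.abs(diff).sum())

# Toy 4x4 blocks; HEVC evaluates block sizes up to 64x64.
a = np.array([[10, 12, 11, 9]] * 4, dtype=np.uint8)
b = np.array([[11, 12, 10, 9]] * 4, dtype=np.uint8)
print(sad(a, b))  # -> 8 (per-row difference of 2, over 4 rows)
```

A hardware circuit parallelizes exactly this reduction: the per-pixel absolute differences are mutually independent and can be computed concurrently, with an adder tree performing the final summation.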


2018, Vol 8 (1), pp. 38-56
Author(s):  
Shailesh D. Kamble ◽  
Sonam T. Khawase ◽  
Nileshsingh V. Thakur ◽  
Akshay V. Patharkar

Motion estimation has traditionally been used only in video encoding, but it can also be applied to various real-world problems, and researchers from different fields are now turning to it. Motion estimation is a central challenge in many video applications: it is a very important part of any video compression technique, providing improved bit-rate reduction and coding efficiency. The motion estimation process improves compression quality and also reduces computation time. Block-based motion estimation algorithms are preferred because they require less memory for processing a video file and reduce computational complexity. This article discusses various block-matching motion estimation algorithms, such as Full Search (FS, also called Exhaustive Search), Three-Step Search (TSS), New Three-Step Search (NTSS), Four-Step Search (FSS), and Diamond Search (DS).
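As an illustration of how these fast methods cut down the number of candidates, here is a minimal Python sketch of the Three-Step Search; the 8×8 block size and initial step of 4 (giving roughly a ±7 search range) are illustrative choices, not prescribed by the article:

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences between two same-size blocks."""
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).sum())

def three_step_search(ref, cur, top, left, block=8, step=4):
    """Motion vector for the block of `cur` at (top, left): probe 9 points
    around the current center, move to the best, halve the step to 1."""
    h, w = ref.shape
    target = cur[top:top + block, left:left + block]
    cy, cx = top, left
    best = sad(ref[cy:cy + block, cx:cx + block], target)
    while step >= 1:
        by, bx = cy, cx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(ref[y:y + block, x:x + block], target)
                    if cost < best:
                        best, by, bx = cost, y, x
        cy, cx = by, bx
        step //= 2
    return (cy - top, cx - left), best
```

Where Full Search over a ±7 range evaluates 225 candidate positions, TSS examines at most 25 (9 + 8 + 8), which is why it and its refinements (NTSS, FSS, DS) dominate practical encoders.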


2007, Vol 17 (04), pp. 289-304
Author(s):  
NICOLAS TSAPATSOULIS ◽  
KONSTANTINOS RAPANTZIKOS ◽  
CONSTANTINOS PATTICHIS

In this paper we propose a novel saliency-based computational model of visual attention. The model processes both top-down (goal-directed) and bottom-up information. Processing in the top-down channel creates the so-called skin conspicuity map and emulates the visual search for human faces that humans perform; this is clearly a goal-directed task, but one generic enough to be context-independent. Processing in the bottom-up channel follows the principles set out by Itti et al. but deviates from them by computing the orientation, intensity, and color conspicuity maps within a unified multi-resolution framework based on wavelet subband analysis. In particular, we apply a wavelet-based approach for efficient computation of the topographic feature maps. Given that wavelets and multiresolution theory are naturally connected, using wavelet decomposition to mimic the center-surround process in human vision is an obvious choice. Our implementation goes further, however: we use the wavelet decomposition for inline computation of the features (such as orientation angles) that build the topographic feature maps. The bottom-up topographic feature maps and the top-down skin conspicuity map are then combined through a sigmoid function to produce the final saliency map. A prototype of the proposed model was realized on the TMDSDMK642-0E DSP platform as an embedded system allowing real-time operation. For evaluation, in terms of perceived visual quality and video compression improvement, a ROI-based video compression setup was used. Extensive experiments on both MPEG-1 and low-bit-rate MPEG-4 video encoding show significant improvement in video compression efficiency without perceived deterioration in visual quality.
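The abstract does not spell out the parameters of the sigmoid fusion stage, but its role can be sketched as follows (Python; the equal channel weighting and the gain/bias values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def combine_saliency(bottom_up: np.ndarray, skin_map: np.ndarray,
                     gain: float = 6.0, bias: float = 0.5) -> np.ndarray:
    """Fuse a bottom-up conspicuity map with the top-down skin conspicuity
    map through a sigmoid, producing a saliency map with values in (0, 1)."""
    combined = 0.5 * (bottom_up + skin_map)  # equal weighting (assumption)
    return 1.0 / (1.0 + np.exp(-gain * (combined - bias)))

# Pixels supported by both channels saturate toward 1; unsupported ones toward 0.
sal = combine_saliency(np.array([[0.2, 0.9]]), np.array([[0.1, 0.8]]))
```

The sigmoid's soft thresholding suppresses weakly salient regions while keeping strongly salient ones near full strength, which is what a ROI-based encoder needs in order to concentrate bits on regions of interest.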


2015, Vol 1 (4), pp. 427
Author(s):  
Marwa Kamel Hussien ◽  
Hameed Abdul-Kareem Younis

Multimedia technology is now in widespread use. Video compression saves storage space and also improves the transmission efficiency of network communications. In common video compression methods, the first frame of a video is compressed independently as a still image; this is called an intra-coded frame. The remaining frames are compressed by estimating the disparity between adjacent frames; these are called inter-coded frames. In this paper, the Discrete Wavelet Transform (DWT) is used as a powerful tool for video compression. Our coder achieves a good trade-off between compression ratio and the quality of the reconstructed video. Motion estimation and compensation, an essential part of the compression, is based on segment movements. The disparity between each pair of frames is estimated with the Four-Step Search (4SS) algorithm. The resulting Motion Vectors (MVs) are encoded into a bit stream by Huffman coding, while the remaining part is compressed in the same way as the intra frame. Experimental results show good performance in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR), and processing time.
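The two quality figures reported here have standard definitions; a short sketch (Python, assuming 8-bit frames so the peak value is 255) shows how they are computed:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two frames."""
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak * peak / mse)

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    """CR: size of the raw stream divided by the size of the compressed one."""
    return raw_bytes / compressed_bytes
```

The two metrics trade off against each other: coarser quantization of the DWT coefficients raises CR but lowers PSNR, which is the balance the coder described above aims for.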


In video codecs, researchers mainly focus on improving compression performance: achieving higher compression rates while obtaining high video quality after encoding at low bitrates. A great deal of satisfactory research has been done in the field of video encoders. The recently introduced HEVC (H.265) is a high-efficiency video coding standard that roughly doubles video quality at a given bit-rate compared with its predecessor codecs. In this work, we focus on the performance and quality of Motion JPEG, H.264, and H.265 using different video encoding libraries. High efficiency in video compression is strongly required to handle computationally complex video codecs. Although HEVC compresses more efficiently, its computational cost is significantly higher than that of H.264. In the experiments conducted, HEVC showed better compression quality than H.264. Motion JPEG required far less time than H.264, but it generated the worst encoded video quality when using the OpenJPEG library. H.264 had the slowest encoding speed of the evaluated encoders, but it usually generated better video quality than Motion JPEG (Kakadu) encoded videos. This paper therefore focuses on video codecs and their future development.


Author(s):  
Mosa Salah ◽  
Ahmad A. Mazhar ◽  
Manar Mizher

Cloud computing is a technology model that offers access to system resources with an advanced level of service. These resources are considered reliable, flexible, and affordable for many kinds of applications and users. The gaming industry is one field that reaps the benefits of cloud computing, and numerous new cloud gaming designs have been presented. The many advantages of cloud gaming, building on improvements over traditional online gaming, have amplified its success. However, cloud gaming suffers from several drawbacks, such as the massive amount of video processing required and the associated computational complexity. This paper analyzes the drawbacks of the original system and develops a new algorithm that speeds up the encoding process by reducing computational complexity, exploiting block type and location. The enhancements to the video codec yield a 12.2% speed-up in overall encoding time with only a slight loss in user satisfaction. Keywords: Cloud gaming, Computational complexity, Motion estimation, HEVC, Video encoding

