Quantitative Evaluation of Dense Skeletons for Image Compression

Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 274
Author(s):  
Jieying Wang ◽  
Maarten Terpstra ◽  
Jiří Kosinka ◽  
Alexandru Telea

Skeletons are well-known descriptors used for the analysis and processing of 2D binary images. Recently, dense skeletons have been proposed as an extension of classical skeletons, serving as a dual encoding for 2D grayscale and color images. Yet their encoding power, measured by the quality and size of the encoded image, and how these metrics depend on the selected encoding parameters, has not been formally evaluated. In this paper, we fill this gap with two main contributions. First, we improve the encoding power of dense skeletons through effective layer-selection heuristics, a refined skeleton pixel-chain encoding, and a postprocessing compression scheme. Second, we propose a benchmark to assess the encoding power of dense skeletons on a wide set of natural and synthetic color and grayscale images, and we use this benchmark to derive optimal parameters for dense skeletons. Our method, called Compressing Dense Medial Descriptors (CDMD), achieves higher compression ratios at similar quality compared to the well-known JPEG technique and thereby shows that skeletons can be an interesting option for lossy image encoding.
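The dual encoding above rests on the classical medial-axis property that a binary shape is exactly the union of maximal disks centred on its skeleton pixels. A minimal reconstruction sketch for a single layer follows; the paper's layer-selection heuristics and pixel-chain coding are not shown, and the `skeleton` dictionary format (pixel coordinates mapped to radii) is an assumed illustration, not the authors' file format:

```python
import numpy as np

def reconstruct_layer(skeleton, shape):
    """Reconstruct a binary layer from its medial descriptor: the union of
    disks centred at skeleton pixels with their stored radii. This is the
    standard medial-axis reconstruction that dense-skeleton encodings
    build on; `skeleton` maps (y, x) -> radius."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    out = np.zeros(shape, dtype=bool)
    for (y, x), r in skeleton.items():
        # mark every pixel within distance r of the skeleton point
        out |= (ys - y) ** 2 + (xs - x) ** 2 <= r * r
    return out
```

A grayscale image is then encoded as a stack of such layers, one per selected threshold.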

Author(s):  
Saif alZahir ◽  
Syed M. Naqvi

In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. The scheme contains five new contributions. The lossless component partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. 1) It reduces the number of rectangles in the input image using mathematical regression models. These models guarantee image quality, so that the rectangle reduction does not produce visual distortion in the image; they were obtained through subjective tests and regression analysis on a large set of binary images. 2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed scheme provides significant improvements over previously published work for both the lossy and the lossless components.
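The line-by-line partitioning step can be illustrated with a greedy scan that grows a rectangle while consecutive rows repeat the same horizontal run of 1-pixels. This is a hypothetical sketch of one such rule, since the abstract does not specify the exact method; the function name and corner-tuple format are illustrative only:

```python
def partition_into_rectangles(img):
    """Greedy line-by-line partition of a binary image (list of 0/1 rows)
    into non-overlapping rectangles of 1-pixels. Each rectangle is returned
    as (top, left, bottom, right) with inclusive corners, so it can be
    encoded by its upper-left and lower-right vertices."""
    rows, cols = len(img), len(img[0])
    open_rects = {}   # (left, right) run -> top row of a still-growing rectangle
    rects = []
    for r in range(rows + 1):
        # collect horizontal runs of 1s on this row (empty past the last row)
        runs = set()
        if r < rows:
            c = 0
            while c < cols:
                if img[r][c] == 1:
                    start = c
                    while c < cols and img[r][c] == 1:
                        c += 1
                    runs.add((start, c - 1))
                else:
                    c += 1
        # close rectangles whose run did not repeat exactly on this row
        for span in list(open_rects):
            if span not in runs:
                rects.append((open_rects.pop(span), span[0], r - 1, span[1]))
            else:
                runs.discard(span)  # run continues; not a new rectangle
        # remaining runs start new rectangles
        for span in runs:
            open_rects[span] = r
    return rects
```

On the 3x3 image with a 2x2 block of 1s and one isolated 1-pixel, this yields one 2x2 rectangle and one 1-pixel rectangle; the lossy component would then discard the latter.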


Sensor Review ◽  
2019 ◽  
Vol 39 (4) ◽  
pp. 542-553
Author(s):  
Shujing Zhang ◽  
Manyu Zhang ◽  
Yujie Cui ◽  
Xingyue Liu ◽  
Bo He ◽  
...  

Purpose: This paper aims to propose a fast machine compression scheme that solves the problem of low-bandwidth transmission of underwater images.
Design/methodology/approach: The scheme consists of three stages. First, raw images are fed into an image pre-processing module specially designed for underwater color images. Second, a divide-and-conquer (D&C) image compression framework divides the image compression problem into manageable sizes, and an extreme learning machine (ELM) is introduced as a substitute for principal component analysis (PCA), a traditional transform-based lossy compression algorithm; since the execution time of ELM is very short, images can be compressed at a much faster speed. Finally, the underwater color images are recovered from the compressed representations.
Findings: Experimental results show that the proposed scheme not only compresses images at a much faster speed but also maintains acceptable perceptual quality in the reconstructed images.
Originality/value: This paper proposes a fast machine compression scheme that combines the traditional PCA compression algorithm with the ELM algorithm. Moreover, a pre-processing module and a D&C image compression framework are specially designed for underwater images.
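As a reference point for the transform stage the paper replaces, here is a minimal block-wise PCA compressor and reconstructor. This is the PCA baseline the abstract names, not the authors' ELM pipeline or D&C framework; the function name and the `block`/`k` parameters are illustrative:

```python
import numpy as np

def pca_block_compress(img, block=8, k=8):
    """Block-wise PCA compression sketch: each block x block tile becomes a
    k-dimensional coefficient vector in a learned basis, and the image is
    reconstructed from those coefficients."""
    h, w = img.shape
    # tile the image into rows of flattened blocks
    blocks = (img.reshape(h // block, block, w // block, block)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, block * block))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # principal axes from the SVD of the centered block matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                       # top-k principal components
    coeffs = centered @ basis.T          # compressed representation
    # decode: project back and re-tile into an image
    recon = (coeffs @ basis + mean).reshape(h // block, w // block, block, block)
    recon = recon.transpose(0, 2, 1, 3).reshape(h, w)
    return coeffs, recon
```

Keeping fewer components (`k` small) trades reconstruction quality for compression; the paper's point is that training an ELM to play this role is much faster than computing the transform classically.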


2021 ◽  
Vol 5 (2) ◽  
pp. 31
Author(s):  
Olga Svynchuk ◽  
Oleg Barabash ◽  
Joanna Nikodem ◽  
Roman Kochan ◽  
Oleksandr Laptiev

The rapid growth of geographic information technologies for processing and analyzing spatial data has led to a significant increase in the role of geographic information systems in various fields of human activity. However, solving complex problems requires large amounts of spatial data, efficient storage on on-board recording media, and transmission via communication channels. This creates the need for new, effective methods of compressing and transmitting Earth remote-sensing data. The possibility of using fractal functions to process images transmitted via the satellite radio channel of a spacecraft is considered. The information obtained by such a system takes the form of aerospace images that must be processed and analyzed to extract information about the objects they display. An algorithm for image encoding and decoding is investigated that uses a class of continuous functions depending on a finite set of parameters and possessing fractal properties. The mathematical model used in fractal image compression is called an iterated function system (IFS). The encoding process is time-consuming because it performs a large number of transformations and mathematical calculations; in return, a high degree of image compression is achieved. This class of functions has an interesting property: knowing the initial set of parameters, one can easily compute the values of the function, but given only the values of the function, it is very difficult to recover the initial parameters, because there is a huge number of such combinations. Therefore, to decode the image, one must know the fractal codes that restore the raster image.
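The asymmetry described above (evaluating the maps is easy, inverting them is hard) shows up directly in decoding: given the fractal codes, the image is recovered by simple fixed-point iteration of the contractive maps. The sketch below uses a generic textbook block-IFS code format, assumed for illustration, not the paper's specific scheme:

```python
import numpy as np

def fractal_decode(codes, size, range_size=4, iters=20):
    """Fixed-point decoding of a block-based IFS code. Each entry in `codes`
    maps a range block at (ri, rj) to (domain_row, domain_col, scale, offset);
    domain blocks are twice the range size and are average-pooled down.
    Because every map is contractive (|scale| < 1), iterating from any
    starting image converges to the encoded attractor."""
    img = np.zeros((size, size))
    d = range_size * 2
    for _ in range(iters):
        out = np.empty_like(img)
        for (ri, rj), (di, dj, s, o) in codes.items():
            dom = img[di:di + d, dj:dj + d]
            # average-pool the domain block down to the range size
            small = dom.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
            out[ri:ri + range_size, rj:rj + range_size] = s * small + o
        img = out
    return img
```

Encoding is the expensive inverse problem: finding, for every range block, a domain block and affine parameters whose iteration reproduces the image.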


2012 ◽  
Vol 488-489 ◽  
pp. 1587-1591
Author(s):  
Amol G. Baviskar ◽  
S. S. Pawale

Fractal image compression is a lossy compression technique developed in the early 1990s. It exploits the local self-similarity present in an image and finds a contractive affine mapping (fractal transform) T such that the fixed point of T is close to the given image in a suitable metric. The technique has generated much interest due to its promise of high compression ratios with good decompression quality. Image encoding based on the fractal block-coding method relies on the assumption that image redundancy can be efficiently exploited through block self-transformability. It has shown promise in producing high-fidelity, resolution-independent images, and the low complexity of the decoding process also suggests use in real-time applications. Unfortunately, the high encoding time, in combination with patents on the technology, has discouraged its adoption. In this paper, we propose an efficient domain search technique using feature extraction for fractal image encoding, which reduces encoding and decoding time while improving the quality of the compressed image.
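One way to realise a feature-guided domain search is to bucket downsampled domain blocks by a cheap feature and compare each range block only against its own bucket, instead of against every domain. The sketch below uses quantized variance as a stand-in feature, since the abstract does not name the features the authors extract; all names and parameters are illustrative:

```python
import numpy as np

def encode_ranges(img, range_size=4):
    """Feature-guided fractal encoding sketch. Domain blocks (twice the
    range size, average-pooled down) are indexed by a quantized-variance
    feature; each range block searches only its matching bucket for the
    best least-squares affine match r ~= s * domain + o."""
    d = range_size * 2
    h, w = img.shape
    # build the feature-indexed domain pool
    pool = {}
    for di in range(0, h - d + 1, d):
        for dj in range(0, w - d + 1, d):
            small = (img[di:di + d, dj:dj + d]
                     .reshape(range_size, 2, range_size, 2).mean(axis=(1, 3)))
            key = int(small.var() * 10)          # quantized feature bucket
            pool.setdefault(key, []).append((di, dj, small))
    codes = {}
    for ri in range(0, h, range_size):
        for rj in range(0, w, range_size):
            r = img[ri:ri + range_size, rj:rj + range_size]
            key = int(r.var() * 10)
            best = (np.inf, None)
            # search the matching bucket; fall back to all domains if empty
            for di, dj, small in pool.get(key, sum(pool.values(), [])):
                # least-squares scale and offset for r ~= s*small + o
                s_num = ((small - small.mean()) * (r - r.mean())).sum()
                s_den = ((small - small.mean()) ** 2).sum() or 1.0
                s = np.clip(s_num / s_den, -1.0, 1.0)  # keep the map contractive
                o = r.mean() - s * small.mean()
                err = ((s * small + o - r) ** 2).sum()
                if err < best[0]:
                    best = (err, (di, dj, float(s), float(o)))
            codes[(ri, rj)] = best[1]
    return codes
```

Restricting the comparison to one bucket is what cuts the encoding time; the quality claim in the paper depends on the discriminative power of the chosen features.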

