data compaction
Recently Published Documents

TOTAL DOCUMENTS: 49 (five years: 1)
H-INDEX: 7 (five years: 0)

Author(s): Tiantian Zhu, Jiayu Wang, Linqi Ruan, Chunlin Xiong, Jinkai Yu, ...

Entropy, 2020, Vol. 22 (9), pp. 919
Author(s): Grzegorz Ulacha, Ryszard Stasiński, Cezary Wernik

In this paper, the most efficient (from the standpoint of data compaction) lossless image coding method currently available is presented. Although computationally complex, the algorithm is still more time-efficient than its main competitors. The presented cascaded method is based on the Weighted Least Squares (WLS) technique with many improvements; e.g., its main stage is followed by a two-step NLMS predictor ending with Context-Dependent Constant Component Removing. The prediction error is coded by a highly efficient binary context arithmetic coder. The performance of the new algorithm is compared with that of other coders on a set of widely used benchmark images.
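The NLMS prediction stage can be illustrated with a minimal sketch. This is not the authors' implementation; the predictor order, step size, and the 1-D causal context are illustrative assumptions:

```python
import numpy as np

def nlms_residuals(signal, order=4, mu=0.5, eps=1e-6):
    """Causal NLMS adaptive predictor: returns prediction residuals.

    Because the prediction at step n uses only already-seen samples,
    a decoder replaying the same loop recovers the signal exactly from
    the residuals, which is what makes such a scheme lossless.
    """
    w = np.zeros(order)                     # adaptive predictor weights
    residuals = np.zeros(len(signal))
    for n in range(len(signal)):
        # context vector: previous `order` samples, zero-padded at start
        x = np.array([signal[n - k - 1] if n > k else 0.0
                      for k in range(order)])
        e = signal[n] - w @ x               # prediction error
        residuals[n] = e
        w += mu * e * x / (x @ x + eps)     # normalized LMS update
    return residuals
```

On a predictable input, the residuals carry much less energy than the signal itself, so the downstream arithmetic coder sees a far more compressible stream.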


Author(s):  
R. Raj Kumar ◽  
P. Viswanath ◽  
C. Shoba Bindu

A large dataset is not preferable, as it increases the computational burden on the methods operating over it. Given a large dataset, it is natural to ask whether one can generate a smaller dataset that is either a subset of the original dataset or a set of patterns extracted from it (in either case, of smaller cardinality than the original). The patterns in this smaller set are representatives of the patterns in the original dataset, and they form the Prototype set. Forming a Prototype set falls broadly into two categories: 1) a Prototype set that is a proper subset of the original dataset; 2) a Prototype set containing patterns extracted by using the patterns in the original dataset. The same reduction can also be applied to the features of the training set. The authors discuss reducing datasets in both directions. These methods are well known as Data Compaction Techniques.
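As a sketch of the first category (a Prototype set that is a proper subset of the data), the classic single-scan leader algorithm keeps the first pattern of each cluster as its prototype. The distance threshold here is an illustrative parameter, not taken from the text:

```python
import numpy as np

def leader_prototypes(X, threshold):
    """Single-scan leader clustering: each pattern either joins the
    cluster of an existing leader (within `threshold`) or becomes a
    new leader itself.  The leaders form the Prototype set, which is
    a proper subset of the original dataset.
    """
    leaders = []
    for x in X:
        if all(np.linalg.norm(x - l) > threshold for l in leaders):
            leaders.append(x)
    return np.array(leaders)
```

For example, four 2-D patterns lying in two well-separated groups reduce to two prototypes, one per group, in a single pass over the data.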


Data Mining, 2013, pp. 734-750
Author(s):  
T. Ravindra Babu ◽  
M. Narasimha Murty ◽  
S. V. Subrahmanya

Data Mining deals with efficient algorithms for processing large data. When such algorithms are combined with data compaction, they lead to superior performance. One approach to dealing with large data is to work with representatives of the data instead of the entire dataset; the representatives should preferably be generated with minimal data scans. In the current chapter we discuss methods of lossy and non-lossy data compression combined with clustering and classification of large datasets, and we demonstrate the working of such schemes on two large datasets.
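As a sketch of the non-lossy side (the specific encoding used in the chapter may differ), run-length encoding compacts binary patterns while remaining exactly invertible, so clustering or classification can work from the compact form without any information loss:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as [value, run-length] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([b, 1])     # start a new run
    return runs

def rle_decode(runs):
    """Invert rle_encode: expand [value, run-length] pairs back to bits."""
    return [b for b, n in runs for _ in range(n)]
```

A lossy variant would quantize or prune the patterns before encoding, trading exactness for further compaction.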

