Data compression, data association and reduced complexity SLAM techniques for UUVs during transit

Author(s):  
Larry Seiler ◽  
Daqi Lin ◽  
Cem Yuksel

We propose a method to reduce the footprint of compressed data by using modified virtual address translation to permit random access to the data. This extends our prior work on using page translation to perform automatic decompression and deswizzling upon accesses to fixed rate lossy or lossless compressed data. Our compaction method allows a virtual address space the size of the uncompressed data to be used to efficiently access variable-size blocks of compressed data. Compression and decompression take place between the first and second level caches, which allows fast access to uncompressed data in the first level cache and provides data compaction at all other levels of the memory hierarchy. This improves performance and reduces power relative to compressed but uncompacted data. An important property of our method is that compression, decompression, and reallocation are automatically managed by the new hardware without operating system intervention and without storing compression data in the page tables. As a result, although some changes are required in the page manager, it does not need to know the specific compression algorithm and can use a single memory allocation unit size. We tested our method with two sample CPU algorithms. When performing depth buffer occlusion tests, our method reduces the memory footprint by 3.1x. When rendering into textures, our method reduces the footprint by 1.69x before rendering and 1.63x after. In both cases, the power and cycle time are better than for uncompacted compressed data, and significantly better than for accessing uncompressed data.
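The compaction idea above can be illustrated with a small software sketch: fixed-size uncompressed blocks are compressed into variable-size blocks, and a per-block translation table (standing in for the paper's modified page translation) maps an uncompressed virtual address to the compressed block that backs it, so random access decompresses only one block. The block size, the use of `zlib` in place of a fixed-rate hardware codec, and the table layout are all illustrative assumptions, not the paper's hardware design.

```python
import zlib

BLOCK = 4096  # uncompressed block size (illustrative)

def compact(data: bytes):
    """Compress each fixed-size uncompressed block into a variable-size
    compressed block, recording (offset, length) per block in a
    translation table analogous to the modified page translation."""
    store = bytearray()
    table = []  # one entry per uncompressed block
    for i in range(0, len(data), BLOCK):
        comp = zlib.compress(data[i:i + BLOCK])
        table.append((len(store), len(comp)))
        store.extend(comp)
    return bytes(store), table

def read(store: bytes, table, addr: int) -> int:
    """Random access at an uncompressed virtual address: the table
    locates the compressed block, which is decompressed on demand."""
    off, length = table[addr // BLOCK]
    block = zlib.decompress(store[off:off + length])
    return block[addr % BLOCK]
```

In this sketch the compacted store only occupies as much space as the compressed blocks require, while addresses still span the full uncompressed size, mirroring the footprint reduction the paper reports.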


2016 ◽  
Vol 15 (8) ◽  
pp. 6991-6998
Author(s):  
Idris Hanafi ◽  
Amal Abdel-Raouf

The increasing amount and size of data handled by analytic applications running on Hadoop has created a need for faster data processing. One effective method for handling big data is compression. Data compression not only makes network I/O faster, but also provides better utilization of resources. However, this approach defeats one of Hadoop's main purposes: the parallelism of map and reduce tasks. The number of map tasks created is determined by the size of the file, so compressing a large file reduces the number of mappers, which in turn decreases parallelism. Consequently, standard Hadoop takes longer to process compressed data. In this paper, we propose the design and implementation of a Parallel Compressed File Decompressor (P-Codec) that improves the performance of Hadoop when processing compressed data. P-Codec includes two modules. The first module decompresses data retrieved by a data node during the phase of uploading the data to the Hadoop Distributed File System (HDFS); this removes the burden of decompression from the MapReduce phase and so reduces job runtime. The second module is a decompressed map task divider that increases parallelism by dynamically changing the map task split sizes based on the size of the final decompressed block. Our experimental results using five different MapReduce benchmarks show an average improvement of approximately 80% compared to standard Hadoop.
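The split-divider idea in the second module can be sketched as follows: the mapper count is derived from the decompressed size rather than the compressed file size, restoring the parallelism lost when a compressed file would otherwise map to a single task. The function name, the 128 MB target split size, and the even-division policy are assumptions for illustration, not P-Codec's actual implementation.

```python
def plan_splits(decompressed_size: int,
                target_split: int = 128 * 1024 * 1024):
    """Divide a decompressed block into map-task splits so that the
    number of mappers tracks the decompressed size, not the (much
    smaller) compressed file size."""
    n = max(1, -(-decompressed_size // target_split))  # ceiling division
    base = decompressed_size // n
    splits = [base] * n
    splits[-1] += decompressed_size - base * n  # absorb the remainder
    return splits
```

A 300 MB decompressed block with a 128 MB target yields three mappers instead of the single mapper a compressed input would get under standard Hadoop.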


2020 ◽  
Vol 2 (2) ◽  
pp. 109-114
Author(s):  
Nadia Fariza Rizky ◽  
Surya Darma Nasution ◽  
Fadlina Fadlina

Large file sizes are a problem not only for storage, but also for communication between computers: data with a larger size takes longer to transfer than data with a smaller size. Data compression can overcome this problem. Compression that reduces the size of the data brings the advantages of reducing the use of storage media space and accelerating the transfer of data between storage media. Compression is a method used to reduce the number of bits needed to represent the original data. In this study, the compression algorithm applied is the Elias delta codes algorithm. After obtaining the compression results, the method is implemented in Microsoft Visual Studio 2008.
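For reference, the Elias delta code mentioned above can be sketched in a few lines of Python. This is the textbook formulation of the code (gamma-code the bit-length of n, then append n's binary digits without the leading 1), not the paper's implementation.

```python
def elias_delta_encode(n: int) -> str:
    """Elias delta code of a positive integer n, as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]
    length_bits = bin(len(binary))[2:]           # binary of N = bit-length of n
    gamma = '0' * (len(length_bits) - 1) + length_bits  # gamma code of N
    return gamma + binary[1:]                    # n without its leading 1

def elias_delta_decode(code: str) -> int:
    """Inverse: read the gamma-coded length, then rebuild n."""
    zeros = 0
    while code[zeros] == '0':
        zeros += 1
    length = int(code[zeros:2 * zeros + 1], 2)   # recover N
    rest = code[2 * zeros + 1:]
    return int('1' + rest[:length - 1], 2)
```

For example, 10 is binary `1010` (length 4, gamma-coded as `00100`), so its delta code is `00100010`; small integers get short codes, which is what makes the scheme useful for compression.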


2018 ◽  
Vol 2 (1) ◽  
Author(s):  
Rizka Dwi Pratiwi ◽  
Surya Darma Nasution ◽  
Fadlina Fadlina

The increasing volume of data exchange transactions, both online and offline, raises concerns for some parties: large data sizes waste memory and slow the processes of data transfer and delivery. For this reason, a technique is needed to reduce the size of the data. This technique is called compression, better known as data compression. Data compression is the process of converting a set of data into coded form to save data storage requirements. The Fixed Length Binary Encoding (FLBE) algorithm is a lossless technique that eliminates no information at all; it only represents repeated information more compactly. The results obtained from applying the FLBE algorithm to compression and decompression include the compressed size, the compression ratio, and the compression and decompression times. The experiments show that data that originally had a larger size can be compressed well when the algorithm is applied to text files.
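A lossless fixed-length binary encoding of the kind described above can be sketched as follows: every distinct character in the text is assigned a code of ceil(log2(k)) bits, where k is the alphabet size, instead of the 8 bits of an ASCII byte. The function names and code-assignment order are illustrative assumptions, not the paper's FLBE implementation.

```python
from math import ceil, log2

def flbe_compress(text: str):
    """Assign each distinct character a fixed-width binary code of
    ceil(log2(alphabet size)) bits and encode the text as a bit string."""
    symbols = sorted(set(text))
    width = max(1, ceil(log2(len(symbols))))
    code = {s: format(i, f'0{width}b') for i, s in enumerate(symbols)}
    bits = ''.join(code[c] for c in text)
    return bits, code

def flbe_decompress(bits: str, code: dict) -> str:
    """Lossless inverse: slice the bit string into fixed-width codes."""
    rev = {v: k for k, v in code.items()}
    width = len(next(iter(rev)))
    return ''.join(rev[bits[i:i + width]]
                   for i in range(0, len(bits), width))
```

A text over a 4-character alphabet needs only 2 bits per character, a compression ratio of 2/8 = 0.25 against 8-bit characters, and decompression recovers the original exactly, illustrating the lossless property the abstract emphasizes.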

