Data structures and the time complexity of ray tracing

1987 ◽  
Vol 3 (4) ◽  
pp. 201-213 ◽  
Author(s):  
Isaac D. Scherson ◽  
Elisha Caspary
2018 ◽  
Vol 12 (11) ◽  
pp. 387
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid

Multimedia is a highly competitive world; one of the properties reflected in this competition is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and of unnecessary data structures. The paper shows a comparison with other compression algorithms such as Shannon–Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), Burrows-Wheeler Transform, Move-to-Front (MTF) Transform, Haar, wavelet tree, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
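To make the comparison concrete, here is a minimal sketch of run-length encoding, one of the baseline algorithms the abstract compares against. This is an illustrative O(n) lossless scheme, not the CRUSH algorithm itself, whose internals are not given in the abstract.

```python
# Minimal run-length encoding (RLE) sketch: a baseline lossless
# compression algorithm from the abstract's comparison list.
# It is NOT the CRUSH algorithm; CRUSH's internals are not described here.

def rle_encode(data: str) -> list[tuple[str, int]]:
    """Encode a string as (symbol, run_length) pairs in a single O(n) pass."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Invert the encoding losslessly."""
    return "".join(ch * count for ch, count in runs)

if __name__ == "__main__":
    original = "aaabbbbcd"
    encoded = rle_encode(original)
    print(encoded)  # [('a', 3), ('b', 4), ('c', 1), ('d', 1)]
    assert rle_decode(encoded) == original
```

RLE compresses well only when the input contains long runs of repeated symbols, which is one reason the more elaborate schemes in the comparison list exist.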


Sorting is an essential concept in the study of data structures, and many sorting algorithms can sort the elements of a given array or list. Counting sort runs in linear time, O(n + k), where k is the range of the keys, which is among the best time complexities achievable. However, the counting sort algorithm only works for positive integers. In this paper, an extension of the counting sort algorithm is proposed that can sort real numbers as well as integers (both positive and negative).
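One common way to lift the positive-integer restriction is to shift every key by the minimum value so that all array indices become non-negative. The sketch below illustrates that idea for negative integers; it is an assumption-based illustration, not necessarily the extension proposed in the paper (whose details are not given in the abstract), and handling real numbers would additionally require mapping values to discrete buckets.

```python
# Counting sort extended to negative integers by offsetting keys
# with the minimum value. Illustrative sketch only -- the paper's
# actual extension (which also covers real numbers) is not detailed
# in the abstract.

def counting_sort(arr: list[int]) -> list[int]:
    """Sort integers (positive or negative) in O(n + k) time,
    where k = max(arr) - min(arr) + 1."""
    if not arr:
        return []
    lo, hi = min(arr), max(arr)
    counts = [0] * (hi - lo + 1)      # one slot per possible key
    for x in arr:
        counts[x - lo] += 1           # shift by lo so the index is >= 0
    out: list[int] = []
    for i, c in enumerate(counts):
        out.extend([i + lo] * c)      # shift back when emitting
    return out

print(counting_sort([3, -1, 4, -1, 0, 2]))  # [-1, -1, 0, 2, 3, 4]
```

The shift costs only two extra passes (for min and max), so the overall complexity stays O(n + k).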


2019 ◽  
Vol 18 (4) ◽  
pp. 449-454
Author(s):  
Ya. E. Romm ◽  
D. A. Chabanyuk

Introduction. Algorithms for the parallel construction of a binary tree are developed. The algorithms are based on sorting and are described in constructive form. For an N-element set, the time complexity has the estimates T(R) = O(1) and T(R) = O(log₂N), where R = (N² - N)/2 is the number of processors. The tree is built with the uniqueness property, and the algorithms are invariant with respect to the type of the input sequence. The objective of the work is to develop and study ways of accelerating the organization and transformation of tree-like data structures on the basis of stable, maximally parallel sorting algorithms, for application to the basic information-retrieval operations on databases.

Materials and Methods. A one-to-one relation between the input element set and the binary tree built for it is established using a stable address sorting. The sorting provides maximum concurrency and, in operator form, establishes a one-to-one mapping of input and output indices. On this basis, methods for the mutual transformation of binary data structures are developed.

Research Results. An efficient parallel algorithm for constructing a binary tree based on address sorting, with time complexity T(N²) = O(log₂N), is obtained. The algorithm differs from well-known analogues in its structure and in its logarithmic time complexity estimate, which makes it possible to achieve an acceleration of order O(N^α), α ≥ 1, over those analogues. As an advanced version, a modification of the algorithm is suggested that provides maximally parallel construction of the binary tree based on stable address sorting and a priori calculation of the indices of the stored subtree roots. The modification differs in structure and has the time complexity estimate T(1) = O(1). A similar estimate is achieved in a sequential version of the modified algorithm, which allows an acceleration of order O(N^α), α > 1, over known analogues.

Discussion and Conclusions. The results obtained are focused on the creation of effective methods for dynamic database processing. The proposed methods and algorithms can form an algorithmic basis for advanced deterministic search on relational databases and information systems.
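The core idea of the abstract is the one-to-one correspondence between a sorted sequence and a uniquely determined binary tree. The sequential sketch below illustrates that correspondence by building the unique median-rooted binary search tree from a sorted sequence; it is an illustration of the sorting-to-tree link only, not the authors' parallel address-sorting algorithm.

```python
# Sequential sketch of the sorting-to-tree correspondence: a sorted
# key sequence determines a unique median-rooted binary search tree.
# This illustrates the one-to-one mapping discussed in the abstract,
# not the authors' parallel address-sorting construction.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build_tree(sorted_keys: list[int]) -> Optional[Node]:
    """Build the unique median-rooted BST for a sorted key sequence."""
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2
    return Node(
        key=sorted_keys[mid],
        left=build_tree(sorted_keys[:mid]),
        right=build_tree(sorted_keys[mid + 1:]),
    )

def inorder(node: Optional[Node]) -> list[int]:
    """In-order traversal recovers the sorted sequence exactly."""
    if node is None:
        return []
    return inorder(node.left) + [node.key] + inorder(node.right)

root = build_tree(sorted([7, 2, 9, 4, 1]))
print(inorder(root))  # [1, 2, 4, 7, 9]
```

Because the construction is invariant with respect to the order of the raw input (only the sorted sequence matters), any two inputs with the same multiset of keys yield the same tree, which is the uniqueness property the abstract refers to.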


2018 ◽  
Vol 12 (11) ◽  
pp. 406
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid



2015 ◽  
Vol 57 (5) ◽  
pp. 159-176 ◽  
Author(s):  
Alfonso Breglia ◽  
Amedeo Capozzoli ◽  
Claudio Curcio ◽  
Angelo Liseno
