Generalized nets model of the LPF-algorithm of the crossbar switch node for determining LPF-execution time complexity

2021 ◽  
Author(s):  
Tasho D. Tashev ◽  
Marin B. Marinov ◽  
Radostina P. Tasheva ◽  
Alexander K. Alexandrov


Author(s):  
T.V. Vijay Kumar ◽  
Aloke Ghoshal

Greedy-based approaches to view selection select, at each step, a beneficial view that fits within the space available for view materialization. Most of these approaches are based on the HRU algorithm, which uses a multidimensional lattice framework to determine a good set of views to materialize. The HRU algorithm exhibits high run-time complexity, as the number of possible views is exponential in the number of dimensions. The PGA algorithm provides a scalable alternative by selecting views for materialization in time polynomial in the number of dimensions. This paper compares the HRU and PGA algorithms. It was experimentally observed that the PGA algorithm, compared with the HRU algorithm, achieves improved execution time with lower memory and CPU usage. The HRU algorithm, however, has an edge over the PGA algorithm in the quality of the views selected for materialization.
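The greedy strategy described above can be illustrated with a minimal sketch (the view sizes and the benefit function below are hypothetical placeholders, not the benefit model of the HRU or PGA papers):

```python
# Illustrative greedy view selection: at each step, pick the unselected view
# with the highest benefit that still fits in the remaining space.

def greedy_view_selection(views, size, benefit, space):
    """views: candidate view ids; size[v]: storage cost of v;
    benefit(v, selected): saving from materializing v given the
    already-selected set; space: total space budget."""
    selected = []
    remaining = space
    while True:
        best, best_gain = None, 0
        for v in views:
            if v in selected or size[v] > remaining:
                continue
            gain = benefit(v, selected)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:          # nothing fits or nothing is beneficial
            break
        selected.append(best)
        remaining -= size[best]
    return selected

# Toy instance: benefits independent of earlier selections.
picked = greedy_view_selection(
    ['a', 'b', 'c'],
    {'a': 5, 'b': 3, 'c': 4},
    lambda v, sel: {'a': 10, 'b': 4, 'c': 6}[v],
    space=9)
# picked == ['a', 'c']
```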


2013 ◽  
Vol 427-429 ◽  
pp. 2787-2790
Author(s):  
Jun Guo ◽  
Cang Song Zhang ◽  
Jiao Cui

This paper introduces parallel computing techniques to improve the random walk algorithm. The random walk problem is first described by a formal model, and the parallel features of the random walk algorithm are then discussed in detail. A parallel random walk algorithm is proposed and applied to the analysis of VLSI power grids. The time complexity and the main factors affecting the execution time of the algorithm are analyzed carefully. The experimental results show that parallel computing techniques can improve the random walk algorithm effectively.
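A much-simplified single-node sketch of the random-walk idea for resistive networks (an illustration only, not the paper's parallel algorithm, and it ignores current sources): the voltage at a node equals the expected voltage of the fixed-voltage node reached by a random walk that moves to each neighbour with probability proportional to the connecting conductance. Since walks are independent, they parallelize trivially.

```python
import random

def walk_voltage(node, neighbours, conductance, boundary,
                 n_walks=20000, rng=None):
    """Estimate the voltage at `node` by Monte Carlo random walks.
    neighbours[u]: nodes adjacent to u; conductance[(u, v)]: edge
    conductance; boundary[u]: known voltage at fixed-voltage nodes."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_walks):
        u = node
        while u not in boundary:
            ns = neighbours[u]
            weights = [conductance[(u, v)] for v in ns]
            u = rng.choices(ns, weights=weights)[0]
        total += boundary[u]
    return total / n_walks

# One interior node between two equal-conductance boundary nodes:
# its voltage is the average of the boundary voltages, here ~0.5.
v = walk_voltage('m', {'m': ['a', 'b']},
                 {('m', 'a'): 1.0, ('m', 'b'): 1.0},
                 {'a': 1.0, 'b': 0.0})
```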


1993 ◽  
Vol 03 (01) ◽  
pp. 53-58 ◽  
Author(s):  
HESHAM H. ALI ◽  
HESHAM EL-REWINI

Papadimitriou and Yannakakis showed that unit-execution-time tasks in interval orders can be scheduled on N processors in linear time when communication cost is ignored, the objective being to minimize the schedule length. They also showed that the generalization of this problem to arbitrary execution times is NP-complete. In this paper, we study the problem of scheduling task graphs with communication on N processors when the task graph is an interval order. We prove that this scheduling problem can be solved in polynomial time when the execution cost of the system tasks is identical and equal to the communication cost between any pair of processors. We introduce an O(Ne) algorithm to minimize the schedule length, where e is the number of arcs in the interval order.


Author(s):  
Baudouin Le Charlier ◽  
Minh Thanh Khong ◽  
Christophe Lecoutre ◽  
Yves Deville

The smart table constraint is a powerful modeling tool that has been introduced recently. It allows the user to compactly represent a number of well-known (global) constraints and, more generally, arbitrarily structured constraints, especially when disjunction is at stake. In many problems, some constraints are given in the basic and simple form of tables explicitly listing the allowed combinations of values. In this paper, we propose an algorithm that automatically converts any (ordinary) table into a compact smart table. Its time complexity is shown to be quadratic in the size of the input table. Experimental results demonstrate its compression efficiency on many constraint cases while showing a reasonable execution time. We then show that running filtering algorithms on the resulting smart table is more efficient than running state-of-the-art filtering algorithms on the initial table.


2020 ◽  
Vol 75 ◽  
pp. 04019
Author(s):  
Oleksandr Mitsa ◽  
Yurii Horoshko ◽  
Serhii Vapnichnyi

The article discusses three approaches to reducing the runtime of programs that solve Olympiad tasks in computer science involving sequences or matrices. The first approach is based on representing some sequences in matrix form, so that a program computing the members of the sequence has asymptotics equal to the time complexity of the exponentiation algorithm, i.e. O(log n). The second approach is to upgrade known code to obtain a significant reduction of the program's runtime; this approach is especially important for scientists who write code for research and are faced with matrix multiplication operations. The third approach reduces time complexity by searching for regularities; an author's task is presented and solved with this approach.
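The first approach can be illustrated with the classic example of this technique (a standard textbook instance, not necessarily the authors' task): the Fibonacci recurrence written in matrix form, so that F(n) is obtained with O(log n) 2×2 matrix multiplications via fast exponentiation.

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """Binary (fast) exponentiation: O(log n) multiplications."""
    result = [[1, 0], [0, 1]]          # identity matrix
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def fib(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

# fib(10) -> 55; large indices are reached in ~log2(n) steps.
```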


2021 ◽  
Vol 14 (2) ◽  
pp. 431-450
Author(s):  
Armend Salihu ◽  
Fahri Marevci

In this paper, we present an approach to calculating rectangular determinants, where in addition to the mathematical formula, we also provide a computer algorithm for their calculation. First, we present a method similar to the Sarrus rule for calculating the rectangular determinant of order 2 × 3. Second, we present an approach for calculating rectangular determinants of order m × n by adding a row whose elements all equal one (1) at any row position, as well as an application of Chio's rule for calculating rectangular determinants. Third, we determine the time complexity and compare the computer execution times of calculating rectangular determinants with the presented algorithms against an algorithm based on the Laplace method.


Author(s):  
Maria-Esther Vidal ◽  
Amadís Martínez ◽  
Edna Ruckhaus ◽  
Tomas Lampo ◽  
Javier Sierra

In the context of the Semantic Web, different approaches have been defined to represent RDF documents, and the selected representation affects the storage and time complexity of RDF data recovery and query processing tasks. This chapter addresses the problem of efficiently querying and storing RDF documents, and presents an alternative representation of RDF data, Bhyper, which is based on hypergraphs. Additionally, access and optimization techniques to execute queries efficiently at low cost are defined on top of this hypergraph-based representation. The chapter's authors have empirically studied the performance of the Bhyper-based techniques, and their experimental results show that the proposed hypergraph-based formalization reduces RDF data access time as well as the space needed to store the Bhyper structures, while the query execution time of state-of-the-art RDF engines can be sped up by up to two orders of magnitude.


2014 ◽  
Vol 25 (02) ◽  
pp. 219-246 ◽  
Author(s):  
PAI-CHOU WANG

Reducts preserve the original classification properties of a table using a minimal number of attributes. Dynamic reducts are the most stable reducts under random sampling of the original decision table, and they have been proposed for classifying unseen cases. Classical reduct generation methods can be applied to compute dynamic reducts, but the time complexity of computing dynamic reducts is rarely discussed. This paper proposes a cascading hash function with which a dynamic reduct can be derived in O(m²n) time with O(mn) space, where m and n are the total numbers of attributes and instances of the table. The core of dynamic reducts is also discussed; its computation takes O(mn) time with O(mn) space. Sixteen UCI datasets are used to compute (F, ε)-dynamic reducts for ε = 1, and the results are compared to the Rough Set Exploration System (RSES). The results show that the execution time of generating dynamic reducts using cascading hash tables is up to 1700 times faster than RSES. Besides being efficient, our algorithms are also very easy to implement and applicable to any system.


Author(s):  
Suresha .M ◽  
. Sandeep

Local features are of great importance in computer vision, where feature detection and feature matching are two important tasks. This paper concentrates on the problem of bird recognition using local features. The investigation evaluates the local features SURF, FAST, and Harris on blurred and illumination-varying images. The FAST and Harris corner algorithms give lower accuracy on blurred images. The SURF algorithm gives the best results for blurred images because it identifies the strongest local features and has low time complexity; the experimental demonstration shows that SURF is robust to blur, while FAST is suitable for images with varying illumination.

