A Comparison of the DES and Dömösi Cryptosystems

Triangle ◽  
2018 ◽  
pp. 81
Author(s):  
Zoltan Pal Mecsei

In this paper we compare the well-known DES cryptosystem with the recently introduced Dömösi system, which is based on finite automata. We carry out a time complexity analysis of both algorithms. We show that without making use of an auxiliary matrix the Dömösi cryptosystem is slower than DES. However, the use of auxiliary matrices makes the former perform better than its well-known counterpart for some block lengths.
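A per-block timing comparison of the kind described above can be reproduced with a small micro-benchmark harness. The sketch below is illustrative only: `encrypt_block` is a hypothetical callable standing in for a DES or Dömösi block encryption routine, and the XOR stand-in is a placeholder, not either cipher.

```python
import time

def mean_block_time(encrypt_block, block, iterations=1000):
    """Return the mean wall-clock time (seconds) of one call to encrypt_block."""
    start = time.perf_counter()
    for _ in range(iterations):
        encrypt_block(block)
    return (time.perf_counter() - start) / iterations

# Placeholder stand-in cipher (NOT DES or Dömösi), just to exercise the harness.
xor_stub = lambda b: bytes(x ^ 0x5A for x in b)

t = mean_block_time(xor_stub, b"\x00" * 8)
```

In a real comparison, the harness would be run over the range of block lengths of interest, since the abstract reports that the relative ranking of the two ciphers depends on block length.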

2021 ◽  
Vol 7 ◽  
pp. e727
Author(s):  
Eko Hadiyono Riyadi ◽  
Agfianto Eko Putra ◽  
Tri Kuntoro Priyambodo

Background: Data transmissions using the DNP3 protocol over the internet in SCADA systems are vulnerable to interruption, interception, fabrication, and modification through man-in-the-middle (MITM) attacks. This research aims to improve the security of DNP3 data transmissions and protect them from MITM attacks. Methods: This research describes a proposed new method of improving DNP3 security by introducing BRC4 encryption. This combines Beaufort encryption, in which plaintext is encrypted by applying a poly-alphabetic substitution code based on the Beaufort table, subtracting keys from the plaintext, with RC4 encryption, a stream cipher with a variable-length key algorithm. This research contributes to improving the security of data transmission and accelerating key generation. Results: Tests were carried out by key space analysis, correlation coefficient analysis, information entropy analysis, visual analysis, and time complexity analysis. The results show that to secure encryption processes from brute-force attacks, a key of at least 16 characters is necessary. The IL data correlation values were IL1 = −0.010, IL2 = 0.006, and IL3 = 0.001, indicating that the proposed method (BRC4) is better than the Beaufort or RC4 methods in isolation. Meanwhile, the information entropy values from the IL data were IL1 = 7.84, IL2 = 7.98, and IL3 = 7.99, likewise indicating that the proposed method is better than the Beaufort or RC4 methods in isolation. Both results also show that the proposed method is secure from MITM attacks. Visual analysis, using a histogram, shows that the ciphertext is more uniformly distributed than the plaintext, and thus secure from MITM attacks. The time complexity analysis shows that the proposed method's algorithm has linear complexity.
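The combination of the two ciphers can be sketched as follows. This is a minimal byte-level illustration under stated assumptions, not the authors' exact BRC4 construction: a standard RC4 keystream supplies the key bytes, and the Beaufort step subtracts each plaintext byte from the corresponding key byte modulo 256.

```python
def rc4_keystream(key, n):
    """Standard RC4: key-scheduling (KSA) then pseudo-random generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                          # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                            # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

def brc4_encrypt(plaintext, key):
    """Beaufort step per byte: c = (k - p) mod 256, with an RC4 keystream as k."""
    ks = rc4_keystream(key, len(plaintext))
    return bytes((k - p) % 256 for p, k in zip(plaintext, ks))

def brc4_decrypt(ciphertext, key):
    """The Beaufort step is an involution: p = (k - c) mod 256."""
    ks = rc4_keystream(key, len(ciphertext))
    return bytes((k - c) % 256 for c, k in zip(ciphertext, ks))
```

A convenient property of the Beaufort step is that encryption and decryption are the same operation, since k − (k − p) ≡ p (mod 256); only the keystream needs to be regenerated.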


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Lin Ding ◽  
Chenhui Jin ◽  
Jie Guan ◽  
Qiuyan Wang

Loiss is a novel byte-oriented stream cipher proposed in 2011. In this paper, based on solving systems of linear equations, we propose an improved guess-and-determine attack on Loiss with a time complexity of 2^231 and a data complexity of 2^68, which reduces the time complexity of the guess-and-determine attack proposed by the designers by a factor of 2^16. Furthermore, a related-key chosen-IV attack on a scaled-down version of Loiss is presented. The attack recovers the 128-bit secret key of the scaled-down Loiss with a time complexity of 2^80, requiring 2^64 chosen IVs. The related-key attack is minimal in the sense that it only requires one related key. The result shows that our key recovery attack on the scaled-down Loiss is much better than an exhaustive key search in the related-key setting.


2014 ◽  
Vol 2014 ◽  
pp. 1-19 ◽  
Author(s):  
Byoung-Il Kim ◽  
Jin Hong

Cryptanalytic time memory tradeoff algorithms are tools for inverting one-way functions, and they are used in practice to recover passwords that restrict access to digital documents. This work provides an accurate complexity analysis of the perfect table fuzzy rainbow tradeoff algorithm. Based on the analysis results, we show that the lesser known fuzzy rainbow tradeoff performs better than the original rainbow tradeoff, which is widely believed to be the best tradeoff algorithm. The fuzzy rainbow tradeoff can attain higher online efficiency than the rainbow tradeoff and do so at a lower precomputation cost.
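The precompute-then-lookup idea common to all such tradeoffs can be illustrated with a toy Hellman-style chain table; the fuzzy rainbow variant analyzed in the paper uses colored chain segments and differs in its reduction-function schedule, which is omitted here. The one-way function, the 10,000-element "password" space, and all parameters below are illustrative assumptions.

```python
import hashlib

SPACE = 10000  # toy password space {0, ..., 9999}

def h(x):
    """Toy one-way function: truncated SHA-256 over the small space."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:3], "big") % SPACE

def reduce_fn(y, i):
    """Column-dependent reduction mapping a hash value back into the space."""
    return (y + i) % SPACE

def build_table(starts, chain_len):
    """Offline phase: compute chains, storing only endpoint -> start point."""
    table = {}
    for s in starts:
        x = s
        for i in range(chain_len):
            x = reduce_fn(h(x), i)
        table[x] = s
    return table

def invert(target_hash, table, chain_len):
    """Online phase: try every chain position the target could occupy."""
    for pos in range(chain_len - 1, -1, -1):
        x = reduce_fn(target_hash, pos)
        for i in range(pos + 1, chain_len):   # walk forward to an endpoint
            x = reduce_fn(h(x), i)
        if x in table:                        # candidate chain found
            y = table[x]
            for i in range(chain_len):        # re-walk the chain, verifying
                if h(y) == target_hash:
                    return y
                y = reduce_fn(h(y), i)
    return None                               # not covered by this table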


2021 ◽  
pp. 146808742110397
Author(s):  
Haotian Chen ◽  
Kun Zhang ◽  
Kangyao Deng ◽  
Yi Cui

Real-time simulation models play an important role in the development of engine control systems. The mean value model (MVM) meets real-time requirements but has limited accuracy. By contrast, a crank-angle resolved model, such as the filling-and-empty model, can be used to simulate engine performance with high accuracy but cannot meet real-time requirements. Time complexity analysis is used to develop a real-time crank-angle resolved model with high accuracy in this study. A method used in computer science, static program analysis, is used to theoretically determine the computational time for a multicylinder engine filling-and-empty (crank-angle resolved) model. Then, a prediction formula for the engine cycle simulation time is obtained and verified by a program run test. The influence of the time step, program structure, algorithm and hardware on the cycle simulation time is analyzed systematically. The multicylinder phase shift method and a fast calculation method for the turbocharger characteristics are used to improve the crank-angle resolved filling-and-empty model to meet real-time requirements. The improved model meets the real-time requirement, and the real-time factor is improved by 3.04 times. A performance simulation for a high-power medium-speed diesel engine shows that the improved model has a maximum error of 5.76% and a real-time factor of 3.93, which meets the requirement for a hardware-in-the-loop (HIL) simulation during control system development.


Generally, classification accuracy is very important to gene processing and selection and cancer classification. It is needed to achieve better cancer treatments and improve medical drug assignments. However, a time complexity analysis will enhance the application's significance. To answer the research questions in Chapter 1, several case studies were implemented (see Chapters 4 and 5), each essential to sustain the methodologies discussed in Chapter 3. The study used a colon-cancer dataset comprising 2000 genes. The best search algorithm, GA, showed high performance with efficient time complexity, while both DTs and SVMs showed the best classification contribution with respect to performance accuracy and time efficiency. However, it is difficult to carry out a completely fair comparative study, because existing algorithms and methods were tested by different authors seeking to reflect the effectiveness and power of their own methods.


2012 ◽  
Vol 11 (04) ◽  
pp. 1250021 ◽  
Author(s):  
HE WEN ◽  
LASZLO B. KISH

Although noise-based logic shows potential advantages of reduced power dissipation and the ability to perform large parallel operations with low hardware and time complexity, the question still persists: is randomness really needed beyond orthogonality? In this Letter, after some general thermodynamical considerations, we show relevant examples in which we compare the computational complexity of logic systems based on orthogonal noise and on sinusoidal signals, respectively. The conclusion is that in certain special-purpose applications noise-based logic is exponentially better than its sinusoidal version: its computational complexity can be exponentially smaller for performing the same task.


Author(s):  
Johan Jansson ◽  
Imre Horváth ◽  
Joris S. M. Vergeest

Abstract Previously, we described the theory of a general mechanics model for non-rigid solids (Jansson, Vergeest, 2000). In this paper, we describe and analyze the implementation, i.e., the algorithms and an analysis of their time complexity. We argue that a good time complexity (better than O(n^2), where n is the number of elements in the system) is mandatory for a scalable real-time simulation system. We show that, in simplified form, all our algorithms are O(n lg n). We have not been able to formally analyze the algorithms in non-simplified form; however, we informally discuss their expected performance. The entire system is empirically shown to perform slightly worse than O(n lg n) for a specific range of typical input. We also present a working prototype implementation and show that it can be used for real-time evaluation of reasonably complex systems. Finally, we discuss how such a system can be used in the conceptual design community as a simulation of traditional design tools.


2011 ◽  
Vol 22 (05) ◽  
pp. 1161-1185
Author(s):  
ABUSAYEED SAIFULLAH ◽  
YUNG H. TSIN

A self-stabilizing algorithm is a distributed algorithm that can start from any initial (legitimate or illegitimate) state and eventually converge to a legitimate state in finite time without being assisted by any external agent. In this paper, we propose a self-stabilizing algorithm for finding the 3-edge-connected components of an asynchronous distributed computer network. The algorithm stabilizes in O(dnΔ) rounds and every processor requires O(n log Δ) bits, where Δ(≤ n) is an upper bound on the degree of a node, d(≤ n) is the diameter of the network, and n is the total number of nodes in the network. These time and space complexities are at least a factor of n better than those of the previously best-known self-stabilizing algorithm for 3-edge-connectivity. The result of the computation is kept in a distributed fashion by assigning, upon stabilization of the algorithm, a component identifier to each processor which uniquely identifies the 3-edge-connected component to which the processor belongs. Furthermore, the algorithm is designed in such a way that its time complexity is dominated by that of the self-stabilizing depth-first search spanning tree construction, in the sense that any improvement made in the latter automatically implies an improvement in the time complexity of the algorithm.


Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 97
Author(s):  
Antoine Genitrini ◽  
Martin Pépin

In the context of combinatorial sampling, the so-called "unranking method" can be seen as a link between a total order over the objects and an effective way to construct an object of given rank. The most classical order used in this context is the lexicographic order, which corresponds to the familiar word ordering in the dictionary. In this article, we propose a comparative study of four algorithms dedicated to the lexicographic unranking of combinations, including three algorithms that were introduced decades ago. We start the paper with the introduction of our new algorithm, which uses a new strategy of computation based on the classical factorial numeral system (or factoradics). Then we present the three other algorithms at a high level. For each case, we analyze its average time complexity within a uniform framework and describe its strengths and weaknesses. For about 20 years, such algorithms have been implemented using big-integer arithmetic rather than bounded integer arithmetic, which makes the cost of computing some coefficients higher than previously stated. We propose improvements for all implementations, which take this fact into account, and we give a detailed complexity analysis, which is validated by an experimental analysis. Finally, we show that, even if the algorithms are based on different strategies, all perform very similar computations. Lastly, we extend our approach to the unranking of other classical combinatorial objects, such as families counted by multinomial coefficients and k-permutations.

