Primality test and primes enumeration using odd numbers indexation

2020 ◽  
Vol 8 (2) ◽  
pp. 11-41
Author(s):  
Marc Wolf ◽  
WOLF François

Odd numbers can be indexed by the map k(n) = (n-3)/2, n ∈ 2ℕ+3. We first propose a basic primality test using this index function, which was first introduced in [8]. The input size of operations is reduced, which improves computational time by a constant factor. We then apply similar techniques to Atkin's prime-number sieve, which uses modulus operations, and finally to Pritchard's wheel sieve, in both cases yielding similar results.
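As a rough illustration of this indexation (a simplified sketch with names of my own choosing, not the paper's optimized routine), trial division can be phrased entirely over indices, since the odd number with index j is 2j + 3:

```python
def k(n):
    """Index of the odd number n >= 3 under k(n) = (n - 3) / 2."""
    return (n - 3) // 2

def is_prime_indexed(n):
    """Trial division phrased over indices: the candidate odd divisor
    with index j is 2*j + 3. A simplified sketch, not the paper's test."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    j = 0  # index of candidate divisor 2*j + 3 = 3, 5, 7, ...
    while (2 * j + 3) ** 2 <= n:
        if n % (2 * j + 3) == 0:
            return False
        j += 1
    return True
```

Working with k rather than n keeps every loop variable roughly half the size of the number it represents, which is where the constant-factor saving comes from.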

2018 ◽  
Vol 2 (1) ◽  
pp. 45-52
Author(s):  
Mohammad Andri Budiman ◽  
Dian Rachmawati

Abstract. The security of the RSA cryptosystem is directly proportional to the size of its modulus, n. The modulus n is a product of two very large prime numbers, denoted p and q. Since the modulus n is public, a cryptanalyst can use factorization algorithms such as Euler's and Pollard's algorithms to derive the private keys p and q. Brute force is an algorithm that searches for a solution to a problem by generating all possible candidate solutions and testing them one by one in order to get the most relevant solution. Random search is a numerical optimization algorithm that starts its search by generating one candidate solution randomly and iteratively compares it with other random candidate solutions in order to get the most suitable one. This work aims to compare the performance of the brute force and random search algorithms in factoring the RSA modulus into its two prime factors by experimental means in the Python programming language. The primality test is done by Fermat's algorithm and the sieve of Eratosthenes.
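A minimal sketch of the two strategies being compared (my own simplified versions in Python, the study's language, not the authors' implementation) could look like this:

```python
import random

def brute_force_factor(n):
    """Try divisors 2, 3, 5, 7, ... in order until one divides n."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1  # no divisor found: n is prime

def random_search_factor(n, seed=0):
    """Pick random odd candidates in [3, sqrt(n)] until one divides n.

    Assumes n is an odd composite with a factor <= sqrt(n), as an RSA
    modulus n = p * q always has.
    """
    rng = random.Random(seed)
    limit = int(n ** 0.5)
    while True:
        d = rng.randrange(3, limit + 1, 2)
        if n % d == 0:
            return d, n // d
```

For example, both calls recover (101, 103) from n = 10403; the experimental question is how their running times compare as n grows.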


2001 ◽  
Vol 8 (45) ◽  
Author(s):  
Ivan B. Damgård ◽  
Gudmund Skovbjerg Frandsen

We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT is well-suited for generating large, random prime numbers since on a random input number, it takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability. EQFT extends QFT by verifying additional algebraic properties related to the existence of elements of order 3 and 4. We obtain a simple closed expression that upper bounds the probability of acceptance for any input number. This in turn allows us to give strong bounds on the average-case behaviour of the test: consider the algorithm that repeatedly chooses random odd k-bit numbers, subjects them to t iterations of our test and outputs the first one found that passes all tests. We obtain numeric upper bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^{-143} for k=500, t=2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point. While EQFT is slower than the average case on a small set of inputs, we present a variant that is always fast, i.e. takes time about equivalent to 2 Miller-Rabin tests. The variant has slightly larger worst-case error probability than EQFT, but still improves on previously proposed tests.
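For context, the Miller-Rabin baseline that EQFT is measured against can be sketched as follows (a standard textbook version; the EQFT's quadratic Frobenius arithmetic is not reproduced here):

```python
import random

def miller_rabin(n, t=2):
    """Probabilistic Miller-Rabin test with t random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True  # probably prime
```

Each iteration errs on a composite with probability at most 1/4; EQFT's contribution is that two of its iterations bound the error far more tightly than two of these.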


2018 ◽  
Vol 6 (3) ◽  
pp. 316-326
Author(s):  
Neha Singh ◽  
Tathagata Ray ◽  
Chandu Parimi ◽  
Srivastava Kuchibhotla

Abstract This paper describes a framework to generate an unstructured Delaunay mesh of a two-dimensional domain whose boundary is specified by point cloud data (PCD). The assumption is that the PCD is sampled from a smooth 1-manifold without a boundary definition and is sufficiently dense (at least ∊-sampled with ∊ < 1). Presently, meshing such a domain requires two explicit steps, namely the extraction of the model definition from the PCD and the use of that definition to guide the unstructured mesh generation. For a densely sampled PCD, the curve reconstruction process depends on the size of the input PCD and can become a time-consuming overhead. We propose an optimized technique that bypasses the explicit step of curve reconstruction through implicit access to the model information from a well-sampled PCD. A mesh thus generated is optimal, as the fineness of the mesh is dictated not by the sampling of the PCD but only by the geometric complexity of the underlying curve. The implementation and experiments of the proposed framework show significant improvement in computational cost over the traditional methodology. The main contribution of this paper is the circumvention of the explicit, time-consuming step of boundary computation, which is a function of the PCD sampling size, and the direct generation of a mesh whose complexity is dictated by the geometry of the domain.
Highlights:
- The algorithm gives a size-optimal triangular mesh directly from point cloud data.
- The intermediate step of model definition can be skipped completely.
- The generated mesh is independent of the number of points in the data.
- Mesh size and computational time depend on the geometric complexity of the curve.
- For dense samples, this method is very efficient compared to traditional methods.


SinkrOn ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 293
Author(s):  
Nurul Khairina

Prime numbers are unique numbers: their only divisors are 1 and the number itself. The prime numbers from 1 to a relatively small n can be generated manually, but a prime number generator algorithm is needed to generate prime numbers on a large scale. This study compares three prime number generator algorithms, namely the Sieve of Eratosthenes, the Sieve of Atkin, and the Sieve of Sundaram. These three sieve algorithms differ in how they generate prime numbers. The Sieve of Eratosthenes uses a simpler method, crossing out multiples of prime numbers and marking them as non-prime. The Sieve of Atkin uses several conditions on quadratic equations and moduli to determine prime numbers. The Sieve of Sundaram is similar to the Sieve of Atkin, but uses conditions on linear equations to determine prime numbers. This study aims to compare these three algorithms in terms of accuracy and speed in generating prime numbers on a large scale. The results indicate that the Eratosthenes, Atkin, and Sundaram sieves can all generate large quantities of prime numbers with good accuracy, as verified by the Fermat primality test algorithm. The conclusion that can be drawn from this study is that the Sieve of Eratosthenes generates prime numbers on a large scale faster than the other two algorithms.
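Two of the three sieves are compact enough to sketch directly (straightforward textbook versions, not the study's code; the Atkin sieve's quadratic-form case analysis is longer and omitted here):

```python
def sieve_eratosthenes(n):
    """Cross out multiples of each prime; survivors are prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

def sieve_sundaram(n):
    """Remove integers of the form i + j + 2ij; each survivor k yields
    the odd prime 2k + 1."""
    limit = (n - 1) // 2
    marked = [False] * (limit + 1)
    for i in range(1, limit + 1):
        j = i
        while i + j + 2 * i * j <= limit:
            marked[i + j + 2 * i * j] = True
            j += 1
    return ([2] if n >= 2 else []) + \
        [2 * k + 1 for k in range(1, limit + 1) if not marked[k]]
```

Both produce the same output for the same bound, so agreement between sieves (or with a Fermat test, as in the study) is an easy accuracy check.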


2014 ◽  
Vol 12 (3) ◽  
pp. 3338-3346
Author(s):  
Abu Asaduzzaman ◽  
Anindya Maiti ◽  
Chok Meng Yip

There is great interest in understanding the manner in which prime numbers are distributed throughout the integers. Prime numbers have been used in secret codes for more than 60 years. Computer security authorities use extremely large prime numbers when they devise cryptosystems, such as the RSA (short for Rivest, Shamir, and Adleman) algorithm, for protecting vital information transmitted between computers. There are many primality testing algorithms, including mathematical models and computer programs; however, they are very time-consuming when the given number n is very large. In this paper, we propose a novel parallel computing model based on a deterministic algorithm using central processing unit (CPU) / general-purpose graphics processing unit (GPGPU) systems, which determines whether an input number is prime or composite much faster. We develop and implement the proposed algorithm on a system with an 8-core CPU and a 448-core GPGPU. Experimental results indicate that up to 94.35x speedup can be achieved for 21-digit decimal numbers.
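The core parallelization idea — splitting the range of trial divisors of n across workers — can be sketched with Python threads (a hypothetical CPU-only analogue; the paper's CPU/GPGPU kernel design is not reproduced here):

```python
from concurrent.futures import ThreadPoolExecutor

def has_factor_in_range(args):
    """Return True if n has an odd divisor d with lo <= d < hi."""
    n, lo, hi = args
    d = lo if lo % 2 else lo + 1
    while d < hi:
        if n % d == 0:
            return True
        d += 2
    return False

def is_prime_parallel(n, workers=4):
    """Deterministic trial division with the divisor range [3, sqrt(n)]
    partitioned into chunks processed by parallel workers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    limit = int(n ** 0.5) + 1
    step = max(1, (limit - 3) // workers + 1)
    chunks = [(n, lo, min(lo + step, limit)) for lo in range(3, limit, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return not any(pool.map(has_factor_in_range, chunks))
```

On a GPGPU the same partitioning assigns one divisor sub-range per thread, which is where the reported speedup comes from.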


2021 ◽  
Author(s):  
Christoph Wolmersdorfer

Abstract The fundamental importance of prime numbers for mathematics is that all other natural numbers can be represented as a unique product of prime numbers[1]. For centuries, mathematicians have been trying to find an order in the occurrence of prime numbers. Since Riemann's paper on the number of primes less than a given magnitude, the distribution of the primes has been assumed to be random[2]. Here, I show that prime numbers are not randomly distributed. Their positions are determined by the order of occurrence of the nonprimes of types 6n+5 and 6n+1. Using parametric sine functions, I show that all assumptions known today regarding the distribution of prime numbers larger than 3 are wrong. In particular, this refutes Riemann's hypothesis of random distribution of prime numbers. Furthermore, I show an exact primality test based on these parametric sine functions, which uses only one parameter.
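The 6n+1 / 6n+5 classification the abstract relies on is elementary and easy to verify in code (this sketch checks only the residue-class fact; it is not the author's sine-function construction):

```python
def primes_via_wheel(limit):
    """Enumerate primes by testing only candidates of the form 6k +/- 1,
    since every prime p > 3 satisfies p % 6 == 1 or p % 6 == 5."""
    def is_prime(n):
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0 or n % 3 == 0:
            return False
        d = 5  # candidate divisors also follow the 6k +/- 1 pattern
        while d * d <= n:
            if n % d == 0 or n % (d + 2) == 0:
                return False
            d += 6
        return True
    return [n for n in range(2, limit + 1) if is_prime(n)]
```

Every number outside those two residue classes is divisible by 2 or 3, so primes greater than 3 can only appear at positions 6n+1 and 6n+5; which of those positions are occupied by composites is the question the abstract addresses.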


2014 ◽  
Vol 100 (3) ◽  
pp. 14-16
Author(s):  
Abdelilah Aouessare ◽  
Abdeslam El haddouchi ◽  
Mohamed Essaaidi

2021 ◽  
Author(s):  
Christoph Wolmersdorfer

Abstract The fundamental importance of prime numbers for mathematics is that all other natural numbers can be represented as a unique product of prime numbers[1]. For centuries, mathematicians have been trying to find an order in the occurrence of prime numbers. Since Riemann's paper on the number of primes less than a given magnitude, the distribution of the primes has been assumed to be random[2]. Here, I show that prime numbers are not randomly distributed. Their positions are determined by the order of occurrence of the nonprimes of types 6n+5 and 6n+1. Using parametric sine functions, I show that all assumptions known today regarding the distribution of prime numbers larger than 3 are wrong. In particular, this refutes Riemann's hypothesis of random distribution of prime numbers. Furthermore, I show an exact primality test based on these parametric sine functions, which needs only the operations +, −, · and ÷, and uses only one parameter. No such deterministic primality test has existed until today[3][4].


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Fabio L. Traversa ◽  
Pietro Cicotti ◽  
Forrest Sheldon ◽  
Massimiliano Di Ventra

Optimization problems pervade essentially every scientific discipline and industry. A common form requires identifying a solution satisfying the maximum number among a set of many conflicting constraints. Often, these problems are particularly difficult to solve, requiring resources that grow exponentially with the size of the problem. Over the past decades, research has focused on developing heuristic approaches that attempt to find an approximation to the solution. However, despite numerous research efforts, in many cases even approximations to the optimal solution are hard to find, as the computational time for further refining a candidate solution also grows exponentially with input size. In this paper, we show a noncombinatorial approach to hard optimization problems that achieves an exponential speed-up and finds better approximations than the current state of the art. First, we map the optimization problem into a Boolean circuit made of specially designed, self-organizing logic gates, which can be built with (nonquantum) electronic elements with memory. The equilibrium points of the circuit represent the approximation to the problem at hand. Then, we solve its associated nonlinear ordinary differential equations numerically, towards the equilibrium points. We demonstrate this exponential gain by comparing a sequential MATLAB implementation of our solver with the winners of the 2016 Max-SAT competition on a variety of hard optimization instances. We show empirical evidence that our solver scales linearly with the size of the problem, both in time and memory, and argue that this property derives from the collective behavior of the simulated physical circuit. Our approach can be applied to other types of optimization problems, and the results presented here have far-reaching consequences in many fields.
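To make the objective concrete, here is a brute-force Max-SAT evaluator in Python (exponential in the number of variables; it only defines what the memcomputing solver approximates and is in no way the authors' method):

```python
from itertools import product

def max_sat_brute_force(clauses, n_vars):
    """Return the maximum number of simultaneously satisfiable clauses.

    Clauses are lists of nonzero ints in DIMACS style: +i means variable
    i, -i its negation. Exhaustive over all 2^n_vars assignments.
    """
    best = 0
    for bits in product((False, True), repeat=n_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best
```

For example, the clauses {x1}, {not x1}, {x1 or x2}, {not x2} can never all hold, but three of the four can; finding such maxima without exhaustive search is what the memcomputing circuit is built for.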


2021 ◽  
Author(s):  
Christoph Wolmersdorfer

Abstract The fundamental importance of prime numbers for mathematics is that all other natural numbers can be represented as a unique product of prime numbers[1]. For centuries, mathematicians have been trying to find an order in the occurrence of prime numbers. Since Riemann's paper on the number of primes less than a given magnitude, the distribution of the primes has been assumed to be random[2]. Here, I show that prime numbers are not randomly distributed, using three parametric sine functions. Their positions are determined by the order of occurrence of the nonprimes of types 6n+5 and 6n+1. Furthermore, I show an exact primality test using these three parametric sine functions.

