Quantum period finding based on the Bernstein-Vazirani algorithm

2020 ◽  
Vol 20 (1&2) ◽  
pp. 65-84
Author(s):  
Xuexuan Hao ◽  
Fengrong Zhang ◽  
Yongzhuang Wei ◽  
Yong Zhou

Quantum period finding algorithms have been used to analyze symmetric cryptography. For instance, the 3-round Feistel construction and the Even-Mansour construction can be broken in polynomial time using quantum period finding algorithms. In this paper, we first provide a new algorithm for finding the nonzero period of a vectorial function with O(n) quantum queries, using the Bernstein-Vazirani algorithm as one step of the subroutine. Afterwards, we compare our algorithm with Simon's algorithm. In some scenarios, such as the Even-Mansour construction and functions satisfying Simon's promise, our algorithm is more efficient than Simon's algorithm with respect to the tradeoff between quantum memory and time. On the other hand, we combine our algorithm with Grover's algorithm for a key-recovery attack on the FX construction. Compared with the Grover-meets-Simon algorithm proposed by Leander and May at Asiacrypt 2017, the new algorithm saves quantum memory.
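The Bernstein-Vazirani subroutine the abstract refers to recovers a hidden string s from the inner-product oracle f(x) = &lt;s, x&gt; mod 2 in a single quantum query. As a purely classical illustration of that problem setting (not of the paper's quantum algorithm), the sketch below recovers s with n classical basis-vector queries:

```python
# Classical illustration of the oracle the Bernstein-Vazirani algorithm inverts:
# f(x) = <s, x> mod 2 for a hidden bit-string s. A quantum computer recovers s
# with one query; classically, n queries of the standard basis vectors suffice.

def make_oracle(s):
    """Return f(x) = <s, x> mod 2 for a hidden bit-tuple s."""
    def f(x):
        return sum(si & xi for si, xi in zip(s, x)) % 2
    return f

def recover_secret(f, n):
    """Query f on each standard basis vector e_i to read off bit s_i."""
    return tuple(f(tuple(1 if j == i else 0 for j in range(n))) for i in range(n))

s = (1, 0, 1, 1)
print(recover_secret(make_oracle(s), len(s)))  # -> (1, 0, 1, 1)
```

The quantum advantage lies precisely in the gap between the single query used by Bernstein-Vazirani and the n classical queries needed here.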

1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in the paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.
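The "small number of resource units" regimes in the abstract suggest a knapsack-like structure. As a hypothetical sketch (the paper's exact recovery model may differ), suppose aborting deadlocked job i frees units[i] resource units at cost cost[i], and breaking the deadlock requires at least need freed units; a dynamic program over unit counts then runs in time polynomial in the number of jobs and the total number of units:

```python
# Hedged sketch of why few resource units make one-resource-type recovery
# tractable: a knapsack-style DP in O(jobs * units). The recovery model here
# (units freed, abort costs, a unit threshold) is illustrative, not the paper's.

def min_cost_recovery(units, cost, need):
    """Minimum total cost of aborted jobs that free at least `need` units."""
    INF = float("inf")
    if need > sum(units):
        return INF
    # best[u] = cheapest cost to free at least u units (u capped at `need`)
    best = [INF] * (need + 1)
    best[0] = 0
    for u_i, c_i in zip(units, cost):
        for u in range(need, -1, -1):      # descending: each job aborted once
            if best[u] < INF:
                v = min(need, u + u_i)
                best[v] = min(best[v], best[u] + c_i)
    return best[need]

# Three deadlocked jobs; freeing 3 units is cheapest by aborting jobs 0 and 1.
print(min_cost_recovery(units=[2, 1, 3], cost=[4, 1, 9], need=3))  # -> 5
```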


Author(s):  
Petr Savický ◽  
Petr Kučera

A matched formula is a CNF formula whose incidence graph admits a matching which matches a distinct variable to every clause. Such a formula is always satisfiable. Matched formulas are used, for example, in the area of parameterized complexity. We prove that the problem of counting the number of models (satisfying assignments) of a matched formula is #P-complete. On the other hand, we define a class of formulas generalizing the matched formulas and prove that for a formula in this class one can choose, in polynomial time, a variable suitable for splitting the search tree for the models of the formula. As a consequence, the models of a formula from this class, in particular of any matched formula, can be generated sequentially with a delay polynomial in the size of the input. On the other hand, we prove that this task cannot be performed efficiently for linearly satisfiable formulas, a generalization of matched formulas that contains the class considered above.
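The matching in the definition can be checked directly: build the clause-variable incidence graph and search for a matching that pairs every clause with a distinct variable. A minimal sketch using Kuhn's augmenting-path algorithm (Hopcroft-Karp would be asymptotically faster but longer to write):

```python
# Check whether a CNF formula is "matched": every clause must be matched to a
# distinct variable occurring in it. Clauses are given as sets of variable
# names (signs of literals are irrelevant to the incidence graph).

def is_matched(clauses):
    match_of_var = {}  # variable -> index of the clause it is matched to

    def try_assign(c, seen):
        """Try to match clause c, re-routing earlier matches if needed."""
        for v in clauses[c]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_of_var or try_assign(match_of_var[v], seen):
                match_of_var[v] = c
                return True
        return False

    return all(try_assign(c, set()) for c in range(len(clauses)))

# (x|y), (x|z), (y|z) is matched (e.g. x->c0, z->c1, y->c2), hence satisfiable.
print(is_matched([{"x", "y"}, {"x", "z"}, {"y", "z"}]))   # -> True
# Three clauses over only two variables cannot be matched (Hall's condition).
print(is_matched([{"x", "y"}, {"x"}, {"y"}]))             # -> False
```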


2021 ◽  
Vol 12 ◽  
Author(s):  
Rafael Bayarri-Olmos ◽  
Ida Jarlhelt ◽  
Laust Bruun Johnsen ◽  
Cecilie Bo Hansen ◽  
Charlotte Helgstrand ◽  
...  

The recent identification and rise to dominance of the P.1 and B.1.351 SARS-CoV-2 variants have raised international concern because they may confer fitness advantages. The same three positions in the receptor-binding domain (RBD) are affected in both variants; the substitutions at position 417 differ, while E484K and N501Y have arisen in both lineages by convergent evolution. Here we characterize the functional and immune-evasive consequences of the P.1 and B.1.351 RBD mutations. E484K and N501Y result in gain-of-function with two different outcomes: N501Y confers a ten-fold affinity increase towards ACE-2 but only a modest potential for evading antibodies in plasma from convalescent or vaccinated individuals, whereas E484K displays a significant antibody evasion capacity without a major impact on affinity. On the other hand, the two different 417 substitutions severely impair RBD/ACE-2 affinity, but in the combined P.1 and B.1.351 RBD variants this effect is partly counterbalanced by the effect of E484K and N501Y. Our results suggest that the combination of these three mutations is two steps forward and one step back in terms of viral fitness.


Author(s):  
Naser T Sardari

Abstract By assuming some widely believed arithmetic conjectures, we show that the task of accepting a number that is representable as a sum of $d\geq 2$ squares subject to given congruence conditions is NP-complete. On the other hand, we develop and implement a deterministic polynomial-time algorithm that represents a number as a sum of four squares with some restricted congruence conditions, assuming a polynomial-time algorithm for factoring integers and Conjecture 1.1. As an application, we develop and implement a deterministic polynomial-time algorithm for navigating Lubotzky, Phillips, Sarnak (LPS) Ramanujan graphs, under the same assumptions.
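For context, a representation as a sum of four squares always exists by Lagrange's four-square theorem. The brute-force sketch below merely illustrates what is being computed; it is exponentially slower than the paper's conjectural polynomial-time algorithm and ignores its congruence restrictions:

```python
# Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n by exhaustive search over
# sorted tuples a <= b <= c <= d. Illustrative only; not the paper's algorithm.
from math import isqrt

def four_squares(n):
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d * d == d2 and d >= c:
                    return (a, b, c, d)
    return None  # unreachable for n >= 0, by Lagrange's theorem

print(four_squares(2021))  # e.g. 2021 = 0^2 + 1^2 + 16^2 + 42^2
```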


Author(s):  
Shun-Feng Su ◽  
Sou-Horng Li

Forecasting from a time series means making predictions for the future from available data. Such a problem can be viewed as a traditional data mining problem, because it extracts rules for prediction from available data. There are two kinds of forecasting approaches. Most traditional approaches are based on all available data, both the most recent observations and those far away in time; these are referred to as global prediction schemes in our study. On the other hand, some prediction approaches construct their model from only the most recent data; these are referred to as local prediction schemes. Local prediction approaches seem to predict well in some cases, but owing to their local character they usually fail for long-term prediction. In this chapter, the authors detail these ideas and use several commonly used models, especially model-free estimators such as neural networks, fuzzy systems, and grey systems, to explain their effects. Another issue discussed in the chapter is multi-step prediction. From the authors' study, the often-used global prediction schemes achieve fair performance in both one-step-ahead and multi-step predictions. On the other hand, good local prediction schemes can outperform global schemes in one-step-ahead prediction, but usually perform poorly on multi-step predictions. In this chapter, the authors introduce several approaches that combine local and global prediction results to improve prediction performance.
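The global/local distinction can be made concrete with a toy sketch, using a deliberately simple straight-line fit in place of the neural, fuzzy, or grey models the chapter discusses (the data and window size below are illustrative, not from the chapter):

```python
# Global vs. local prediction on a toy series: the global scheme fits all past
# data, the local scheme only the most recent window.

def fit_line(ts, ys):
    """Least-squares fit of y = a + b*t; returns (a, b)."""
    n = len(ys)
    mt, my = sum(ts) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) \
        / sum((t - mt) ** 2 for t in ts)
    return my - b * mt, b

def predict(ys, steps, window=None):
    """Forecast `steps` ahead; window=None -> global fit, else local fit."""
    hist = ys if window is None else ys[-window:]
    t0 = len(ys) - len(hist)
    a, b = fit_line(range(t0, len(ys)), hist)
    return [a + b * (len(ys) + k) for k in range(steps)]

# A series whose slope recently changed: the local fit tracks the new slope,
# while the global fit averages the old and new regimes.
ys = [0, 1, 2, 3, 4, 6, 8, 10]      # slope 1, then slope 2
print(predict(ys, 3))               # global: underestimates the new trend
print(predict(ys, 3, window=3))     # local: follows the recent slope of 2
```

Iterating such one-step forecasts (feeding predictions back in as data) is exactly the multi-step setting in which the chapter observes local schemes degrading.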


2004 ◽  
Vol 13 (03) ◽  
pp. 469-485 ◽  
Author(s):  
RAJDEEP NIYOGI

Planning with temporally extended goals has recently received much attention from researchers in the planning community. We study a class of planning goals where, in addition to a main goal, there exist other goals, which we call auxiliary goals, that act as constraints on the main goal. Both types of goals can, in general, be temporally extended. Linear temporal logic (LTL) is inadequate for specifying the overall goals of this type, although in some situations it can express them separately. A branching-time temporal logic such as CTL, on the other hand, can be used to specify these goals. However, we are interested in situations where an auxiliary goal has to be satisfied within a fixed bound, and we show that CTL is inadequate for capturing them. We bring in an existing logic, called min-max CTL, and show how it can be used effectively for planning. We give a logical framework for expressing the overall planning goals. We propose a sound and complete planning procedure that incorporates model checking technology. In doing so, we can answer planning queries such as plan existence at the outset, besides producing an optimal plan (if any) in polynomial time.


2021 ◽  
Vol 13 (1) ◽  
pp. 108-122
Author(s):  
Nobertus Ribut Santoso

Public relations has been dominated by females, since they have good communication skills and abilities in persuading, engaging in conversation, and listening to stakeholders to build and harmonize relationships with them. However, male public relations practitioners dominate the top positions, since they have participated in managerial roles while females remain in technical roles. In organizations, female public relations practitioners face social, professional, and economic inequalities, and they also find it difficult to reach higher positions because traditional patriarchy is still strongly practiced, making this barrier harder for them to break. Family and children, on the other hand, become big considerations for females in climbing to a higher position, since it brings bigger responsibilities. Moreover, the massive development of digital technologies provides more opportunities for female public relations professionals to engage intensively with stakeholders. On the other hand, these technologies privilege males, since they have more digital technical skills. To compete with males in digital public relations, females should enhance their digital skills, manage their time wisely, learn to take on new challenges that keep them one step ahead, and actively participate in every organizational activity to voice their ideas and straighten out false assumptions and misconceptions about females. Meanwhile, males should become versatile public relations professionals in the digital era by combining masculine and feminine values to find the best public relations practices.


2016 ◽  
Vol 56 ◽  
pp. 269-327 ◽  
Author(s):  
Maximilian Fickert ◽  
Joerg Hoffmann ◽  
Marcel Steinmetz

Recent work has shown how to improve delete relaxation heuristics by computing relaxed plans, i.e., the hFF heuristic, in a compiled planning task PiC which represents a given set C of fact conjunctions explicitly. While this compilation view of such partial delete relaxation is simple and elegant, its meaning with respect to the original planning task is opaque, and the size of PiC grows exponentially in |C|. We herein provide a direct characterization, without compilation, making explicit how the approach arises from a combination of the delete-relaxation with critical-path heuristics. Designing equations characterizing a novel view on h+ on the one hand, and a generalized version hC of hm on the other hand, we show that h+(PiC) can be characterized in terms of a combined hC+ equation. This naturally generalizes the standard delete-relaxation framework: understanding that framework as a relaxation over singleton facts as atomic subgoals, one can refine the relaxation by using the conjunctions C as atomic subgoals instead. Thanks to this explicit view, we identify the precise source of complexity in hFF(PiC), namely maximization of sets of supported atomic subgoals during relaxed plan extraction, which is easy for singleton-fact subgoals but is NP-complete in the general case. Approximating that problem greedily, we obtain a polynomial-time hCFF version of hFF(PiC), superseding the PiC compilation, and superseding the modified PiCce compilation which achieves the same complexity reduction but at an information loss. Experiments on IPC benchmarks show that these theoretical advantages can translate into empirical ones.
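For context, the m=1 critical-path heuristic hmax that hC generalizes can be computed as a cost fixed point over single facts. The toy delete-relaxed task below is illustrative only and does not implement the paper's hC or hC+ machinery:

```python
# hmax for a delete-relaxed STRIPS task with unit action costs: the cost of a
# fact is a fixed point of cost(f) = min over achievers of (max precondition
# cost + 1); the heuristic is the max cost over goal facts.

def hmax(facts, actions, init, goal):
    """actions: list of (preconditions, add_effects) as sets of fact names."""
    cost = {f: (0 if f in init else float("inf")) for f in facts}
    changed = True
    while changed:  # Bellman-Ford-style relaxation until the fixed point
        changed = False
        for pre, add in actions:
            c = max((cost[p] for p in pre), default=0) + 1
            for f in add:
                if c < cost[f]:
                    cost[f] = c
                    changed = True
    return max((cost[g] for g in goal), default=0)

facts = {"a", "b", "c", "g"}
actions = [({"a"}, {"b"}), ({"b"}, {"c"}), ({"b", "c"}, {"g"})]
print(hmax(facts, actions, init={"a"}, goal={"g"}))  # -> 3
```

Replacing single facts with the conjunctions in C as the atomic subgoals of such equations is, in spirit, the refinement step the abstract describes.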

