Generating Models of a Matched Formula With a Polynomial Delay

2016 ◽  
Vol 56 ◽  
pp. 379-402
Author(s):  
Petr Savický ◽  
Petr Kučera

A matched formula is a CNF formula whose incidence graph admits a matching that matches a distinct variable to every clause. Such a formula is always satisfiable. Matched formulas are used, for example, in the area of parameterized complexity. We prove that counting the number of models (satisfying assignments) of a matched formula is #P-complete. On the other hand, we define a class of formulas generalizing matched formulas and prove that for a formula in this class one can choose, in polynomial time, a variable suitable for splitting the search tree for the models of the formula. As a consequence, the models of a formula from this class, and in particular of any matched formula, can be generated sequentially with a delay polynomial in the size of the input. In contrast, we prove that this task cannot be performed efficiently for linearly satisfiable formulas, a generalization of matched formulas that contains the class considered above.
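As a concrete illustration (a sketch, not the authors' generation algorithm), whether a CNF formula is matched can be tested in polynomial time with a standard augmenting-path search for a maximum bipartite matching between clauses and the variables occurring in them; the DIMACS-style clause encoding below is an assumption for the example:

```python
# Sketch: test whether a CNF formula is "matched", i.e. whether each clause
# can be matched to a distinct variable occurring in it, via augmenting paths.

def is_matched(clauses):
    """clauses: list of clauses, each a list of nonzero ints (DIMACS-style
    literals). Returns True iff a matching covers every clause."""
    match_of_var = {}  # variable -> index of the clause it is matched to

    def try_assign(ci, seen):
        # Try to match clause ci to one of its variables, re-matching
        # previously matched variables along an augmenting path if needed.
        for lit in clauses[ci]:
            v = abs(lit)
            if v in seen:
                continue
            seen.add(v)
            if v not in match_of_var or try_assign(match_of_var[v], seen):
                match_of_var[v] = ci
                return True
        return False

    return all(try_assign(ci, set()) for ci in range(len(clauses)))
```

If `is_matched` returns True, satisfiability follows directly: set each matched variable so that it satisfies its own clause.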



1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in the paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.


Author(s):  
Naser T Sardari

Abstract By assuming some widely believed arithmetic conjectures, we show that the task of accepting a number that is representable as a sum of $d\geq 2$ squares subject to given congruence conditions is NP-complete. On the other hand, we develop and implement a deterministic polynomial-time algorithm that represents a number as a sum of four squares with some restricted congruence conditions, by assuming a polynomial-time algorithm for factoring integers and Conjecture 1.1. As an application, we develop and implement a deterministic polynomial-time algorithm for navigating Lubotzky, Phillips, Sarnak (LPS) Ramanujan graphs, under the same assumptions.
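For contrast, the mere existence of a four-square representation (Lagrange's theorem) can be witnessed by brute force; the toy sketch below bears no relation to the paper's deterministic polynomial-time algorithm or its congruence restrictions, and serves only to fix the statement being computed:

```python
# Brute-force witness for Lagrange's four-square theorem: every n >= 0
# equals a^2 + b^2 + c^2 + d^2. Searches sorted tuples a <= b <= c <= d.
import math

def four_squares(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n."""
    for a in range(math.isqrt(n) + 1):
        for b in range(a, math.isqrt(n - a * a) + 1):
            rem = n - a * a - b * b
            for c in range(b, math.isqrt(rem) + 1):
                d2 = rem - c * c
                d = math.isqrt(d2)
                if d * d == d2 and d >= c:
                    return (a, b, c, d)
    return None  # unreachable for n >= 0 by Lagrange's theorem
```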


2020 ◽  
Vol 20 (1&2) ◽  
pp. 65-84
Author(s):  
Xuexuan Hao ◽  
Fengrong Zhang ◽  
Yongzhuang Wei ◽  
Yong Zhou

Quantum period finding algorithms have been used to analyze symmetric cryptography. For instance, the 3-round Feistel construction and the Even-Mansour construction can be broken in polynomial time by using quantum period finding algorithms. In this paper, we first provide a new algorithm for finding the nonzero period of a vectorial function with O(n) quantum queries, which uses the Bernstein-Vazirani algorithm as one step of the subroutine. Afterwards, we compare our algorithm with Simon's algorithm. In some scenarios, such as the Even-Mansour construction and functions satisfying Simon's promise, our algorithm is more efficient than Simon's algorithm with respect to the tradeoff between quantum memory and time. On the other hand, we combine our algorithm with Grover's algorithm for a key-recovery attack on the FX construction. Compared with the Grover-Meets-Simon algorithm proposed by Leander and May at Asiacrypt 2017, the new algorithm can save quantum memory.
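For intuition about what "period finding" computes: classically, the hidden period s of a function satisfying f(x) = f(x ⊕ s) can only be found by collision search, which needs exponentially many queries in the worst case, against the O(n) quantum queries above. The construction of f below is an assumed toy example, and the first collision reveals s only when f is exactly 2-to-1 (Simon's promise):

```python
# Classical collision search for the hidden period s with f(x) == f(x ^ s).
# Exponential in n in the worst case, unlike the quantum algorithms.

def find_period(f, n):
    """Find nonzero s with f(x) == f(x ^ s) for all x in [0, 2^n)."""
    seen = {}
    for x in range(1 << n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x  # under Simon's promise, this is s
        seen[y] = x
    return 0  # f is injective: no nonzero period

# Assumed toy 2-to-1 function hiding s = 0b101.
s = 0b101
table = {}
def f(x):
    rep = min(x, x ^ s)               # canonical representative of {x, x^s}
    return table.setdefault(rep, len(table))
```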


2004 ◽  
Vol 13 (03) ◽  
pp. 469-485 ◽  
Author(s):  
RAJDEEP NIYOGI

Planning with temporally extended goals has recently been the focus of much attention to researchers in the planning community. We study a class of planning goals where in addition to a main goal there exist other goals, which we call auxiliary goals, that act as constraints to the main goal. Both these type of goals can, in general, be a temporally extended goal. Linear temporal logic (LTL) is inadequate for specification of the overall goals of this type, although, for some situations, it is capable of expressing them separately. A branching-time temporal logic, like CTL, on the other hand, can be used for specifying these goals. However, we are interested in situations where an auxiliary goal has to be satisfiable within a fixed bound. We show that CTL becomes inadequate for capturing these situations. We bring out an existing logic, called min-max CTL, and show how it can effectively be used for the planning purpose. We give a logical framework for expressing the overall planning goals. We propose a sound and complete planning procedure that incorporates a model checking technology. Doing so, we can answer such planning queries as plan existence at the onset besides producing an optimal plan (if any) in polynomial time.


2016 ◽  
Vol 56 ◽  
pp. 269-327 ◽  
Author(s):  
Maximilian Fickert ◽  
Joerg Hoffmann ◽  
Marcel Steinmetz

Recent work has shown how to improve delete relaxation heuristics by computing relaxed plans, i.e., the hFF heuristic, in a compiled planning task PiC which represents a given set C of fact conjunctions explicitly. While this compilation view of such partial delete relaxation is simple and elegant, its meaning with respect to the original planning task is opaque, and the size of PiC grows exponentially in |C|. We herein provide a direct characterization, without compilation, making explicit how the approach arises from a combination of the delete relaxation with critical-path heuristics. Designing equations characterizing a novel view on h+ on the one hand, and a generalized version hC of hm on the other hand, we show that h+(PiC) can be characterized in terms of a combined hC+ equation. This naturally generalizes the standard delete-relaxation framework: understanding that framework as a relaxation over singleton facts as atomic subgoals, one can refine the relaxation by using the conjunctions C as atomic subgoals instead. Thanks to this explicit view, we identify the precise source of complexity in hFF(PiC), namely maximization of sets of supported atomic subgoals during relaxed plan extraction, which is easy for singleton-fact subgoals but is NP-complete in the general case. Approximating that problem greedily, we obtain a polynomial-time hCFF version of hFF(PiC), superseding the PiC compilation, and superseding the modified PiCce compilation which achieves the same complexity reduction but at an information loss. Experiments on IPC benchmarks show that these theoretical advantages can translate into empirical ones.
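For reference, the m = 1 member of the critical-path family, hmax, can be sketched as a simple fixpoint computation over a toy STRIPS encoding (the tuple representation here is an assumption, not the paper's formalism); hC generalizes this kind of fixpoint from single facts to the conjunctions in C:

```python
# Sketch of hmax (h^1): cheapest-critical-path cost estimate over facts.
# An action is a (preconditions, add effects, cost) tuple of sets and an int.

def hmax(facts, actions, state, goal):
    cost = {f: (0 if f in state else float("inf")) for f in facts}
    changed = True
    while changed:  # Bellman-Ford-style fixpoint over fact costs
        changed = False
        for pre, add, c in actions:
            pc = max((cost[p] for p in pre), default=0)
            for f in add:
                if pc + c < cost[f]:
                    cost[f] = pc + c
                    changed = True
    return max((cost[g] for g in goal), default=0)
```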


2021 ◽  
Vol 28 (4) ◽  
Author(s):  
Victor Campos ◽  
Raul Lopes ◽  
Andrea Marino ◽  
Ana Silva

A temporal digraph ${\cal G}$ is a triple $(G, \gamma, \lambda)$ where $G$ is a digraph, $\gamma$ is a function on $V(G)$ that tells us the time stamps when a vertex is active, and $\lambda$ is a function on $E(G)$ that tells for each $uv\in E(G)$ when $u$ and $v$ are linked. Given a static digraph $G$, and a subset $R\subseteq V(G)$, a spanning branching with root $R$ is a subdigraph of $G$ that has exactly one path from $R$ to each $v\in V(G)$. In this paper, we consider the temporal version of Edmonds' classical result about the problem of finding $k$ edge-disjoint spanning branchings respectively rooted in given $R_1,\cdots,R_k$. We introduce and investigate different definitions of spanning branchings, and of edge-disjointness, in the context of temporal digraphs. A branching ${\cal B}$ is vertex-spanning if the root is able to reach each vertex $v$ of $G$ at some time where $v$ is active, while it is temporal-spanning if each $v$ can be reached from the root at every time where $v$ is active. On the other hand, two branchings ${\cal B}_1$ and ${\cal B}_2$ are edge-disjoint if they do not use the same edge of $G$, and are temporal-edge-disjoint if they can use the same edge of $G$ but at different times. This leads us to four definitions of disjoint spanning branchings, and we prove that, unlike the static case, only one of these can be computed in polynomial time, namely the temporal-edge-disjoint temporal-spanning branchings problem, while the other versions are $\mathsf{NP}$-complete, even under very strict assumptions.
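The basic subroutine behind any spanning notion is temporal reachability from the root set; a minimal sketch, under an assumed edge-list encoding of $\lambda$ (this is not the paper's algorithm, only the reachability primitive):

```python
# Earliest-arrival temporal reachability: edge (u, v, t) can be traversed
# at time t provided u was reached at some time <= t (non-strict paths).

def earliest_arrival(edges, roots):
    """edges: list of (u, v, t). Returns {vertex: earliest arrival time}."""
    arrival = {r: 0 for r in roots}
    changed = True
    while changed:  # iterate to a fixpoint to handle equal-time chains
        changed = False
        for u, v, t in sorted(edges, key=lambda e: e[2]):
            if u in arrival and arrival[u] <= t and t < arrival.get(v, float("inf")):
                arrival[v] = t
                changed = True
    return arrival
```

A vertex-spanning branching rooted in $R$ can exist only if every vertex appears in this arrival map at a time when it is active.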


2021 ◽  
Vol 180 (1-2) ◽  
pp. 53-76
Author(s):  
Andreas Malcher

Insertion systems or insertion grammars are a generative formalism in which words can only be generated by starting with some axioms and by iteratively inserting strings subject to certain contexts of a fixed maximal length. It is known that languages generated by such systems are always context-sensitive and that the corresponding language classes are incomparable with the regular languages. On the other hand, it is possible to generate non-semilinear languages with systems having contexts of length two. Here, we study decidability questions for insertion systems. On the one hand, it can be seen that emptiness and universality are decidable. Moreover, the fixed membership problem is solvable in deterministic polynomial time. On the other hand, the usually studied decidability questions such as, for example, finiteness, inclusion, equivalence, regularity, inclusion in a regular language, and inclusion of a regular language turn out to be undecidable. Interestingly, the latter undecidability results can be carried over to other models which are basically able to handle the mechanism of inserting strings depending on contexts. In particular, new undecidability results are obtained for pure grammars, restarting automata, clearing restarting automata, and forgetting automata.
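A single derivation step of such a system can be sketched as follows (the rule encoding `(left context, inserted string, right context)` is an assumption for illustration):

```python
# One insertion step: rule (l, w, r) inserts w at any position whose
# immediate left context is l and immediate right context is r.

def insert_step(word, rules):
    out = set()
    for l, w, r in rules:
        for i in range(len(word) + 1):
            if word[:i].endswith(l) and word[i:].startswith(r):
                out.add(word[:i] + w + word[i:])
    return out
```

For example, iterating the single rule `("a", "ab", "b")` from the axiom `"ab"` generates the non-regular language a^n b^n, illustrating why these classes are incomparable with the regular languages.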


Algorithmica ◽  
2021 ◽  
Author(s):  
Eleni C. Akrida ◽  
Argyrios Deligkas ◽  
Themistoklis Melissourgos ◽  
Paul G. Spirakis

We study a security game over a network played between a defender and k attackers. Every attacker chooses, probabilistically, a node of the network to damage. The defender chooses, probabilistically as well, a connected induced subgraph of the network of λ nodes to scan and clean. Each attacker wishes to maximize the probability of escaping the cleaning performed by the defender. On the other hand, the goal of the defender is to maximize the expected number of attackers that she catches. This game is a generalization of the model from the seminal paper of Mavronicolas et al. (in: International symposium on mathematical foundations of computer science, MFCS, pp 717–728, 2006). We are interested in Nash equilibria of this game, as well as in characterizing defense-optimal networks, which allow for the best equilibrium defense ratio; this is the ratio of k over the expected number of attackers that the defender catches in equilibrium. We provide a characterization of the Nash equilibria of this game and of defense-optimal networks. The equilibrium characterizations allow us to show that the equilibria of the game remain the same even if the attackers are centrally controlled. In addition, we give an algorithm for computing Nash equilibria. Our algorithm requires exponential time in the worst case, but it is polynomial-time for λ constantly close to 1 or n. For the special case of tree networks, we further refine our characterization, which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and, if so, computing a defense-optimal Nash equilibrium. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions close to optimal for general graphs.
Finally, we provide asymptotically (almost) tight bounds for the Price of Defense for any λ; this is the worst equilibrium defense ratio over all graphs.


Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 304
Author(s):  
Florin Manea

In this paper we propose and analyse from the computational complexity point of view several new variants of nondeterministic Turing machines. In the first such variant, a machine accepts a given input word if and only if one of its shortest possible computations on that word is accepting; on the other hand, the machine rejects the input word when all the shortest computations performed by the machine on that word are rejecting. We are able to show that the class of languages decided in polynomial time by such machines is P^NP[log]. When we consider machines that decide a word according to the decision taken by the lexicographically first shortest computation, we obtain a new characterization of P^NP. A series of other ways of deciding a language with respect to the shortest computations of a Turing machine are also discussed.

