Thresholds and Expectation-Thresholds of Monotone Properties with Small Minterms

10.37236/4769 ◽  
2015 ◽  
Vol 22 (2) ◽  
Author(s):  
Ehud Friedgut ◽  
Jeff Kahn ◽  
Clara Shikhelman

Let $N$ be a finite set, let $p \in (0,1)$, and let $N_p$ denote a random binomial subset of $N$, where every element of $N$ belongs to the subset independently with probability $p$. This defines a product measure $\mu_p$ on the power set of $N$, where $\mu_p(\mathcal{A}) := \Pr[N_p \in \mathcal{A}]$ for $\mathcal{A} \subseteq 2^N$. In this paper we study monotone (upward-closed) families $\mathcal{A}$ for which all minimal sets in $\mathcal{A}$ have size at most $k$, for some positive integer $k$. We prove that for such a family $\mu_p(\mathcal{A}) / p^k$ is a decreasing function of $p$, which implies a uniform bound on the coarseness of the thresholds of such families. We also prove a structure theorem which enables one to identify in $\mathcal{A}$ either a substantial subfamily $\mathcal{A}_0$ for which the first moment method gives a good approximation of its measure, or a subfamily which can be well approximated by a family whose minimal sets all have size strictly smaller than $k$. Finally, we relate the (fractional) expectation threshold and the probability threshold of such a family using linear programming duality. This is related to the threshold conjecture of Kahn and Kalai.
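
The setup above can be checked on a toy instance. The following minimal sketch uses a hypothetical family: $N = \{0,\dots,5\}$ with minimal sets $\{0,1\}$ and $\{2,3\}$ (so $k = 2$), computes $\mu_p(\mathcal{A})$ exactly by summing over all subsets, and verifies that $\mu_p(\mathcal{A})/p^k$ decreases in $p$ (here it equals $2 - p^2$ in closed form):

```python
from itertools import combinations

# Hypothetical family: N = {0,...,5}, A = upward closure of {0,1} and {2,3}.
N = range(6)
minimal_sets = [{0, 1}, {2, 3}]
k = max(len(m) for m in minimal_sets)  # largest minimal set, k = 2

def mu(p):
    """Exact product measure of A: sum of P[N_p = S] over all S in A."""
    total = 0.0
    for r in range(len(N) + 1):
        for S in combinations(N, r):
            S = set(S)
            if any(m <= S for m in minimal_sets):  # S contains a minimal set
                total += p ** len(S) * (1 - p) ** (len(N) - len(S))
    return total

# mu(p) / p^k should decrease in p; for this family it equals 2 - p^2.
ratios = [mu(p) / p ** k for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```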

Author(s):  
LEV V. UTKIN ◽  
NATALIA V. SIMANOVA

An extension of the DS/AHP method is proposed in the paper. It takes into account the fact that the multi-criteria decision problem might have several levels of criteria. Moreover, it is assumed that expert judgments concerning the criteria are imprecise and incomplete. The proposed extension also uses groups of experts or decision makers for comparing decision alternatives and criteria. However, it does not require assigning favorability values for groups of decision alternatives and criteria. The computation procedure for processing and aggregating the incomplete information about criteria and decision alternatives is reduced to solving a finite set of linear programming problems. Numerical examples illustrate the proposed approach in detail.
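
The reduction to linear programming can be illustrated on the smallest possible case. In this hypothetical sketch (not the paper's procedure), an imprecise judgment only bounds the first criterion's weight, $w_1 \in [0.3, 0.6]$ with $w_2 = 1 - w_1$; since an alternative's expected score is linear in $w_1$, the LP optima are attained at the interval endpoints:

```python
# Hypothetical two-criteria example: expert judgments bound only w1,
# with w1 in [w1_lo, w1_hi] and w2 = 1 - w1. The score is linear in w1,
# so the exact LP bounds are attained at the endpoints of the interval.
def score_bounds(s1, s2, w1_lo, w1_hi):
    """Min and max of w1*s1 + (1 - w1)*s2 over w1 in [w1_lo, w1_hi]."""
    vals = [w1 * s1 + (1 - w1) * s2 for w1 in (w1_lo, w1_hi)]
    return min(vals), max(vals)

# Illustrative criterion scores s1 = 0.8, s2 = 0.5 for one alternative.
lo, hi = score_bounds(s1=0.8, s2=0.5, w1_lo=0.3, w1_hi=0.6)
# lo = 0.3*0.8 + 0.7*0.5 = 0.59 ; hi = 0.6*0.8 + 0.4*0.5 = 0.68
```

With several criteria levels and incomplete pairwise comparisons, the feasible weight set has more vertices and a general-purpose LP solver takes over, but the structure of the computation is the same.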


2013 ◽  
Vol 24 (06) ◽  
pp. 899-912 ◽  
Author(s):  
GUANGYAN ZHOU ◽  
ZONGSHENG GAO

The random (2 + p)-SAT model was proposed [18] to study the possible relation between the “order” of phase transitions and computational complexity. It was also claimed that there exists pc > 0 such that for p < pc the random (2 + p)-SAT instance behaves like 2-SAT. Later, Achlioptas et al. [3] obtained the first rigorous bounds 0.4 ≤ pc ≤ 0.695, using the first moment method and the simple Unit-Clause algorithm. In this paper, we optimize the local maximality condition on the truth assignments counted when applying the first moment method. We prove that the phase transition point of the clauses-to-variables ratio r (which depends on p) can be improved, and we show that the upper bound on pc can be reduced to 0.6846. This implies that, for any constant λ < 1, a random (2 + p)-SAT formula with λn 2-clauses and 2.17n 3-clauses is almost surely unsatisfiable.
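
The plain first-moment calculation underlying such bounds is easy to sketch. For a random formula with λn 2-clauses and μn 3-clauses, the expected number of satisfying assignments is (2·(3/4)^λ·(7/8)^μ)^n, and Markov's inequality gives almost-sure unsatisfiability whenever the base is below 1. The parameters below are illustrative; note this unoptimized bound is strictly weaker than the paper's version, which counts only locally maximal assignments:

```python
# Base of E[# satisfying assignments] = base**n for a random formula
# with lam*n 2-clauses and mu*n 3-clauses: a random assignment satisfies
# a random 2-clause with prob. 3/4 and a random 3-clause with prob. 7/8.
def exp_sat_base(lam, mu):
    return 2 * (3 / 4) ** lam * (7 / 8) ** mu

# base < 1  =>  E -> 0  =>  a.a.s. unsatisfiable (first moment method).
# Pure 3-SAT (lam = 0): the bound kicks in at mu = ln 2 / ln(8/7) ~ 5.19.
assert exp_sat_base(0.0, 5.3) < 1
assert exp_sat_base(0.0, 5.0) > 1
```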


Networks ◽  
2008 ◽  
Vol 52 (4) ◽  
pp. 299-306 ◽  
Author(s):  
Charles J. Colbourn ◽  
Gaetano Quattrocchi ◽  
Violet R. Syrotiuk

Author(s):  
Hao Qiu ◽  
Gaoming Huang ◽  
Jun Gao

Tracking multiple objects with multiple sensors is widely recognized to be much more complex than the single-sensor scenario. This contribution proposes a computationally tractable multi-sensor multi-target tracker. Based on Bayes' rule and a multi-sensor observation model, a new multi-sensor corrector is derived. To lower the complexity of the update operation, a parallel track-to-measurement association strategy is applied to the corrector. A hypothesis truncation scheme, along with a first-moment approximation of the multi-target density, is also employed to improve tracking efficiency. The tracker is applied to a two-sensor scenario. Experimental results validate the advantages of the proposed method compared to the standard single-sensor δ-generalized labeled multi-Bernoulli filter and the iterated-corrector probability hypothesis density filter.
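
The iterated-corrector idea that the proposed tracker is compared against can be sketched in the simplest possible setting: a scalar Gaussian state with each sensor's measurement absorbed in turn by a standard Kalman correction. All numbers below are illustrative, and this is a single-target caricature, not the paper's multi-Bernoulli corrector:

```python
# Minimal 1D sketch of sequential (iterated-corrector style) updates:
# each sensor's Gaussian measurement is folded in by one Kalman correction.
def kalman_update(mean, var, z, r):
    """One scalar Kalman correction with measurement z, noise variance r."""
    gain = var / (var + r)
    return mean + gain * (z - mean), (1 - gain) * var

mean, var = 0.0, 4.0                     # prior on target position (illustrative)
for z, r in [(1.2, 1.0), (0.9, 0.5)]:    # two sensors' measurements
    mean, var = kalman_update(mean, var, z, r)
# Each correction shrinks the posterior variance; the final estimate
# fuses both sensors' information.
assert var < 4.0
```

A multi-sensor corrector derived jointly from both observation models, as in the paper, avoids the order-dependence this sequential scheme introduces.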


Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1215-1229 ◽  
Author(s):  
Claude Safon ◽  
Guy Vasseur ◽  
Michel Cuer

An approach is presented for solving the inverse gravity problem in the presence of various constraints, such as bounds on density. This approach takes the nonuniqueness of the solution into account: for a finite set of measurements, the region studied is divided into a large number of rectangular prisms of unknown density. The set of all solutions of this underdetermined problem may be described through various convex diagrams of moments; plots of these moments give bounds on physical parameters such as the partial and total mass or the position of the center of mass. Numerical solutions are obtained using linear programming algorithms. Particular solutions, such as the so-called ideal body, may also readily be obtained using this technique. Only two-dimensional cylindrical structures are considered, but application of the technique to three-dimensional bodies is straightforward.
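
The LP structure of such bounds can be sketched in a stripped-down case. In this hypothetical example a single gravity measurement d is a linear functional Σ g[i]·ρ[i] of the prism densities (unit prism volumes, all g[i] > 0, densities bounded by 0 ≤ ρ[i] ≤ ρ_max); maximizing total mass subject to that one equality is a one-constraint LP, solvable greedily in continuous-knapsack fashion rather than with a general solver:

```python
# Hypothetical one-measurement sketch: maximize total mass sum(rho)
# subject to sum(g[i] * rho[i]) == d and 0 <= rho[i] <= rho_max,
# assuming unit prism volumes and g[i] > 0. Cells that contribute least
# to the measurement per unit density are filled first (greedy LP optimum).
def max_total_mass(g, d, rho_max):
    rho = [0.0] * len(g)
    remaining = d
    for i in sorted(range(len(g)), key=lambda i: g[i]):  # cheapest cells first
        rho[i] = min(rho_max, remaining / g[i])
        remaining -= g[i] * rho[i]
    if remaining > 1e-9:
        raise ValueError("measurement not attainable within density bounds")
    return sum(rho), rho

mass, rho = max_total_mass(g=[1.0, 2.0, 4.0], d=3.0, rho_max=1.0)
# rho = [1.0, 1.0, 0.0], mass = 2.0
```

With many prisms, several measurements, and moment objectives such as the center of mass, the same problem becomes a general LP, which is where the linear programming algorithms mentioned above come in.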

