From what I can understand, Deolalikar’s main innovation seems to be to use some concepts from statistical physics and finite model theory and tie them to the P vs NP question. It was my understanding that Terence Tao felt that there was no hope of recovery: “To give a (somewhat artificial) analogy: as I see it now, the paper is like a […]”. Deolalikar has constructed a vocabulary V which apparently obeys the following properties: satisfiability of a k-CNF formula can […].
By arranging the search tree with the solution leaves clustered together, a deterministic depth-first search can find the first solution in logarithmic time, and list the rest in near-constant time per solution. And then your prospective algorithm will ask me: […]. I think the fault was that they claimed it worked like the other estimates in a previous section, which was not true. Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: […].
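The clustered-solutions depth-first search mentioned above can be made concrete. The sketch below is my own illustration, not code from the discussion: `extends_to_solution` is a hypothetical pruning oracle answering whether any solution lies below a given prefix, and the interval oracle is an assumed toy model in which the solution leaves form one contiguous cluster.

```python
def dfs_first_solution(n, extends_to_solution, prefix=""):
    """Depth-first search over the binary tree of length-n 0/1
    strings, pruning any subtree that cannot contain a solution.
    When the solution leaves form one contiguous cluster, the
    search walks essentially a single root-to-leaf path, so the
    first solution is found in O(n) steps -- logarithmic in the
    2**n leaves."""
    if not extends_to_solution(prefix):
        return None                     # prune: nothing below here
    if len(prefix) == n:
        return prefix                   # a solution leaf
    return (dfs_first_solution(n, extends_to_solution, prefix + "0")
            or dfs_first_solution(n, extends_to_solution, prefix + "1"))

def make_interval_oracle(n, lo, hi):
    """Toy pruning oracle (an assumption for this sketch): the
    solutions are exactly the n-bit strings whose integer value
    lies in [lo, hi] -- one contiguous cluster of leaves."""
    def extends(prefix):
        pad = n - len(prefix)
        low = (int(prefix, 2) << pad) if prefix else 0
        high = low + (1 << pad) - 1
        return not (high < lo or low > hi)
    return extends
```

Because the DFS tries the `0` branch first, it returns the lexicographically (here, numerically) smallest solution after visiting only the nodes along one path plus their pruned siblings.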
NP-hard problems are those at least as hard as NP problems, i.e., […]. On a rough reading one is perhaps deceived into thinking that this is OK, but it is not, not least because he never defined his number of parameters.
The challenge is to extract the following from this information: take the case of clustering in 2SAT, for instance. One can just use oblivious machines, at the cost of a multiplicative log T factor, which is OK for polynomial T.
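For concreteness on the 2SAT case: 2SAT itself is solvable in polynomial time by the standard implication-graph / strongly-connected-components method. This is textbook material, not anything specific to the paper; the sketch below uses Kosaraju's SCC algorithm.

```python
def solve_2sat(n, clauses):
    """Standard implication-graph 2SAT solver (Kosaraju SCC).
    Variables are 1..n; a clause is a pair of nonzero ints,
    negative meaning a negated literal.  Returns a satisfying
    assignment {var: bool} or None if unsatisfiable."""
    def node(lit):                      # literal -> graph node index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    def neg(x):                         # node of the opposite literal
        return x ^ 1

    N = 2 * n
    adj = [[] for _ in range(N)]        # implication graph
    radj = [[] for _ in range(N)]       # reversed graph
    for a, b in clauses:                # (a or b) == (!a -> b) and (!b -> a)
        adj[neg(node(a))].append(node(b))
        adj[neg(node(b))].append(node(a))
        radj[node(b)].append(neg(node(a)))
        radj[node(a)].append(neg(node(b)))

    # Pass 1: iterative DFS on adj, recording finish order.
    seen = [False] * N
    order = []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()

    # Pass 2: DFS on radj in reverse finish order; components come
    # out numbered in topological order of the condensation.
    comp = [-1] * N
    c = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1

    out = {}
    for v in range(n):
        t, f = comp[2 * v], comp[2 * v + 1]
        if t == f:
            return None                 # x and !x equivalent: unsat
        out[v + 1] = t > f              # pick the topologically later side
    return out
```

The solution space of a 2SAT instance, in which clustering is studied, is exactly the set of assignments this solver accepts.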
The parameters correspond to cliques or potentials that one can define arbitrarily.
Fatal Flaws in Deolalikar’s Proof?
This shows that A is not all of P, and therefore his generalization step in (1), going from simple structure for A to simple structure for all problems in P, fails. I hope we helped, I hope we were always positive, and I hope the work here in trying to resolve this exciting event has been a positive contribution. Is the proof correct?
Perhaps he was hoping something of the kind. Over the last few years, several research areas have witnessed important progress through the unexpected collaboration of statistical physicists, computer scientists, and information theorists. The kind of property suggested in the paper as a candidate would be related to the statistical-physics landscape of the solution space, which is intriguing to most complexity theorists. But that definitely does not imply that each stage of the induction is order-independent.
I haven’t seen anything much since the posts in August. I cannot agree with you more on the LFP front, though. Today I wish to talk about something other than the P ≠ NP proof.
This question can obviously be raised in a wide range of complexity contexts, and surely […]? Just pick up the phone and call him. Solutions to the original problem have preimage sets of various sizes under the projection, and hence you cannot perform this trick to get a uniform distribution.
Consequently, polynomial-time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution. Is it in P? The question is not whether this approach is promising. Does he refer to something which is known, or did he leave the point undefined, as I suspect?
The other problem is that you restrict your attention to monadic fixed points. The fact that the all-zeroes assignment happens to be included in the distribution looks totally irrelevant to the complexity of specifying the distribution. First, I pick a random k-SAT formula.
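The “pick a random k-SAT formula” step can be made concrete with the conventional random model (m clauses over n variables, each clause on k distinct variables with signs flipped uniformly at random). This is a generic sketch; `random_ksat` and `satisfies` are names I am introducing, not anything from the paper. The all-zeroes assignment discussed above is just one point of the assignment space that the distribution may happen to include.

```python
import random

def random_ksat(n, m, k, seed=None):
    """Sample a formula in the conventional random k-SAT model:
    m clauses over variables 1..n, each clause choosing k distinct
    variables, each negated independently with probability 1/2."""
    rng = random.Random(seed)
    formula = []
    for _ in range(m):
        chosen = rng.sample(range(1, n + 1), k)
        formula.append(tuple(v if rng.random() < 0.5 else -v
                             for v in chosen))
    return formula

def satisfies(formula, assignment):
    """assignment maps each variable to a bool; a clause is
    satisfied if at least one of its literals is true."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)
```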
Someone stranded atop a mountain is in mortal risk, and it is ethically imperative for rescuers to make an effort to reach him. Hanf–Gaifman locality also does not apply if we work with k-ary LFP directly, without the reduction to the monadic case.
If so, what sort of impact? Dear Vinay Deolalikar, thank you very much for sharing your paper with me. T is less than poly(n) and can be encoded by O(log n) bits. From these, Claim 1 would follow from the Immerman–Vardi Theorem (we need a successor relation or linear order, of course). However, this is only true in the original LFP.
Of course I expect that carefully programming it would reveal errors.
Here the answer is that if we work with k-ary LFP directly, then the Gaifman graph changes after each iteration of the LFP computation, because we need to take care of the new tuples that are added to the k-ary inductively defined relation.
But they do appear to give an obstacle to the general proof method. My intuition works the following way: the key question is how he goes from FO(LFP) and k-SAT to these graphical models and compares them in terms of the size of cliques. However, the best known quantum algorithm for this problem, Shor’s algorithm, does run in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.