
This answer seems incorrect to me:

Which NP-Complete problem has the fastest known algorithm?

The fastest solution for one NP-Complete problem should be the fastest solution for all NP-Complete problems.

Why?

Can any NP-complete problem be reduced to any other NP-Complete problem in polynomial time?

So a super-optimal exponential-time solution, say $O(1.27^k + nk)$, found for one problem should be applicable to all NP-Complete problems. In other words, the optimal solution is the same for all NP-Complete problems.

3 Answers


A polynomial-time reduction is only guaranteed to preserve the instance size up to a polynomial blow-up (which is implied by the time bound).

As an example, suppose you have two NP-complete problems $A$ and $B$ with $A \le_p B$, that the reduction blows up the instance size from $n$ to $n^2$, and that $B$ admits an algorithm with running time $2^{\sqrt{n}}$. This only induces an algorithm for $A$ with running time $2^{\sqrt{n^2}} = 2^n$.
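
Numerically, the gap grows very quickly. Here is a minimal Python sketch that just evaluates the two hypothetical running-time bounds from the example above (no actual solvers involved):

```python
# Hypothetical running-time bounds from the example above:
# B has a 2^sqrt(m) algorithm, and the assumed reduction maps an
# A-instance of size n to a B-instance of size m = n^2.
from math import sqrt

for n in [25, 100, 400]:
    time_b_direct = 2 ** sqrt(n)       # B's algorithm on an instance of size n
    time_a_via_b  = 2 ** sqrt(n ** 2)  # A via the reduction: 2^sqrt(n^2) = 2^n
    print(f"n={n:4d}  2^sqrt(n)={time_b_direct:.4g}  2^n={time_a_via_b:.4g}")
```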

Steven

Here is a way to see that there could be arbitrarily large differences between the optimal time complexities of different NP-complete problems.

Suppose P = NP. Then any problem in P with at least one accepting input and at least one rejecting input is NP-complete. By the time hierarchy theorem, there are problems that take $\Theta(n)$, $\Theta(n^2)$, $\Theta(n^3)$, ... time to solve and cannot be solved significantly faster. So if P = NP, then there are NP-complete problems that have asymptotic time complexities of all polynomial exponents (ignoring some log factors in the time hierarchy theorem).
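
For reference, the precise statement being used is the deterministic time hierarchy theorem: for time-constructible functions $f$ and $g$,

$$f(n)\log f(n) = o(g(n)) \implies \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(g(n)),$$

which in particular gives $\mathrm{DTIME}(n^k) \subsetneq \mathrm{DTIME}(n^{k+1})$ for every $k \ge 1$; the $\log$ factor is what the parenthetical above refers to.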

Aaron Rotenberg

Let's say the NP-complete problem $A$ can be solved in $O(f(n))$, where $n$ is the problem size, and $B$ can be solved in $O(g(n))$, where $g$ grows much faster than $f$. If we reduce an instance of $B$ to an instance of $A$, the problem size is very unlikely to stay unchanged. Let's say the problem size changes from $n$ to $h(n)$, and the time used for the reduction is negligible.

Then solving the instance of $B$ by reducing it to an instance of $A$ will take $O(f(h(n)))$ time, which may very well be a lot larger than $O(g(n))$. For example, if both $A$ and $B$ can be solved in $O(c^n)$ for (possibly different) fixed constants $c > 1$, and the reduction changes the instance size from $n$ to $n^2$, then the reduction results in a much slower solution.
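
Spelled out (writing $c_A$ and $c_B$ for the two constants): with $f(n) = c_A^n$, $g(n) = c_B^n$, and $h(n) = n^2$, solving $B$ via the reduction costs

$$f(h(n)) = c_A^{n^2} = \left(c_A^{\,n}\right)^n,$$

which eventually dwarfs the direct bound $g(n) = c_B^n$, no matter how close $c_A > 1$ is to $1$.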

So your assumption is completely wrong. $1.27^3 > 2$, so if you have a problem that can be solved in $O(2^n)$ time and you use a reduction to your $O(1.27^n)$ problem that increases the instance size by just a factor of 3, the reduction leads to a slower algorithm.
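
A quick numeric check of that last claim, as a minimal Python sketch (the bounds $2^n$, $1.27^n$ and the factor-3 blow-up are just the hypothetical values from this answer):

```python
# Hypothetical bounds from the example above: B is solvable in 2^n directly,
# A in 1.27^n, and the reduction from B to A triples the instance size.
print(f"1.27^3 = {1.27 ** 3:.4f}")   # ~2.0484 > 2

for n in [10, 50, 100]:
    direct = 2.0 ** n                # solve B directly
    via_a  = 1.27 ** (3 * n)         # reduce to A (size 3n), then run A's 1.27^m algorithm
    print(f"n={n:3d}  2^n={direct:.3g}  1.27^(3n)={via_a:.3g}  ratio={via_a / direct:.3g}")
```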

gnasher729