
I was thinking of the following approach for solving, in polynomial time, a problem that is believed today to be NP-hard, assuming the following:

  • There exists a problem $X$, believed today to be NP-hard, whose whole input space can be divided into a finite set of groups such that each group has a corresponding "single-threaded" polynomial-time algorithm (one that runs in polynomial time only on inputs from its designated group, with no complexity guarantees on inputs from other groups).

Since we don't necessarily have a criterion for deciding, in polynomial time, which group a given input belongs to, a possible approach to solving $X$ in polynomial time is to run all the algorithms in parallel, wait for the first one to finish, and abort the execution of all the others.

Since we have at least one polynomial-time algorithm for every possible input of $X$ (by the conditions above), and we execute all the algorithms at once, the problem would be solved in polynomial time for every input, and so every problem believed today to be NP-hard could be solved in polynomial time after first transforming it into an instance of $X$.
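The "run everything and take the first finisher" strategy described above can be sketched concretely. This is only an illustration of the question's idea, not an established construction; the `race` helper and the algorithm callables are hypothetical names:

```python
import concurrent.futures

def race(algorithms, instance):
    """Run every candidate algorithm on the same instance and return
    the answer of whichever finishes first. Note: cancel() is best
    effort -- a Python thread that has already started keeps running
    until it finishes on its own."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(algorithms)) as pool:
        futures = [pool.submit(alg, instance) for alg in algorithms]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # abort the ones that have not started yet
        return next(iter(done)).result()
```

The total work here is still bounded by the number of algorithms times the running time of the fastest one on the given input, which is the crux both answers below address.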

Is there any research in this direction?

Discrete lizard
ABu

2 Answers


$X$ doesn't exist (unless $P = NP$ in general), because your description is self-contradictory. You claim it is NP-hard, yet its input domain can be split into a finite number of disjoint poly-time solvable problems. That places it in $P$.

The idea of executing these algorithms in parallel adds nothing. Even on a single machine you can run N programs 'simultaneously' by distributing computation time in round-robin fashion, which adds an overhead factor of N to each program. Set N equal to your number of algorithms (which is, by your description, a finite constant), and you end up with a poly-time algorithm.
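The round-robin simulation can be sketched directly. In the toy version below (hypothetical helper names; each candidate algorithm is modelled as a generator that yields `None` per unit of work and yields its answer when it finishes), a single thread gives each candidate one step per round:

```python
def solve_round_robin(algorithms, instance):
    """Interleave N single-threaded algorithms, one step each per round,
    and return the first answer any of them produces. If the fastest
    candidate takes T steps on this instance, the loop performs at most
    N * T steps in total -- a constant-factor overhead when N is fixed."""
    runners = [alg(instance) for alg in algorithms]
    while runners:
        for r in list(runners):
            try:
                step = next(r)  # advance this candidate by one unit of work
            except StopIteration:
                runners.remove(r)  # this candidate terminated without an answer
                continue
            if step is not None:
                return step  # first answer wins, as in the parallel framing
    return None  # no candidate produced an answer
```

The first algorithm to halt with an answer wins, exactly as in the parallel formulation, but everything runs on one thread.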

orlp

No, I'm not aware of any research in this direction, probably because it is not very promising as a way to solve NP-hard problems (or other hard problems) efficiently. The hard part would be finding such a decomposition; I don't know of any reason to think that is easier than solving the entire problem.

If the problem had the structure you described, with only polynomially many groups, then the problem could be solved in polynomial time. Anything you can do in parallel, you can do with a single thread by interleaving execution. After all, many CPUs are single-threaded, yet can execute many processes (seemingly in parallel, though actually interleaved).

You talk about problems "believed to be NP-hard", but most of the NP-hard problems we encounter are not just "believed" to be NP-hard; they are known to be NP-hard, and we can prove it.

My answer does not change appreciably if you replace "believed to be NP-hard" with "believed to be hard".

D.W.