3

In the standard proof that Subset-Sum is (weakly) NP-complete, one reduces Vertex Cover to Subset-Sum using suitably constructed numbers with O(m+n) digits (where m is the number of edges and n the number of vertices). But how can we speak of a polynomial-time reduction if we generate numbers whose values are exponential?

I guess this is the key to why Vertex Cover is strongly NP-complete while Subset-Sum is only weakly NP-complete. But I don't see why it is in fact a polynomial-time reduction.

user1742364
  • 642
  • 4
  • 12

1 Answer

8

The key is indeed that the numbers are expressed in binary. As you observe, the numbers can have values that are exponential in the size of the Vertex Cover instance. If the reduction "wrote them out" in unary when computing the Subset-Sum instance, this would take exponential time. However, in binary, writing down the bits of each number (i.e. computing its representation in the Subset-Sum instance) takes only polynomial time, since an exponentially large value has only polynomially many bits.
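For concreteness, here is a sketch of one common form of the reduction (the exact digit layout varies between textbooks, so treat the details as illustrative). Working in base 4, each number has m+1 digits, i.e. O(m) bits, so writing the whole Subset-Sum instance down is polynomial in n and m even though the numeric values are exponential:

```python
def reduce_vc_to_subset_sum(n, edges, k):
    """Map a Vertex Cover instance (n vertices 0..n-1, edge list,
    cover size k) to a Subset-Sum instance (numbers, target).
    Each number has m+1 base-4 digits, so its binary length is O(m)."""
    m = len(edges)
    # One number per vertex: a leading digit 1 in position m,
    # plus a digit 1 in position j for every incident edge e_j.
    vertex_nums = []
    for v in range(n):
        x = 4 ** m
        for j, (u, w) in enumerate(edges):
            if v in (u, w):
                x += 4 ** j
        vertex_nums.append(x)
    # One "slack" number per edge: a single digit 1 in position j.
    edge_nums = [4 ** j for j in range(m)]
    # Target: digit k in the leading position, digit 2 in every edge
    # position (each edge must be covered once or twice, topped up
    # by slack numbers; base 4 prevents carries between digits).
    target = k * 4 ** m + sum(2 * 4 ** j for j in range(m))
    return vertex_nums + edge_nums, target
```

For example, a triangle on 3 vertices has a vertex cover of size 2, and correspondingly a subset of the generated numbers sums to the target.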

Just to emphasise the distinction: it takes only 7 digits to write out 1,000,000 in base ten, so writing it down takes about 7 steps; in unary it would take about a million steps. The same thing happens in the reduction: written in unary, the output would not be polynomial in the size of the graph, but in binary it is, because what we are measuring is the number of steps taken to write it down.
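The digit-count comparison above can be checked directly: positional representations grow logarithmically with the value, while unary length equals the value itself.

```python
# Representation length of 1,000,000 in three encodings.
n = 1_000_000
decimal_len = len(str(n))    # digits in base ten: 7
binary_len = n.bit_length()  # bits in base two: 20
unary_len = n                # one symbol per unit: 1,000,000

print(decimal_len, binary_len, unary_len)  # → 7 20 1000000
```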

Luke Mathieson
  • 18,373
  • 4
  • 60
  • 87