From my understanding, the complexity of the algorithm is O(number of inputs × number of bits per input). The number of digits needed in binary notation is obviously smaller than the number needed in unary notation. Why, then, is the complexity exponential for binary but polynomial for unary?

EDIT: I think the binary version is still faster in practice; unary only appears faster because of the notation. If $x$ units are used in unary, then $n$ has to be less than or equal to $x$, so a running time of $O(n)$ is polynomial in the input size. In the case of binary, $n$ can be as large as $2^x$ for $x$ bits, so the same $O(n)$ running time is exponential in the input size. There might be something wrong in my explanation; feel free to add an answer to correct it.
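For example, $n = 1000$ takes $10$ bits in binary but $1000$ units in unary; a loop of $n$ steps is therefore about $2^b$ steps as a function of the binary input size $b = 10$, but exactly $b$ steps as a function of the unary input size $b = 1000$.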

1 Answer

Consider the following (silly) function cow, which accepts a number $n$:

def cow(n):
    k = 0
    for i in range(n):  # n iterations in total
        k = k + 1       # constant work per iteration
    return k

The loop takes $n$ steps, so the time needed for the function to complete on input $n$ is $\Theta(n)$.

However, the size of the input is not $n$ but rather $\log_2 n$ because computers represent numbers in binary. That is, if $b = \log_2 n$ is the size of the input, then the complexity is $\Theta(2^b)$ (because $n = 2^{\log_2 n}$), which is an exponential algorithm when measured in size of the input $b$. If we measure in terms of $n$ then it is a linear algorithm.
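To make the exponential growth concrete, here is a small timing sketch using the cow function above (the timings themselves are machine-dependent and the range of $b$ is arbitrary; the point is that each extra input bit roughly doubles the running time):

import time

# Each pass below adds one bit to the input of cow, i.e. doubles n;
# the measured time should roughly double as well.
for b in range(18, 24):
    n = 2 ** b
    start = time.perf_counter()
    cow(n)
    elapsed = time.perf_counter() - start
    print(f"b = {b:2d} bits, n = {n:9d}, time = {elapsed:.4f} s")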

If we represented numbers in unary then we would have $b = n$, i.e., it would take $n$ "units" (not "bits" anymore) to represent $n$. In this case the complexity would be $\Theta(b)$, a linear algorithm.
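For comparison, here is a minimal sketch of the unary case, assuming one particular encoding: the number $n$ arrives as a string of $n$ repeated '1' characters (the name cow_unary is just for illustration):

def cow_unary(s):
    # s is the unary encoding of n: a string of n '1's,
    # so the input size b = len(s) equals n itself.
    k = 0
    for unit in s:
        k = k + 1
    return k

# Five units of input, five loop steps: the running time
# is Theta(b), linear in the size b of the input.
assert cow_unary("1" * 5) == 5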

Andrej Bauer