
I want to know the time complexity of calculating ${n \choose k}$ specifically when it is defined as

$$ {n \choose k} = \frac{n!}{k!(n-k)!}. $$

If the factorial function is the recursive $O(n)$ implementation:

def factorial(n):
    # O(n) recursive calls, each performing one multiplication
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

then would the time complexity be $O(n + k + (n-k)) = O(2n)$? This seems counterintuitive: ${n \choose k}$ has two inputs, yet $O(2n)$ appears to depend on only one of them. Still, $O(2n)$ seems better justified to me, since I can derive it directly as $O(n + k + (n-k)) = O(2n)$. I understand that $O(2n) = O(n)$; that is not what I am asking about.
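
Concretely, I mean something like this hypothetical helper, reusing the factorial above:

def binomial(n, k):
    # illustrative sketch: one call chain of length n, one of length k,
    # and one of length n - k, i.e. O(n + k + (n-k)) recursive calls
    return factorial(n) // (factorial(k) * factorial(n - k))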

VJZ

1 Answer


I'll assume $k \le n/2$ for simplicity (otherwise, replace $k$ with $n-k$).

One way to calculate it is via

$${n \choose k} = {n (n-1)(n-2) \cdots (n-k+1) \over k!}.$$

This can be calculated using about $2k$ multiplications and divisions. So, if we counted each multiplication/division/addition as $O(1)$ time, this would be $O(k)$ time.
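
Here is a rough sketch of that approach in Python (the name binomial_iter and the exact structure are just for illustration):

def binomial_iter(n, k):
    # illustrative sketch: about 2k multiplications/divisions in total
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)          # use the symmetry C(n, k) = C(n, n-k)
    result = 1
    for i in range(1, k + 1):
        # after this step, result == C(n, i), so the division is exact
        result = result * (n - i + 1) // i
    return result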

However, that is misleading. The size of the numbers grows dramatically, so if you want to compute this exactly (with exact integer arithmetic), you need to operate on very large numbers, and each operation takes more than $O(1)$ time. In particular, the intermediate values are products of at most $k$ factors, each at most $n$, so they can grow to about $k \lg n$ bits, and a single multiplication or division of numbers that size might take $O((k \lg n)^2)$ time [*]. So, the running time might be something like $O(k \cdot (k \lg n)^2) = O(k^3 \log^2 n)$ bit operations.

If you are a bit cleverer about the order in which you do the multiplications and divisions (multiplying small numbers first, using a binary tree structure to minimize the number of operations on very large numbers), you can get this down to something like $O(k^2 \log^2 n)$ bit operations.
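
As a rough illustration of the binary-tree idea (the helpers product_tree and binomial_tree are just for illustration, not a tuned implementation):

def product_tree(factors):
    # multiply pairwise in a balanced binary tree, so the really large
    # partial products are only combined near the root
    if len(factors) == 1:
        return factors[0]
    mid = len(factors) // 2
    return product_tree(factors[:mid]) * product_tree(factors[mid:])

def binomial_tree(n, k):
    k = min(k, n - k)
    if k == 0:
        return 1
    numerator = product_tree([n - i for i in range(k)])   # n (n-1) ... (n-k+1)
    denominator = product_tree(list(range(1, k + 1)))     # k!
    return numerator // denominator                       # exact division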


Footnote *: I am ignoring sub-quadratic multiplication algorithms. There are asymptotically faster algorithms for very large numbers, but they tend to pay off only when the numbers are extremely large, so for simplicity of analysis I assume schoolbook multiplication.

D.W.