
From the definition of the $\Theta$-notation, $$f(n)=\Theta(g(n)) \implies \exists n_0,\ \exists c_1,c_2\gt 0,\ \forall n\gt n_0,\ c_1\cdot g(n)\le f(n)\le c_2\cdot g(n)$$

We can see that the inequality holds for all $n\gt n_0$, and hence $c_2\cdot g(n)$ is an upper bound on the function. Therefore, $f(n)=O(g(n))$ as well.
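For example, with $f(n)=3n\log n+5n$ and $g(n)=n\log n$ (taking $\log=\log_2$), we have $$3\,n\log n \;\le\; 3n\log n+5n \;\le\; 8\,n\log n \qquad\text{for all } n\ge 2,$$ so the definition is satisfied with $c_1=3$, $c_2=8$ and $n_0=2$.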

If we take quicksort as an example, sources say that its $\Theta$-complexity is $\Theta(n\log n)$, but its $O$-complexity is given as $O(n^2)$.

If the given $O$-complexity is correct, then $f(n)\le c_2\cdot n\log n$ will not always hold. So in this case, how is $\Theta$-complexity different from $O$-complexity?

integrator

3 Answers


If the given $O$-complexity is correct, then $f(n)\le c_2\cdot n\log n$ will not always hold. So in this case, how is $\Theta$-complexity different from $O$-complexity?

Yuval has covered the quicksort aspects of your question but you have a couple of fundamental misunderstandings about asymptotics.

There is no such thing as "$\Theta$-complexity" or "$O$-complexity". Asymptotic notations such as $\Theta$ and $O$ are simply ways of describing the growth rate of functions. Those functions could be used to measure anything at all (including nothing at all). The complexity is the function (or, rather, the running time is measured by the function); the asymptotic notation is just a way of describing that function's behaviour. As an analogy, suppose that I tell you that my height is "less than two metres": you wouldn't say that my "less-than height is two metres". Likewise, if I say that I weigh about 70kg, you wouldn't say that my "about weight is 70kg." "Less than" and "about" are just ways of describing the numbers that measure those physical properties.

The second misconception is that, for a given function $f$, there is some unique function $g$ such that it's correct to write $f=O(g)$. Remember that $f=O(g)$ means (in a way that the formal definition makes precise) "$f$ is kinda less than $g$." To continue the analogy of the previous paragraph, it's completely true to say that my height is less than 2m and that my height is less than 102m. Indeed, there's also no contradiction in saying "My height is about 1.8m and less than 2m", which is directly analogous to "The running time is $\Theta(n\log n)$ and $O(n^2)$."
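To make the "no contradiction" part concrete: anything that is $O(n\log n)$ is automatically $O(n^2)$, since $\log n\le n$ for all $n\ge 1$, so $$f(n)\le c\,n\log n\le c\cdot n\cdot n=c\,n^2\qquad\text{for all }n\ge\max(n_0,1).$$ The $O(n^2)$ bound is true; it's just not the tightest one available.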

David Richerby

The worst-case running time of quicksort is $\Theta(n^2)$, and therefore quicksort always runs in time $O(n^2)$, and this bound is tight (that is, best possible).

The average-case running time of quicksort is $\Theta(n\log n)$.

The best-case running time of quicksort is also $\Theta(n\log n)$, and therefore quicksort always runs in time $\Omega(n\log n)$, and this bound is tight.
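A standard way to see where these bounds come from is through the recurrences for the two extreme pivot behaviours: when every pivot is an extreme element (for instance, a sorted array with a first-element pivot rule), one recursive call has size $n-1$; when every pivot splits the array evenly, both calls have size about $n/2$: $$T_{\text{worst}}(n)=T_{\text{worst}}(n-1)+\Theta(n)=\Theta(n^2),\qquad T_{\text{best}}(n)=2\,T_{\text{best}}(n/2)+\Theta(n)=\Theta(n\log n).$$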

I suggest taking a look at this question for more information.

Yuval Filmus

There is a bit of confusion. The number of comparisons when using quicksort to sort $n$ elements isn't a function of $n$; it's a function of $n$ and the unsorted array. Now if you ask for "the largest number of comparisons for any array of $n$ elements", "the smallest number of comparisons for any array of $n$ elements", or "the average number of comparisons over all arrays of size $n$", each of those is a function of $n$.

The number of comparisons is not $\Theta$-anything because it can vary from about $n\log n$ to about $n^2/2$. It is $O(n^2)$ because it is always smaller than some constant times $n^2$. The largest number of comparisons for any array of $n$ elements is $\Theta(n^2)$. It would not be $\Theta(n^2)$ if the worst-case number of comparisons fluctuated wildly with $n$.

The average number of comparisons over all arrays of size $n$ is simultaneously $\Theta(n\log n)$, $O(n\log n)$, $O(n^2)$ and $O(n!)$ (big-$O$ allows you to use an unnecessarily large bound, while $\Theta$ doesn't).
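Here is a minimal sketch that makes this concrete, assuming a simple first-element-pivot quicksort (exact counts depend on the pivot rule): the same $n$ gives very different comparison counts for different inputs.

```python
import random

def quicksort_comparisons(data):
    """Sort a copy of `data` with first-element-pivot quicksort
    and return the number of element comparisons performed."""
    a = list(data)
    count = 0

    def sort(lo, hi):  # sorts the half-open range a[lo:hi]
        nonlocal count
        if hi - lo <= 1:
            return
        pivot = a[lo]
        i = lo + 1  # next slot for an element smaller than the pivot
        for j in range(lo + 1, hi):
            count += 1  # one comparison per scanned element
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]  # put the pivot in place
        sort(lo, i - 1)
        sort(i, hi)

    sort(0, len(a))
    return count

n = 500
print(quicksort_comparisons(random.sample(range(n), n)))  # roughly 2n ln n, around 6,000
print(quicksort_comparisons(range(n)))                    # exactly n(n-1)/2 = 124,750
```

Both runs sort $500$ elements, yet the counts differ by a factor of roughly $20$, which is why "the number of comparisons" by itself is not $\Theta$ of anything: only the worst-case, best-case, or average-case count, each a genuine function of $n$, can be given a $\Theta$ bound.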

gnasher729