Is there a difference between the computational complexity and computational cost of an algorithm?
2 Answers
To answer your question as stated: "computational complexity" typically refers to the $\Theta$-class of a certain (often implicit) measure of computational cost.
That said, I prefer to use the terms like this:
Use "complexity" when talking about problems. For instance, you can say "sorting has $\Theta(n \log n)$ worst-case time complexity (in the comparison model)".
Use "costs" when talking about algorithms. You would say, "Mergesort has a worst-case running-time cost in $\Theta(n \log n)$ (under the RAM model)".
This is consistent with common use, that is, every expert will understand what you're saying, but it avoids using the term "complexity" for different things.
Rationale
1. Complexity theory and analysis of algorithms (AofA) are distinct fields with different goals and techniques. It's not helpful to use terminology that muddles the two together.

   Side note: teaching only the complexity-theory side of things makes large parts of the AofA literature inaccessible to computer science graduates, which I think is a shame. See the work of Flajolet and Sedgewick if you're interested in these things.
"Cost", other than "complexity", is used for precise measures like, say, "the number of comparisons" often analysed in sorting. Such a cost measure is (given an algorithm and a machine model) a well-defined function on the inputs (other than "time") and can be analysed rigorously.
3. Every algorithm has many cost measures with different asymptotic behaviours; in sorting, for instance, the number of comparisons, the number of swaps, and many more (a small sketch follows this list). Therefore, asking for "the complexity of the algorithm" is an oversimplification, and only meaningful under certain assumptions/conventions.
4. The analysis of cost measures can yield testable hypotheses, if it's more precise than Landau bounds. "Complexity" results are not testable.
"Complexity" of an algorithm can be rigorously defined in terms of cost measures, if one so desires. The other way around does not work.
   For instance, an algorithm's "(time) complexity" is usually taken to mean the $\Theta$-class of a dominant, additive cost measure defined by a function on basic operations. However, I consider this practice confusing and thus harmful (cf. item 1), and prefer to say "[cost measure] is in $\Theta(\_)$".
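To make items 2 and 3 concrete, here is a minimal Python sketch; the `insertion_sort` function below is a hypothetical illustration, not something from the question. It tracks two cost measures, comparisons and swaps, for the same algorithm on the same input. Each counter is a well-defined function of the input, whereas measured wall-clock time would vary from machine to machine.

```python
# A hypothetical illustration: one algorithm, several cost measures.

def insertion_sort(a):
    """Sort the list a in place, counting comparisons and swaps separately."""
    comparisons = 0
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1          # one key comparison per loop iteration
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                swaps += 1            # one exchange of adjacent elements
                j -= 1
            else:
                break
    return comparisons, swaps

data = [5, 2, 4, 6, 1, 3]
comps, swps = insertion_sort(data)
print(data)         # [1, 2, 3, 4, 5, 6]
print(comps, swps)  # 12 comparisons and 9 swaps on this particular input
```

Both counts are in $\Theta(n^2)$ in the worst case, yet they are distinct cost measures; saying only "the complexity of insertion sort" hides which one is meant.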
My approach is as follows.
Computational complexity is an abstract notion with a precise mathematical definition, and it is the subject of an entire field of scientific research.
"Computational cost" is alternatively used for "computational complexity", though in my opinion I would not use the term "computational cost" in the formal meaning instead of "computational complexity".
The most significant difference between "complexity" and "cost" is that "complexity" is a precise mathematical measure, expressed using big-O notation. We usually say "space complexity" or "time complexity", where both "space" and "time" are abstract notions: by "space" we might mean RAM or hard-disk storage, and by "time" we might mean milliseconds, seconds, or even hours. "Cost", on the other hand, is a real-life, physical, concrete measure. For example, we say "it costs 2 gigs" or "it costs 2 TB of disk space".
Let me support my point with an example. Suppose we have come up with two different algorithms solving the same problem, and we have estimated the exact number of operations for both algorithms, say $n^2 + 100n$ and $0.2n^2$. We do not need to implement either algorithm to conclude that their complexities are equal, namely $O(n^2)$. From the point of view of computational complexity, these algorithms are equally efficient. However, to measure the cost of these algorithms we need to implement them, run them on a computer, and measure how much time they take (in milliseconds, seconds, minutes...). This is cost, and we will obviously get different time measurements for the $n^2 + 100n$ and $0.2n^2$ algorithms. Thus, by comparing their costs, we may choose the better algorithm for practical use.
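As a rough illustration of that last point, here is a minimal Python sketch. The functions `algo_a` and `algo_b` are hypothetical stand-ins that perform roughly $n^2 + 100n$ and $0.2n^2$ basic operations, and the wall-clock cost is measured with `time.perf_counter`; both are in $O(n^2)$, yet their measured costs differ.

```python
import time

# Hypothetical stand-ins: both are O(n^2), but they perform roughly
# n^2 + 100n and 0.2 * n^2 "basic operations" respectively.

def algo_a(n):
    ops = 0
    for _ in range(n * n + 100 * n):
        ops += 1
    return ops

def algo_b(n):
    ops = 0
    for _ in range(n * n // 5):
        ops += 1
    return ops

def measure(f, n):
    """Return the wall-clock running time of f(n) in seconds."""
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start

n = 2000
print(f"algo_a: {measure(algo_a, n):.3f} s")   # same O(n^2) complexity ...
print(f"algo_b: {measure(algo_b, n):.3f} s")   # ... but a visibly smaller cost
```

The exact numbers depend on the machine and the language, which is precisely why such measurements are statements about cost rather than about complexity.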