If you use min() or max() on a constant sized list, even in a loop, is the time complexity O(1)?
12 Answers
That depends what exactly you mean by "constant sized". The time to find the minimum of a list with 917,340 elements is $O(1)$ with a very large constant factor. The time to find the minimum of various lists of different constant sizes is $O(n)$ and likely $\Theta(n)$ where $n$ is the size of each list. Finding the minimum of a list of 917,340 elements takes much longer than finding the minimum of a list of 3 elements.
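To make the constant-factor point concrete, here is a rough timing sketch (the sizes and values are arbitrary illustrations; absolute numbers depend on your machine and Python version):

import timeit

small = [5, 1, 9]                        # 3 elements
large = list(range(917_340, 0, -1))      # 917,340 elements

# Both calls are "O(1)" if the sizes are treated as fixed constants,
# but the hidden constant factors differ by orders of magnitude.
print(timeit.timeit(lambda: min(small), number=100))
print(timeit.timeit(lambda: min(large), number=100))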
I found this quote from the Wikipedia article on time complexity helpful:
The time complexity is generally expressed as a function of the size of the input.
So if the size of the input doesn't vary, for example if every list contains exactly 256 integers, then the time complexity doesn't vary either, and it is therefore O(1). This would be true of any algorithm, whether sorting, searching, etc.
Sure, you could call it O(1) if you want.
So what if you choose to describe it that way? It's still going to iterate over the whole list, so describing it one way or the other doesn't change the real-world Python run time.
It's still something you want to avoid doing repeatedly for the same list, especially if the list isn't tiny. (Especially in a loop that can run many iterations.)
Being O(1) isn't the same thing as cheap, especially if you're "cheating" by taking large but fixed sizes as "constants" instead of part of your complexity calculation.
If anything, you've created an example of why complexity-class analysis is not the same thing as performance estimation, especially for a given finite range of problem sizes. If it feels "wrong" to call it O(1), this is why.
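As a minimal sketch of that advice (the data and loop are invented for illustration), hoisting the call out of the loop avoids re-scanning the same unchanged list on every iteration:

data = [7, 3, 9, 1, 4]          # stands in for the "constant sized" list
stream = range(1_000_000)       # stands in for whatever the loop iterates over

# Re-scans data on every iteration: roughly len(data) * 1_000_000 comparisons.
hits_slow = sum(1 for x in stream if x > max(data))

# Scans data once; each loop iteration is then a single comparison.
data_max = max(data)
hits_fast = sum(1 for x in stream if x > data_max)

assert hits_slow == hits_fast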
Yes
In general, if the time complexity of an algorithm is O(f(X)) where X is a characteristic of the input (such as list size), then if that characteristic is bounded by a constant C, the time complexity will be O(f(C)) = O(1).
This is especially useful with algorithms whose time complexities look like e.g. O(N ^ K). If K can grow with the input this is superexponential, which isn't great, but if you can fix K to be bounded by a small constant, your runtime becomes polynomial in N, which is much better! (The downside is that your algorithm will have to refuse certain inputs, but hey, those were going to take forever anyway.)
This here is the big limitation of complexity analysis: it can only tell you how fast certain functions grow, it will not tell you the absolute time taken. An algorithm that will always take a million years no matter the input is still O(1), after all.
To figure out the specific numbers, and if you'll need to cache the results from min and max, you'll need to measure the actual time your code takes, and decide if it is fast enough for your use case.
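A minimal measurement sketch along those lines (list size and repetition count are arbitrary choices, not taken from the question):

import timeit

values = list(range(1_000_000))   # stand-in for your real data

# Compare 100 repeated min() calls against one call whose result is reused.
repeated = timeit.timeit(lambda: [min(values) for _ in range(100)], number=1)
cached_value = min(values)
reused = timeit.timeit(lambda: [cached_value for _ in range(100)], number=1)

print(f"repeated min(): {repeated:.4f}s, reused result: {reused:.6f}s")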
Usually, $O(1)$.
Yeah, usually it'll be $O(1)$, under normal assumptions.
A major exception would be if the list-elements could grow in complexity with input-size. For example, if the list-elements were themselves lists that grew with the input, then even a single comparison between two of the elements could be more than $O(1)$.
But if the list-elements are, say, machine-primitives that have constant-time behavior, then, yeah, typically $O(1)$.
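A small sketch of that exception (strings are used here as a stand-in for elements that grow with the input; the sizes are invented): with only three elements, each comparison still has to scan a long shared prefix, so the min() call costs O(m) rather than O(1).

m = 1_000_000   # length of the shared prefix; imagine it growing with the input

# Three elements, but comparing any two of them scans up to m characters.
items = ["a" * m + "b", "a" * m + "c", "a" * m + "a"]
print(min(items)[-1], max(items)[-1])   # prints "a c"; each call cost O(m), not O(1)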
If the size of the lists passed to min() and max() is always the same constant $k$, then you can find an upper bound (and a lower bound) on the number of operations required to compute min() and max() that depends only on the constant $k$ (plus some other constants), which results in a running time of $O(1)$.
Therefore, if the size of the lists is always the same, the asymptotic time complexity is $O(1)$. However, do keep in mind that the actual running time still depends on the size of the lists.
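For a concrete instance of that bound (256 is an arbitrary choice): if every list has $k = 256$ elements, min() performs at most $k - 1 = 255$ comparisons, so the running time is bounded by a fixed constant that does not depend on any growing parameter, i.e. it is $O(1)$.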
Yes, any bounded amount of work is O(1).
However, every algorithm applied to an input of bounded size takes O(1) time, and that is true independent of any other properties of the algorithm, so there is never any reason to mention this as an attribute of any particular algorithm.
In the analysis of algorithms, though, you will often see O(1) used to refer to any bounded amount of work. The recurrence relation for the time taken by binary search, for example, is T(n) = T(n/2) + O(1).
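Unrolling that recurrence shows how the bounded per-level work adds up (a standard derivation, not specific to any one implementation):
$$T(n) = T(n/2) + O(1) = T(n/4) + O(1) + O(1) = \cdots = T(1) + O(\log n) = O(\log n).$$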
Colloquially we refer to algorithms as $O(1)$ or $O(n \log n)$ or such. When you start talking about the Big-O complexity class of an operation over a constant-sized object, it becomes important to use the notation more precisely. We CS people get too lazy!
Big-O notation is applied to mathematical functions, not computer algorithms. It explores the bounds on an operation as the independent variables in that function head off towards infinity. We cannot apply it to an algorithm without using extra words. For example, we might define a function $f(n)$ to be "the maximum number of comparisons needed to sort a list of length $n$." This describes a mathematical function, so we can plot it and look at its behavior using Big-O complexity analysis.
So in this case, you have an algorithm: "determine the maximum value in a list of a given size." But you still need to define your function on that algorithm. I can't say "determine the maximum value in a list of some pre-specified size as the size of that list grows to infinity" any more than I can say "two plus two equals five for very large values of two."
I could pick some arbitrary, unimportant variable and say $f(n)$ is "the number of operations it takes to find the maximum of a fixed-size list, given that the price of tea in India is $n$." That would be a valid function we could apply Big-O notation to, and we would find that your algorithm's run time is $O(1)$ with respect to the price of tea in India. But we need an independent variable that can take on values approaching infinity.
To use Big-O meaningfully, you need to identify some function of a variable (or variables) which can approach infinity. Only then is a Big-O analysis meaningful.
My understanding of the notion of time complexity is that it describes the change in the amount of time that an operation takes as the size of a single input to the function changes. The quote that @JoshRumbut gives says pretty much the same thing.
If, as in the case here, one is not talking about how the execution time of an operation changes as the input size changes, then the notion of time complexity does not apply. The time complexity of min() or max() run only on lists of one particular size is not O(anything); the concept simply doesn't apply here.
What does make sense is to talk about the time complexity of min() and max() as the size of the input list changes. Since the execution time varies linearly with the size of the input list (whether a linked list or an array), the time complexity of min() or max() on a simple list is O(N).
Not really. Even with a constant size, such as 1,000,000 elements, min() and max() aren't automatically O(1).
Consider a nested list in which every element is itself an m-sized list of the form:
[0] * (m - 1) + [k]
where k is a number. Comparing two such elements takes O(m) time, so min() and max() over the outer list take O(m) as well, not O(1).
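A concrete sketch of that construction (the sizes here are arbitrary):

m = 100_000      # size of each inner list
size = 10        # the "constant" size of the outer list

# Each element differs from the others only in its last entry, so comparing
# any two elements scans all m positions: O(m) per comparison.
nested = [[0] * (m - 1) + [k] for k in range(size)]

print(min(nested)[-1], max(nested)[-1])   # prints "0 9"; each call cost O(size * m)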
The time complexity of the algorithm is still O(N).
Sure, in a sense the max and min of a constant list are constants, but no general implementation can know these values without computing them first.
The time complexity of the call to the min and max functions may be O(1).
A compiler can recognize that the result of the call is a compile-time constant if the argument is a constant, precompute these values, and place the result directly in the machine code. I doubt that the Python interpreter can do this optimization, but I do not know for sure whether it might perform some just-in-time compilation tricks or cache the results. In the end, it is out of your hands.
EDIT: I was assuming that "constant sized" implies that the OP does not change the values of the entries at any point; I might need to rephrase this. However, this is my point: if the list is constant in size but not constant in value, I am allowed to change the entry at index N-1 to a value larger than the previous maximum, and then the result of max() must change. How can a general algorithm work this out without touching all elements up to the last one?
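A tiny sketch of that point (values invented):

values = [3, 7, 2, 5]    # constant *size*, not constant *values*
print(max(values))       # 7

values[-1] = 100         # same size, different contents
print(max(values))       # 100 -- a general max() cannot know this without rescanning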
If the question is intended to ask about the complexity of an algorithm that takes M values as input and uses max and min internally, then it depends on whether the size N depends on M. That is not given in the question, so no conclusive answer can be given.
It depends on whether the array/list is sorted or not. The given answer is incorrect and mine was downvoted. Thanks, voting bots!
https://stackoverflow.com/questions/35386546/big-o-of-min-and-max-in-python
FROM WIKIPEDIA:
O(1) is applicable
An algorithm is said to be constant time O(1)... In a similar manner, finding the minimal value in an array sorted in ascending order
O(n) is applicable
O(n)... finding the minimal value in an unordered array is not a constant time operation as scanning over each element in the array is needed in order to determine the minimal value.
You need to check all of the values to find the minimal one if the list is not sorted. My first thought is that Python does not have hidden support for caching function return values. I'm not certain whether min(value) is considered deterministic in Python if you supply a reference instead of an actual list.
Below is an easy way to memoize a function and its return values in Python. If you apply it, the first call for a given argument still costs O(n), but repeated calls with the same argument return the cached result.
def memoize(func):
    # Wraps a single-argument function with a cache of previously computed results.
    # Note: the argument must be hashable (e.g. a tuple rather than a list).
    cache = dict()

    def memoized_func(args):
        if args in cache:
            return cache[args]   # reuse the stored result
        result = func(args)
        cache[args] = result
        return result

    return memoized_func
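A usage sketch under the answer's assumption (names are illustrative; the argument must be hashable, so a tuple stands in for the list):

cached_min = memoize(min)

data = tuple(range(1_000_000))   # tuples are hashable, plain lists are not
print(cached_min(data))          # first call: scans all elements, stores the result
print(cached_min(data))          # later calls: return the stored result without rescanning
                                 # (hashing the large tuple key still touches every element,
                                 #  so the saving is in the constant factor, not the O-class)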