6

I'm trying to come up with a data structure that can access, insert, and delete any element in constant time. I know that's pretty difficult, but I'm doing it mostly to provoke thought and build understanding about computer science. However, I'm starting to question whether it is even possible.

My theory is: For access to be in constant time, the data would need to be stored in a static location (like an array). And for insertion/deletion to be in constant time, the data would need to be stored dynamically using pointers or some sort of lookup table (like a linked list). Therefore, no data structure can have all three operations run in constant time.

Is this reasoning correct?

EDIT:

access(index): Returns the element at 'index'

insert(element, index): Inserts element at 'index' and shifts everything after 'index' right one index

delete(index): Removes the element at 'index' and corrects the indexes so that there are no gaps (i.e. shifts everything after 'index' left one index)
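To make the intended behaviour concrete, here is a minimal Python sketch using a plain list as a stand-in (this baseline gives O(1) access but O(n) insert/delete because of the shifting; the goal is a structure where all three are O(1)):

```python
# Baseline: a plain dynamic array (Python list) standing in for the desired structure.
# access is O(1), but insert/delete are O(n) because every later element must shift.
data = ['a', 'b', 'd']

def access(index):
    return data[index]            # O(1): direct indexing

def insert(element, index):
    data.insert(index, element)   # O(n): shifts everything after 'index' right

def delete(index):
    del data[index]               # O(n): shifts everything after 'index' left

insert('c', 2)
print(access(2), data)   # c ['a', 'b', 'c', 'd']
delete(2)
print(data)              # ['a', 'b', 'd']
```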

Badr B

3 Answers

2

How do we get lower bounds?

As D.W. noted, lower bounds are hard. But that doesn't mean that no progress can be made. To get a lower bound, you generally need (not strictly, but you won't get very far without one) a (possibly restrictive) model of all algorithms or data structures that can solve your problem.

For example, the $\Omega(n\log n)$ lower bound for sorting holds only for the restricted class of 'comparison-based' sorting algorithms. (This is why counting sort can 'beat' this bound in some cases: it isn't comparison based.)
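As a concrete illustration, here is a minimal counting-sort sketch (assuming small non-negative integer keys); note that it never compares two input elements with each other, so the comparison-based lower bound simply does not apply to it:

```python
def counting_sort(values, max_value):
    """Sort non-negative integers <= max_value in O(n + max_value) time."""
    counts = [0] * (max_value + 1)
    for v in values:          # tally each key; no element-to-element comparisons
        counts[v] += 1
    result = []
    for key, count in enumerate(counts):
        result.extend([key] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5], 5))  # [1, 1, 3, 4, 5]
```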

Lower bounds for data structures

For the case of data structures, the cell-probe model seems like a good place to start. This model is similar to the more common RAM model, but is useful for lower bounds because it 'only counts' the number of accesses to stored data (i.e. the number of 'probes' to a cell).

For example, in the paper by Yao that introduces this model, lower bounds are calculated for data structures supporting INSERT, DELETE and MEMBER (membership test) queries. I think a similar technique could work for your data structure as well.

Do note that the paper by Yao is a bit outdated. I mostly mentioned it because the problem it considered is relatively simple and similar to yours. More modern techniques are covered in the dissertation of Kasper Green Larsen.

Discrete lizard
0

I don't find the reasoning convincing. Perhaps there is some other way to do it that you haven't imagined yet. Proving lower bounds is hard.

That said, I doubt that it's possible to support all three operations in $O(1)$ time, though I don't have a proof.

D.W.
0

It has been proven that what you expect is not possible; the lower bound is something like $\Omega\left(\frac{\log n}{\log \log n}\right)$ (see here, as TillmanZ already pointed out).

However, hashing gives very fast lookup (roughly $\mathcal{O}(1)$ in general). So-called cuckoo hashing guarantees worst-case constant lookup and expected amortized constant insertion time. Deletion is normally not discussed, but it can be done very simply by looking up the key and clearing its slot, which is also $\mathcal{O}(1)$.
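A minimal, illustrative sketch of the idea (a toy class I'm calling CuckooHashSet, not the scheme from any particular paper): lookup and deletion probe at most two slots, so they are worst-case constant time, while insertion may evict entries and occasionally rebuild the table, giving expected amortized constant time.

```python
import random

class CuckooHashSet:
    def __init__(self, capacity=8):
        self._init_tables(capacity)

    def _init_tables(self, capacity):
        self.capacity = capacity
        self.table = [[None] * capacity, [None] * capacity]
        # Random seeds make the two hash functions behave independently.
        self.seeds = (random.randrange(1 << 30), random.randrange(1 << 30))

    def _slot(self, which, key):
        return hash((self.seeds[which], key)) % self.capacity

    def contains(self, key):
        # At most two probes: worst-case O(1) lookup.
        return (self.table[0][self._slot(0, key)] == key or
                self.table[1][self._slot(1, key)] == key)

    def delete(self, key):
        # Also at most two probes: worst-case O(1) deletion.
        for which in (0, 1):
            i = self._slot(which, key)
            if self.table[which][i] == key:
                self.table[which][i] = None
                return True
        return False

    def insert(self, key):
        if self.contains(key):
            return
        which = 0
        for _ in range(2 * self.capacity):   # bounded eviction loop
            i = self._slot(which, key)
            if self.table[which][i] is None:
                self.table[which][i] = key
                return
            # Evict the occupant and try to place it in the other table.
            self.table[which][i], key = key, self.table[which][i]
            which = 1 - which
        # Too many evictions: rebuild with a larger table and fresh hash functions.
        old = [k for t in self.table for k in t if k is not None]
        self._init_tables(2 * self.capacity)
        for k in old + [key]:
            self.insert(k)

s = CuckooHashSet()
for x in range(100):
    s.insert(x)
print(s.contains(42), s.contains(1000))  # True False
s.delete(42)
print(s.contains(42))                    # False
```

Note that this gives constant-time operations keyed by value, not by position, so it does not by itself give the index-shifting semantics asked for in the question.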