14

If I understand it correctly, an algorithm that computes the value of a real function $f$ has computational complexity $O(g(n))$ if computing $f$ to precision $\delta$ requires on the order of $g(n)$ steps.

However, what if we have an algorithm that first "finds a more efficient algorithm to compute $f$", and then computes $f$?

In other words, what if we have an algorithm $A$ that does the following:

  1. Find an efficient algorithm $B$ for computing $f$.

  2. Use $B$ to compute $f$.

In that case, we can no longer speak of the computational time it takes to compute, say, $f(5)$, because it depends entirely on whether algorithm $A$ has already found algorithm $B$. In other words, the time required to compute $f(5)$ when $5$ is the first input is far greater than the time required to compute $f(5)$ after $f(3)$ has already been computed.
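To make the scenario concrete, here is a minimal sketch (all names hypothetical) of an algorithm $A$ that performs an expensive one-time search for an efficient algorithm $B$ and then reuses $B$ for every later evaluation, so the cost of $f(5)$ depends on whether the search has already run:

```python
# Hypothetical sketch: A first "finds" an efficient algorithm B
# (simulated here by an expensive one-time search), then reuses B.

class SelfOptimizingF:
    def __init__(self):
        self._fast_algorithm = None  # B, once found

    def _find_fast_algorithm(self):
        # Stand-in for an expensive search over candidate algorithms;
        # we simply pretend squaring is the efficient B for f.
        return lambda x: x * x

    def __call__(self, x):
        if self._fast_algorithm is None:
            # Costly only on the first call.
            self._fast_algorithm = self._find_fast_algorithm()
        return self._fast_algorithm(x)  # cheap on every later call

f = SelfOptimizingF()
print(f(3))  # triggers the search, then computes f(3) -> 9
print(f(5))  # reuses B, so this call is much cheaper -> 25
```

Amortized over many calls, the search cost vanishes, which is one standard way complexity analysis handles such a setup.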

My question is, is there a concept/theory about this kind of algorithm that first finds another algorithm before computing a function? Specifically I am wondering about analysis of the computational complexity of such algorithms.

user56834
  • 4,244
  • 5
  • 21
  • 35

2 Answers

18

There is a well-known algorithm, Levin's universal search algorithm, whose mode of operation is identical. Consider for example the problem of finding a satisfying assignment for a formula which is guaranteed to be satisfiable. Levin's universal search runs all potential algorithms in parallel, and if any algorithm outputs a satisfying assignment, stops and outputs this assignment. If the optimal algorithm for the problem runs in time $f(n)$, then Levin's algorithm runs in time $O(f(n))$ (with a possibly huge constant) if implemented correctly.
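The parallel-simulation idea can be illustrated with a toy dovetailing loop. This is not real Levin search (which enumerates *all* programs with an exponential time-sharing schedule); it interleaves a fixed list of candidate procedures, and the helper names (`universal_search`, `brute_force`, `useless`, `check`) are all made up for the example:

```python
from itertools import count

def universal_search(candidates, check):
    """Dovetail over candidate procedures: in round r, advance each of
    the first r candidates by one step; return the first output that
    the verifier `check` accepts."""
    gens = [c() for c in candidates]
    for r in count(1):
        for g in gens[:r]:
            try:
                out = next(g)
            except StopIteration:
                continue  # this candidate gave up; keep the others running
            if out is not None and check(out):
                return out

def brute_force():
    # Enumerate all assignments to 3 Boolean variables, one per step.
    for bits in range(8):
        yield [(bits >> i) & 1 for i in range(3)]

def useless():
    # A candidate that runs forever without producing anything.
    while True:
        yield None

# Verifier for the formula (x0 or x1) and (not x0 or x2).
def check(a):
    return (a[0] or a[1]) and ((not a[0]) or a[2])

print(universal_search([useless, brute_force], check))  # -> [0, 1, 0]
```

Because the answer is *verified* rather than trusted, a non-terminating or wrong candidate like `useless` cannot spoil the result; it only costs a constant-factor slowdown.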

While Levin's algorithm is impractical (due to the huge constants involved), it is very interesting theoretically. See the Scholarpedia article for more on universal search.

Yuval Filmus
  • 280,205
  • 27
  • 317
  • 514
10

Suppose we have a function f which takes an argument x of type A, and outputs another function which takes an argument y of type B and returns a result of type C. In your words, f takes an argument x and returns an "algorithm" which takes inputs of type B and outputs results of type C.

The function f has the type

A → (B → C)

Indeed, it takes x : A and returns a function of type B → C. But such an f is equivalent to a function g : A × B → C which takes both x and y at once and gives you the final result. Indeed, there is an isomorphism between the types

A → (B → C)

and

A × B → C

because we can define g in terms of f as

g(x, y) := f(x)(y)

and we can define f in terms of g as

f(x) := (y ↦ g(x,y))

The operation of passing from g to f is called currying, and functional programmers use it all the time. In computability theory, the idea of taking one input and outputting a function (algorithm) is embodied in the s-m-n theorem.
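The isomorphism above can be written out directly in code; a minimal Python sketch, with `curry` and `uncurry` as illustrative helper names:

```python
def g(x, y):
    # g : A x B -> C, taking both arguments at once.
    return x * 10 + y

def curry(g):
    # f(x) := (y -> g(x, y)), turning g into f : A -> (B -> C).
    return lambda x: (lambda y: g(x, y))

def uncurry(f):
    # g(x, y) := f(x)(y), the inverse direction.
    return lambda x, y: f(x)(y)

f = curry(g)
print(f(3)(4))           # -> 34
print(uncurry(f)(3, 4))  # -> 34, so the round trip agrees with g
```

The intermediate value `f(3)` is itself a function, i.e. the "algorithm" returned by the first stage, which can be stored and applied to many different second arguments.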

The answer to your question is "yes, people do this all the time". But there is also a moral: an algorithm which finds an algorithm is still just an algorithm.

Andrej Bauer
  • 31,657
  • 1
  • 75
  • 121