
It seems that, broadly speaking, two approaches can be used to produce an algorithm for solving various optimization problems:

  1. Start from a feasible solution and expand the search until the constraints are tight and the solution is maximal (or minimal).
  2. Start from a point that violates some constraints and search for a maximal (or minimal) feasible solution.

For the Max-Flow problem, Ford-Fulkerson follows approach (1), while Push-Relabel follows approach (2). An interesting point is that Push-Relabel is the more efficient algorithm of the two. My question is this:

What other examples are there where (2)-based approaches outperform their (1)-based counterparts?

A follow-up question is:

Do there exist meta-theorems regarding approaches based on condition (2)?

Nicholas Mancuso

2 Answers


Another example is interior-point algorithms for convex optimization, which (if I remember correctly) start at an arbitrary point and simultaneously try to get into the feasible region and optimize the objective function. They are provably, and to some extent practically, faster than the simplex algorithm (and related algorithms such as the criss-cross algorithm), which performs a local search along the boundary of the feasible polytope.
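To make the barrier idea concrete, here is a minimal log-barrier sketch; the toy LP, step schedule, and parameter choices are my own illustration, not a production interior-point method (and unlike true infeasible-start variants, this simple version begins at a strictly feasible point). The barrier term blows up at the boundary, so the iterate stays interior while the growing weight $t$ pushes it toward the optimum.

```python
import numpy as np

# Toy LP: maximize x1 + x2 subject to 0 <= x1 <= 1, 0 <= x2 <= 1,
# written as: minimize c @ x subject to A @ x <= b. The optimum is (1, 1).
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])

def barrier(x, t):
    """t * (linear objective) plus a log-barrier that blows up at the boundary."""
    s = b - A @ x                    # constraint slacks
    if np.any(s <= 0):
        return np.inf                # outside the strictly feasible region
    return t * (c @ x) - np.sum(np.log(s))

x = np.array([0.5, 0.5])             # any strictly interior starting point works
t = 1.0
for _ in range(25):                  # outer loop: tighten the barrier
    for _ in range(200):             # inner loop: crude backtracking gradient descent
        s = b - A @ x
        grad = t * c + A.T @ (1.0 / s)
        step = 1.0
        # backtrack until the step both descends and stays strictly interior
        while step > 1e-16 and barrier(x - step * grad, t) >= barrier(x, t):
            step *= 0.5
        x = x - step * grad
    t *= 2.0

print(np.round(x, 3))                # close to the optimal vertex (1, 1)
```

Note how the iterate approaches the optimal vertex from the interior, in contrast to simplex, which hops between vertices of the polytope.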

Yuval Filmus

As encouraged in the comments, I'm giving a more general answer on constraint violation and constraint propagation in search, in the context of CSP solvers. In a sense, the theory of search in CSP/SAT subsumes other approaches. My answer deals with complete search methods, although hybrids that combine local search with inference exist.

A constraint satisfaction problem (CSP) is a triple $\langle X,D,C \rangle$, where $X$ is a set of variables, $D$ a set of domains (one for each variable), and $C$ a set of constraints that specify the allowable combinations of values. Solving CSPs is obviously hard (SAT is a special case). The obvious way of solving such a problem is search. Search can be replaced by, or at least combined with, a specific kind of inference called constraint propagation. For example, consider modeling a Sudoku puzzle as a CSP. After a variable has been assigned some value (a number placed into a cell), we can call an algorithm that propagates the consequences of that assignment. If we notice this leads to a non-solution, we undo the assignment and try another one. Many different propagation algorithms exist, such as Mackworth's AC-3. Together with the type of inference used, the variable- and value-selection heuristics play a major role in the efficiency of the solver.
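Here is a minimal AC-3 sketch, to show how propagation alone can prune domains; the variable names and the toy chain $X < Y < Z$ are my own illustration, not from any particular solver.

```python
from collections import deque

# A tiny binary CSP: variables X < Y < Z, each with domain {1, 2, 3}.
# constraints maps each directed arc (xi, xj) to a predicate on (vi, vj).
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
constraints = {
    ("X", "Y"): lambda a, b: a < b,   # X < Y
    ("Y", "X"): lambda a, b: a > b,
    ("Y", "Z"): lambda a, b: a < b,   # Y < Z
    ("Z", "Y"): lambda a, b: a > b,
}

def revise(xi, xj):
    """Drop values of xi that have no supporting value in xj; report pruning."""
    pruned = False
    for a in set(domains[xi]):
        if not any(constraints[(xi, xj)](a, b) for b in domains[xj]):
            domains[xi].discard(a)
            pruned = True
    return pruned

def ac3():
    queue = deque(constraints)            # start with every directed arc
    while queue:
        xi, xj = queue.popleft()
        if revise(xi, xj):
            if not domains[xi]:
                return False              # domain wipe-out: inconsistent
            # re-examine arcs pointing at xi (except the one we just used)
            queue.extend((xk, xm) for (xk, xm) in constraints
                         if xm == xi and xk != xj)
    return True

ac3()
print(domains)   # X={1}, Y={2}, Z={3}: propagation alone solves this chain
```

On this toy instance no search is needed at all; in harder instances, propagation is interleaved with branching, as described above.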

A key early observation was that effective heuristics are "fail-first" heuristics: whatever we choose (which variable to assign next, which value to assign to it) should be the choice most likely to fail soon, thereby pruning the search tree.

A popular heuristic for choosing a variable is the minimum-remaining-values (MRV) heuristic: pick the variable with the fewest legal values. This is a fail-first heuristic that was proposed as early as 1965 by Golomb and Baumert. Much effort has since gone into understanding and enhancing this already powerful heuristic. It was noted that MRV exploits only information about the current state of the problem; the heuristic can be made adaptive by exploiting information about previous states as well. The so-called dom/wdeg heuristic, proposed in 2004, is still widely considered to be among the most effective general approaches known. The idea is as follows:

  1. Associate each constraint with a weight, initially set to 1.
  2. Every time a constraint is responsible for a dead end, increment its weight.
  3. When choosing a variable, first pick the one with the fewest legal values.
  4. If there are several candidates, break ties by dividing the domain size by the weighted degree: the sum of the weights of the constraints involving the variable in question and at least one other unassigned variable.

What good does this do? The weights on the hard constraints increase as the search progresses; in other words, the search is guided towards the hard parts of the problem. One is obviously more likely to fail when faced with a hard subproblem, so here we again see the fail-first principle at work.
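The selection rule can be sketched in a few lines; the toy constraint network, weights, and names below are mine, and I use the common formulation that directly minimizes the ratio of domain size to weighted degree (a real solver would increment the weights during search rather than fix them up front).

```python
# Hypothetical dom/wdeg sketch: constraint -> (scope, weight), where a weight
# counts how often that constraint has caused a dead end so far.
constraints = {
    "c1": ({"A", "B"}, 4),   # a "hard" constraint: 4 failures so far
    "c2": ({"B", "C"}, 1),
    "c3": ({"A", "C"}, 1),
}
domains = {"A": {1, 2}, "B": {1, 2, 3}, "C": {1, 2, 3}}   # current legal values
unassigned = {"A", "B", "C"}

def wdeg(var):
    """Sum of weights of constraints on var involving another unassigned variable."""
    return sum(w for scope, w in constraints.values()
               if var in scope and any(v != var and v in unassigned for v in scope))

# Pick the variable minimizing |dom| / wdeg: a small domain and a history of
# failures both push a variable to the front of the queue.
best = min(unassigned, key=lambda v: len(domains[v]) / max(1, wdeg(v)))
print(best)   # "A": ratio 2/5, versus 3/5 for B and 3/2 for C
```

Here variable A wins both because its domain is small and because it participates in the heavily weighted constraint c1.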

By the way, the dom/wdeg heuristic has been observed to have a nice synergy with specific random-restart strategies: in 2008, Grimes [1] observed that the so-called geometric restart strategy was the best suited, making dom/wdeg even more powerful.
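A geometric restart strategy simply grows the restart cutoff (e.g., the number of failures allowed before restarting) by a constant factor. A tiny illustration, with a base cutoff and growth factor I picked arbitrarily:

```python
# Geometric restart cutoffs: cutoff_i = base * factor**i (illustrative values).
base, factor = 100, 1.5
cutoffs = [int(base * factor ** i) for i in range(8)]
print(cutoffs)   # [100, 150, 225, 337, 506, 759, 1139, 1708]
```

Each restart keeps the learned constraint weights, so successive runs are guided ever more sharply towards the hard parts of the problem.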


[1] Grimes, D. (2008). A study of adaptive restarting strategies for solving constraint satisfaction problems. In Proc. 19th Irish Conference on Artificial Intelligence and Cognitive Science (AICS), Vol. 8, pp. 33-42.

Juho