
I'm searching for an algorithm to accomplish a (hopefully) simple task.

If I have a set of vectors, e.g., $\left( \begin{bmatrix} 0\\ 2\end{bmatrix}, \begin{bmatrix} 1\\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1\\-1\end{bmatrix} \right)$, and an arbitrary coordinate $\begin{bmatrix} x\\ y \end{bmatrix}$, I want to find the smallest integral linear combination of these vectors that will sum to the point.

I can interpret the vectors as a set of linear equations:

$$\begin{aligned} b + 2c + d &= x \\ 2a + b - d &= y \end{aligned} $$

and look for the solution such that $|a| + |b| + |c| + |d|$ is minimized, or determine that there is no solution (for $x=2,y=1$ there isn't one, in this example).

My problem is that I often end up doing this by drawing out a grid and eyeballing the solution, which isn't very mathematical. My hope is to find an algorithm that will allow me to solve this problem with any set of vectors and any target coordinate, or determine that the solution does not exist.
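The grid-and-eyeball search can at least be mechanized. Here is a minimal brute-force sketch of that hand search: it tries every coefficient combination up to an assumed cutoff `K` and keeps the one with the smallest sum of absolute values (the cutoff and the function name are illustrative, not part of the question).

```python
from itertools import product

# The vectors from the question, as (x, y) pairs.
vectors = [(0, 2), (1, 1), (2, 0), (1, -1)]

def smallest_combination(x, y, K=5):
    """Search all integer coefficients with |coeff| <= K.

    Returns (cost, coefficients) minimizing sum of |coeff|,
    or None if no combination within the cutoff hits (x, y).
    """
    best = None
    for coeffs in product(range(-K, K + 1), repeat=len(vectors)):
        sx = sum(c * v[0] for c, v in zip(coeffs, vectors))
        sy = sum(c * v[1] for c, v in zip(coeffs, vectors))
        if (sx, sy) == (x, y):
            cost = sum(abs(c) for c in coeffs)
            if best is None or cost < best[0]:
                best = (cost, coeffs)
    return best

print(smallest_combination(1, 1))  # reachable with a single vector
print(smallest_combination(2, 1))  # None, matching the question's example
```

This is exponential in the number of vectors and can miss solutions beyond the cutoff, which is exactly why the answers below reach for integer linear programming.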

I don't have a lot of formal education in math, so searching for help on this is tricky for me. I don't have the vocabulary I need to correctly describe this to Google or Wikipedia. I thought it would be worthwhile to ask here and see what I need to learn to solve this problem.

jen-rose

4 Answers


Your problem is at least as hard as a problem called integer linear programming: if all of your coefficients had to be non-negative, you would be minimizing $a + b + c + d$ subject to your two equality constraints and to each variable being non-negative, which is exactly an integer linear program. You can look up integer linear programming if you're interested. The point is that integer linear programming is "NP-hard", which means no polynomial-time algorithm is known, and it is widely believed that any exact algorithm must take exponential time in the worst case. So there is no simple, fast way to solve your problem in the general case, where you can have an arbitrary number of vectors and the entries in the vectors can be arbitrarily large in magnitude.

user2566092

Even though your problem isn't directly an integer linear programming problem, there is a way to turn it into one, so you can use an off-the-shelf integer linear programming solver. Let $v_1,v_2,\ldots,v_n$ be your basis vectors, and let $v_{ij}$ denote the $j$th coordinate of $v_i$. Let $a_1,a_2,\ldots,a_n$ be the integer coefficients you want to find so that the linear combination of the basis vectors equals the target vector $(x,y)$. Introduce new integer variables $b_1, b_2, \ldots, b_n$ (at the optimum each $b_j$ will equal $|a_j|$). Then your integer linear programming problem is to minimize $b_1 + b_2 + \cdots + b_n$ subject to

$$ a_1 v_{11} + a_2 v_{21} + \cdots + a_n v_{n1} = x$$

$$ a_1 v_{12} + a_2 v_{22} + \cdots + a_n v_{n2} = y$$

and for all $j$ from $1$ to $n$,

$$a_j \leq b_j$$ $$a_j \geq -b_j$$ $$b_j \geq 0$$

Here the $a_j$ and the $b_j$ are your integer variables. There are integer linear programming solvers available online, some free and some commercial.
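This formulation can be handed to a solver directly. Below is a sketch using SciPy's `linprog` with its `integrality` option (available in SciPy >= 1.9, which uses the HiGHS MILP solver); the function name and the example target $(3, 1)$ are my own choices, not from the question.

```python
import numpy as np
from scipy.optimize import linprog

def min_abs_combination(V, target):
    """Minimize sum |a_j| over integer a with V @ a == target.

    Encodes |a_j| via extra integer variables b_j >= 0 with
    a_j - b_j <= 0 and -a_j - b_j <= 0, exactly as in the
    formulation above. Returns (a, cost), or None if no
    integral solution exists.
    """
    m, n = V.shape
    # Variables z = [a_1..a_n, b_1..b_n]; objective is sum of the b_j.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([V, np.zeros((m, n))])        # V a = target
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),          #  a_j - b_j <= 0
                      np.hstack([-I, -I])])        # -a_j - b_j <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n  # a free, b >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=target,
                  bounds=bounds, integrality=np.ones(2 * n))
    if not res.success:
        return None
    a = np.round(res.x[:n]).astype(int)
    return a, int(round(res.fun))

# Columns of V are the question's vectors.
V = np.array([[0, 1, 2, 1],
              [2, 1, 0, -1]])
print(min_abs_combination(V, np.array([3, 1])))  # feasible example target
print(min_abs_combination(V, np.array([2, 1])))  # None, as the question notes
```

Infeasibility (as with the question's $x=2, y=1$ example) shows up as the solver reporting failure, which the sketch maps to `None`.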

user2566092
  • This is all really interesting to study, I was really underestimating the complexity of the problem. Thanks for the help. – jen-rose Aug 10 '14 at 20:30

@user2566092 is correct that this is an NP-hard integer linear programming problem.

I came across a similar problem and used Python (scipy) to solve it. Here is an example using the vectors in the original question and a target coordinate of [1, 6]. Note that this version treats the target as a coordinatewise lower bound and restricts the coefficients to be non-negative, rather than requiring exact equality with signed coefficients.

import numpy as np
from scipy.optimize import linprog

# Columns of the (negated) matrix are the vectors; -V and -target
# encode V @ coeffs >= target coordinatewise.
A_ub = -np.array([[0, 2], [1, 1], [2, 0], [1, -1]]).T
b_ub = -np.array([1, 6])      # target [x, y]
c = np.ones(A_ub.shape[1])    # minimize the sum of the coefficients
# linprog's default bounds keep every coefficient >= 0;
# integrality=1 forces integer solutions (SciPy >= 1.9).
result = linprog(c, A_ub, b_ub, integrality=1)
print(result.x)  # [3. 1. 0. 0.]

ekare

This is an instance of the least-norm problem in the $1$-norm with integrality constraints. Rephrasing, given a fat matrix ${\bf A} \in \Bbb R^{m \times n}$ and a vector ${\bf b} \in \Bbb R^m$,

$$\begin{array}{ll} \underset{{\bf x} \in \Bbb Z^n}{\text{minimize}} & \| {\bf x} \|_1\\ \text{subject to} & {\bf A} {\bf x} = {\bf b} \end{array}$$

which can be easily rewritten as an integer linear program. In essence, one is looking for the point in the solution set of the linear system ${\bf A} {\bf x} = {\bf b}$ that is closest to the origin in the $1$-norm.
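To make the rewrite explicit: introducing an auxiliary integer vector ${\bf t}$ to bound the absolute values componentwise gives the standard linearization

$$\begin{array}{ll} \underset{{\bf x},\, {\bf t} \in \Bbb Z^n}{\text{minimize}} & {\bf 1}^\top {\bf t}\\ \text{subject to} & {\bf A} {\bf x} = {\bf b}\\ & -{\bf t} \le {\bf x} \le {\bf t} \end{array}$$

At the optimum $t_j = |x_j|$ for every $j$, so the objective equals $\| {\bf x} \|_1$; this is the same device as the $b_j$ variables in the earlier answer.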
