3

I am implementing the algorithm in this paper. However, I have run into a problem with my solver for the linear program. I need to solve a linear program that minimises the $1$-norm of a vector subject to the constraint that the vector, multiplied by a known matrix, equals a set of measurements, i.e.,

$$ \min \lVert x \rVert_{1} \text{ s.t. } A x = y. $$

The difficulty I'm facing is that $A$ is not necessarily a square matrix and, thus, solvers like $\ell_1$-magic, which use a primal-dual algorithm, won't work. Can anyone please suggest an algorithm/solver that will handle this type of convex optimisation problem?
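For what it's worth, this basis-pursuit problem is an ordinary linear program after a standard lift: introduce $t \geq |x|$ componentwise and minimise $\sum_i t_i$. A minimal sketch in Python with SciPy (the thread's code is MATLAB; the function name `l1_min` is mine, not from any library):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. A x = y via the standard LP lift:
    minimise sum(t) over (x, t) subject to -t <= x <= t and A x = y."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # [A 0] [x; t] = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * n))  # x and t both free
    return res.x[:n]
```

With enough measurements relative to the sparsity, the LP solution coincides with the sparsest one; this is the same problem $\ell_1$-magic's `l1eq_pd` solves, just through a generic LP backend rather than a specialised primal-dual method.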

Tom Kealy
  • 282

5 Answers

3

Compressive sensing is all about non-square matrices: the whole point is that we are in the "undersampling" regime, where we have fewer measurements than the ambient dimension.

I thought l1-magic works just fine. Did you try it? If you want a generic convex optimization toolbox, you might also consider CVX: http://cvxr.com/cvx/.

Also related to compressed sensing: depending on what you want, there's a jungle of solvers here (definitely far more than you need): https://sites.google.com/site/igorcarron2/cs#reconstruction


If you use CVX:

cvx_begin
  variable z(N);
  minimize( norm(z, 1) );
  subject to
    A*z == b;
cvx_end

This minimizes the $\ell^1$ norm subject to $Az=b$, and $z$ will hold the minimizer.

Evan
  • 3,891
  • l1-magic works great when A is square and positive definite, but the problem I'm solving has matrices which are not square, and so l1-magic fails. – Tom Kealy Aug 21 '13 at 15:56
  • That's strange, they seem to be using it here http://users.ece.gatech.edu/~justin/l1magic/examples.html – Evan Aug 21 '13 at 15:56
  • Oh, you might have been using the wrong function? It seems they reduce certain problems to the symmetric square case. – Evan Aug 21 '13 at 16:00
  • I noticed they orthogonalise the matrix there. When I do that, all I get is a straight line. – Tom Kealy Aug 21 '13 at 16:21
  • Could you post what code you used when applying l1eq_pd? Also I think orthogonalizing is not a requirement. Just throw it into l1eq_pd. – Evan Aug 21 '13 at 16:43
1

l1-magic must be failing for a reason other than $A$ being non-square. l1-magic is specifically geared toward solving compressive sensing problems in which $A$ is rectangular (non-square).

If it doesn't work out for you, here are potential reasons:

  1. You are not trying to find the sparsest solution to this system of equations, i.e., the solution given by l1-magic is not the one you are seeking.
  2. $A$ doesn't satisfy a necessary and sufficient condition for the recovery of sparse unknowns.
  3. The number of rows of $A$ is not large enough to find the sparsest solution to your problem.
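Point 2 can be sanity-checked cheaply via the mutual coherence $\mu(A)$: basis pursuit provably recovers any $k$-sparse vector whenever $k < (1 + 1/\mu)/2$ (a sufficient condition, not a necessary one). A small Python sketch; the helper name `mutual_coherence` is mine, not part of l1-magic:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct, normalised columns of A."""
    An = A / np.linalg.norm(A, axis=0)   # normalise each column
    G = np.abs(An.T @ An)                # absolute Gram matrix of columns
    np.fill_diagonal(G, 0.0)            # ignore self inner products
    return G.max()

# Columns: e1, e2, and (1, 1), which normalises to (1, 1)/sqrt(2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
mu = mutual_coherence(A)
print(mu)  # 0.7071... = 1/sqrt(2), so only k = 1 is guaranteed recoverable
```

A coherence near 1 means two columns are nearly parallel, and no $\ell_1$ solver can tell their coefficients apart.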

Hope this helps.

  • Hi Igor, thanks for the help (and your blog!). I fixed the issue, which had nothing to do with squareness of my matrix. – Tom Kealy Aug 26 '13 at 14:00
0

Please refer to the following page for the correction of this line of code in l1eq_pd.m:

http://compsens.eecs.umich.edu/sensing_tutorial.php.

Once the code is updated, it works for me.

W. H.
  • 1
0

What was the issue in the end?

I'm having a similar problem. I'm trying to use l1-magic to reconstruct an image from a single-pixel camera I've developed. The test functions are random binary patterns projected onto the object scene, so each pattern is represented as a row vector of 0s and 1s, and these form the rows of the test-function matrix A. (A is later multiplied by a transform matrix to express the image in a basis where the coefficients are sparse.)

However, this matrix is not positive definite, so the basis pursuit function throws an error. This is confusing, as most of the literature promotes using random test functions. Is there something wrong here? Do you have to ensure your "random" test functions eventually form a positive definite matrix?

Any help appreciated.
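(An illustrative aside, in Python/NumPy rather than the thread's MATLAB: a rectangular A can't itself be positive definite; as noted in the comments below, the relevant object is the Gram matrix $AA^T$ that the primal-dual solver factorises, and generic random binary patterns do make it positive definite.)

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 random binary patterns, each a row of 2500 "pixels".
A = rng.integers(0, 2, size=(100, 2500)).astype(float)

# A is rectangular, so positive definiteness can only refer to the
# 100 x 100 Gram matrix A A^T used inside the solver's Newton steps.
G = A @ A.T
min_eig = np.linalg.eigvalsh(G).min()
print(min_eig > 0)  # True for generic random patterns (rows independent)
```

If the rows are linearly dependent (for instance, if patterns repeat or are deterministic complements of each other), $AA^T$ becomes singular and the check above fails.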

Andrew
  • 11
  • Is the matrix complex? – Tom Kealy Jul 17 '14 at 20:04
  • No, the matrix is real. Its rows are vectors representing each binary pattern, so the matrix only contains 1s and 0s. Also, since the test matrix isn't square (which it won't be, since we're undersampling), we can't expect it to be positive definite anyway. So I'm a little confused as to why/what matrix it requires to be positive definite. – Andrew Jul 17 '14 at 20:42
  • My problem was using a Fourier matrix when I didn't need to. I think that the sensing matrix needs to be positive definite as part of the reconstruction algorithm (it can't be singular or there's no hope of recovery). I think Bernoulli(p) matrices satisfy the RIP (you should check this), so maybe the transform you apply isn't unitary (is it a wavelet transform?). – Tom Kealy Jul 17 '14 at 20:53
  • I'm fairly certain that definiteness is required for reconstruction - but you should check the RIP for your matrices. – Tom Kealy Jul 17 '14 at 20:54
  • Can N x M matricies even be positive definite? (as they're not square?). I'm using a Discrete Cosine Transform, but the function still throws out the same error if I leave that out. – Andrew Jul 17 '14 at 21:06
  • What's the condition number of the matrix? What does [~, p] = chol(A) return? – Tom Kealy Jul 17 '14 at 21:09
  • It's >1, which means it's not positive definite right? – Andrew Jul 17 '14 at 21:24
  • AA^T needs to be positive definite. Have you got that M > 4*sparsity of X? You may not have enough rows – Tom Kealy Jul 17 '14 at 21:30
  • I see, I'll check that. That's also a good point, I'll up the number of rows to the number of pixels in the image. I'll try all that when I'm back in work tomorrow and will report back, thanks for your help! – Andrew Jul 17 '14 at 21:41
  • I've added some code to try and figure out what you're doing. – Tom Kealy Jul 18 '14 at 13:16
  • Hi Tom, thanks for that. I did indeed manage to figure out what was wrong. As I mentioned, the test matrix consisted of stacks of rows which represented a binary bitmap projected onto an object scene. The dataset I was using was originally captured for use in an iterative ghost-imaging algorithm. To reduce noise, each binary projection was consecutively followed by its inverse. So in reality, the test matrix wasn't completely random, as each pair of rows was essentially the inverse of each other. Der me! – Andrew Jul 18 '14 at 15:33
  • I've since used another previous dataset in which the binary projections are all random, and l1eq_pd works fine. I've successfully managed to reconstruct an image using a computationally produced dataset, but have not yet got it to work for my real data set. The resulting DCT coefficients vector has about 500 nonzero elements compared to the 2500 pixels in the image. I'm only using 1000 measurements, so I guess this isn't enough... – Andrew Jul 18 '14 at 15:37
  • The rule of thumb is #rows > 4*sparsity of data. Works like a charm. – Tom Kealy Jul 18 '14 at 16:04
  • Hi Tom, I successfully managed to reconstruct an image using my system: http://oi59.tinypic.com/m99i6t.jpg The image in the above link was reconstructed with 1000 measurements (there are 2500 pixels in the image), and I can still get fairly good reconstruction with under 500. Any ideas on how to speed up the l1-magic solver? The reconstruction takes about 20 s on my laptop. I've tried converting the function to a MEX file, but if anything that takes slightly longer... – Andrew Aug 15 '14 at 19:33
0

I've tried to recreate the code you were describing and got this below:

X = zeros(1,512);
positions = randi(512,[1,30]);      % up to 30 spike locations (randi may repeat)
X(positions) = 1;
dctX = dct(X);                      % note: dctX is computed but unused below
B = binornd(1,0.5,150,length(X));   % 150 x 512 random Bernoulli(0.5) sensing matrix
y = B*X';                           % measurements
x0 = B'*y;                          % starting point: minimum-energy solution
xhat = l1eq_pd(x0, B, [], y, 1e-3);

This generates a vector with 30 random spikes, then applies a DCT and a random Bernoulli(0.5) matrix before solving an $\ell_1$ program. This code works without a hitch for me.

Did you find out what was wrong?

Tom Kealy
  • 282