
I am trying to use the CVX toolbox to do "low rank approximation" work. The code is as follows:

r = 2; % the rank;
N = 32; % the dimension
M = 32;
a = randn(N,r);
b = randn(M,r);
X = a*b'; % low rank Matrix;
A = rand(20,N);
Y = A*X;
% low rank approximation using nuclear norm
cvx_begin
variable Xe(N,M)
minimize( norm_nuc(Xe) )
subject to
A*Xe == Y;
cvx_end

Then MATLAB tells me that

number of iterations   = 8
primal objective value = 5.75866738e+01
dual objective value   = 5.75866738e+01
gap := trace(XZ)       = 3.73e-08
relative gap           = 3.21e-10
actual relative gap    = 3.49e-10
rel. primal infeas (scaled problem)   = 6.90e-11
rel. dual   "      "      "           = 1.50e-12
rel. primal infeas (unscaled problem) = 0.00e+00
rel. dual   "      "      "           = 0.00e+00
norm(X), norm(y), norm(Z) = 8.4e+01, 1.6e+00, 4.2e+00
norm(A), norm(b), norm(C) = 6.0e+01, 1.6e+02, 5.1e+00
Total CPU time (secs)  = 1.75
CPU time per iteration = 0.22
termination code       = 0

DIMACS: 4.9e-10  0.0e+00  3.8e-12  0.0e+00  3.5e-10  3.2e-10
Status: Solved

Optimal value (cvx_optval): +57.5867

Obviously, CVX works well and the job has been done. However, the estimated result "Xe" does not equal the original matrix "X". Why?

Some papers have proven that "Xe" should be equal to "X", such as "Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization", Benjamin Recht, Maryam Fazel, Pablo A. Parrilo, 2008.

cvxfan
  • Hello again :-) Note that papers like Recht et al. prove $X_e=X$ only under certain circumstances. It's quite likely that your particular model does not satisfy their circumstances. But I am hoping others here have more experience than I do, in case I am missing something. – Michael Grant Nov 24 '14 at 04:16
  • Thank you very much. I have read the paper carefully and the "certain circumstances" are not very restrictive in this paper. Actually, if the matrix A satisfies some conditions, such as those in Theorem 3.2, Xe should be equal to X. I think the matrix "A" in the above code satisfies this condition. Also, I find that the "matrix completion" problem can be solved well with the above code, which also suggests that the matrix A is right – cvxfan Nov 24 '14 at 04:55
  • Theorem 3.2 does not guarantee that the solution to the convex optimization problem gives you $X_e=X$. It only says that if you can find a rank-$r$ solution, it is unique. But even so, I remain skeptical. Your linear operator is not in the general form used by that article. It should be of the form A*X(:)==b, where A is of appropriate size. It is not a truly randomly generated linear operator in the sense discussed. – Michael Grant Nov 24 '14 at 13:13
  • OK, I will use A*X(:)==b as the constraint in this problem and try matrices A of different sizes. Later I will give the simulation results. Thanks very much. – cvxfan Nov 25 '14 at 01:06

1 Answer


I think the main problem comes from the measurements you generated:

A = rand(20,N);
Y = A*X; 

First of all, the number of measurements needed for exact recovery is on the order of O(rN), which is at least 2*32 here, so your number of measurements is too small to get a correct result.
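
As a rough back-of-the-envelope check (my addition, not part of the original answer): a rank-r n-by-n matrix has r*(2*n - r) degrees of freedom, which gives a ballpark lower bound on how many generic linear measurements are needed:

r = 2; n = 32;
dof = r*(2*n - r) % = 124: degrees of freedom of a rank-2 32x32 matrix,
                  % so roughly this many generic measurements at a minimum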

Also, the linear operator you use is not the general form people use in the low-rank matrix recovery setting; try y = A*X(:). Note that X(:) is the operation that turns the matrix X into a long vector by stacking its columns. So you should make A a matrix whose dimensions match this vectorized form, as in the full example further below.
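
To make the vectorized operator concrete (a small illustration of mine, not part of the original answer): each row of A plays the role of a sensing matrix Ai reshaped into a row vector, so one scalar measurement is the matrix inner product trace(Ai'*X):

n  = 4;                     % tiny example
X  = randn(n,n);
Ai = randn(n,n);            % one sensing matrix
meas_vec = Ai(:)' * X(:);   % one row of A applied to vec(X)
meas_mat = trace(Ai' * X);  % matrix inner product <Ai, X>
abs(meas_vec - meas_mat)    % essentially zero (round-off only)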

r = 2; % the rank;
n = 32; % the dimension

a = randn(n,r);
b = randn(n,r);
X = a*b'; % low rank Matrix;

m = fix(n^2-1);    % number of measurements
A = randn(m, n^2); % random Gaussian measurement matrix acting on X(:)
y = A*X(:);        % linear measurements of the vectorized X
% low rank approximation using nuclear norm

cvx_begin
variable Xe(n,n)   % n-by-n, matching the dimensions defined above
minimize( norm_nuc(Xe) )
subject to
A*Xe(:) == y
cvx_end

err = norm(X - Xe) % recovery error ('err' avoids shadowing MATLAB's built-in error function)
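
As a quick sanity check (my addition, not part of the original answer), one can also look at the relative error and the singular values of Xe; with enough random measurements, only about r of them should be significantly nonzero:

rel_err = norm(X - Xe, 'fro') / norm(X, 'fro') % should be close to zero
sv = svd(Xe);
sv(1:4)'   % roughly r = 2 significant singular values, the rest near zero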
ElviraL