Short version
Is there a duality gap between a QCQP problem and the SDP obtained from it through Lagrangian relaxation?
A paper I'm studying relies on there being no gap, but I cannot reproduce the authors' results.
Longer version
I've been trying to reproduce the results from a paper titled "Supervised Rank Aggregation" [pdf], which formulates rank aggregation as a non-convex optimization problem, transforms it into a non-convex QCQP, and finally relaxes it into a convex SDP through Lagrangian relaxation.
Since I could not reproduce the authors' results, I dug deeper into the proof, and I am not 100% sure about one step of it.
The idea is to assign a weight to each input ranking so that the aggregated ranking is as close as possible to a target ranking, minimizing the error.
For example, given a single input ranking [2, 1] and the target ranking [2, 1], the algorithm should return the weight list [1] with error 0. This is far from what I get: the error is close to 0 (3.17e-5, to be exact) but the weight is [-0.448].
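To make the toy case concrete, here is the sanity check I expect to pass. This uses a placeholder least-squares error in place of the paper's actual objective (8.2.2), purely to illustrate the expected behavior:

import numpy as np

# Placeholder check, not the paper's objective: with a single input
# ranking equal to the target, weight alpha = [1] should give error 0.
r_input = np.array([2.0, 1.0])    # the single input ranking
r_target = np.array([2.0, 1.0])   # the target ranking
alpha = np.array([1.0])
error = np.linalg.norm(alpha[0] * r_input - r_target) ** 2
print(error)   # -> 0.0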
I implemented it in Python, using picos and cvxopt to solve the SDP problem. The source code is in this gist.
Usage is simple:
>>> mc = MarkovChain(columns=[[2,1]], target=[2,1])
>>> alpha, x, t = mc.solve()
I know the QCQP formulation (8.2.2 in the paper) is correct. The relaxation seems foolproof too: lecture notes available on the Internet [1] explain the maths behind it, and the SDP formulation (8.2.5) seems correct as well.
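For reference, here is the generic construction from [1], in my own notation rather than the paper's exact matrices. The non-convex QCQP

$$\begin{aligned} \text{minimize}\quad & x^T P_0 x + q_0^T x + r_0 \\ \text{subject to}\quad & x^T P_i x + q_i^T x + r_i \le 0, \quad i = 1, \dots, m \end{aligned}$$

has the Lagrangian dual

$$\begin{aligned} \text{maximize}\quad & \gamma \\ \text{subject to}\quad & \begin{bmatrix} P_0 + \sum_i \lambda_i P_i & \tfrac12 \big(q_0 + \sum_i \lambda_i q_i\big) \\ \tfrac12 \big(q_0 + \sum_i \lambda_i q_i\big)^T & r_0 + \sum_i \lambda_i r_i - \gamma \end{bmatrix} \succeq 0, \quad \lambda \ge 0, \end{aligned}$$

which is convex (an SDP) regardless of whether the $P_i$ are positive semidefinite. As far as I can tell, this matches the shape of (8.2.5).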
The authors then claim that the QCQP problem is strictly feasible (there is a solution in the interior of the feasible domain), that Slater's condition for the SDP therefore holds, and so there is no duality gap and the solution of the dual is the solution of the primal.
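One way I try to test this numerically, independently of the claim: if the relaxation is exact on an instance, the lifted matrix X returned by the solver should be (numerically) rank one, and x can be read off its top eigenvector. Note the sign ambiguity here: x and -x yield the same X, so the sign has to be fixed from the problem itself (e.g. a homogenization coordinate known to equal 1). X_sdp below is a placeholder for whatever matrix variable my gist gets back from picos:

import numpy as np

def extract_candidate(X, tol=1e-6):
    # Eigendecompose the SDP solution X ~ x x^T (eigenvalues ascending).
    w, V = np.linalg.eigh(X)
    # Numerical rank: if > 1, the relaxation was not tight on this instance.
    rank = int(np.sum(w > tol * max(w[-1], 1.0)))
    # Candidate x, determined only up to sign.
    x = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
    return x, rank

# Hypothetical usage with the matrix variable from my gist:
# x, rank = extract_candidate(np.array(X_sdp.value))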
Here is my question: is that last claim true? I am not sure, and it seems like the most plausible explanation for why I cannot reproduce the results.
Thank you very much.
[1]: google "Notes on relaxation and randomized methods for nonconvex QCQP", §2.2