While looking into low-rank matrix completion and the relaxations of the general problem that are used to derive exact solutions, I have found that many papers state that the original formulation is NP-hard, but I cannot find a proof of this fact. The problem is as follows.
Given a matrix $M \in \mathbb{R}^{n \times m}$ of rank $r$ (again, assumed to be low), of which we only observe a subset of the entries indexed by $\Omega \subseteq [n] \times [m]$, we want to find an $X$ that solves
$$ \begin{array}{ll} \underset{X}{\text{minimize}} & \operatorname{rank}(X)\\ \text{subject to} & X_{ij} = M_{ij}, \quad \forall (i,j) \in \Omega. \end{array} $$
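For concreteness, here is a minimal sketch of the kind of relaxation I mean: the rank objective above is replaced by the nuclear norm, its standard convex surrogate. The use of CVXPY, the dimensions, and the random sampling pattern are my own illustrative choices, not anything taken from the cited papers.

```python
# Minimal sketch (illustrative setup, not from the cited papers):
# replace the NP-hard rank objective with the nuclear norm and
# solve the resulting convex relaxation with CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Low-rank ground truth M of rank r.
n, m, r = 20, 15, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

# Random observation pattern Omega (each entry observed with prob. 0.5).
mask = (rng.random((n, m)) < 0.5).astype(float)

# Nuclear-norm relaxation of the rank-minimization problem above.
X = cp.Variable((n, m))
constraints = [cp.multiply(mask, X) == cp.multiply(mask, M)]
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - M) / np.linalg.norm(M))
```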
In particular, the following papers mention this result in the abstract or introduction without further elaboration (see R2009 and HMRW2014). Is there a well-known proof of this fact, or an easy reduction that explains the result?