
I have a matrix $\mathbf{A} \in \mathbb{R}^{2000 \times 2000}$, stored in memory as a $2000 \times 2000$ array of float32 elements, and I also have $10$ matrices $\mathbf{E}^i \in \mathbb{R}^{2000 \times 2000}$ stored the same way.

I know that there exists a separation $\mathbf{A} = \mathbf{F} + \mathbf{E}$, where $\mathbf{F}$ represents the foreground signal and $\mathbf{E}$ represents the background signal. I further know that $$\mathbf{E} = \sum_{i=0}^{9} \beta_i \mathbf{E}^i$$ is a good approximation of the background signal, and I would like to recover the coefficients $\beta_i$.

Unfortunately, I don't know much about the foreground $\mathbf{F}$; certainly $\langle \mathbf{F}, \mathbf{E}^i \rangle \neq 0$. So $\beta_i = \langle \mathbf{E}^i, \mathbf{A} \rangle$ or a least-squares solution leads to very suboptimal results. I assume that $\mathbf{F}$ is zero or close to zero for at least 20% of the elements and that the texture of $\mathbf{E}$ is somehow inpainted into $\mathbf{A}$. To fit this texture I try to estimate $\beta_i$ by minimizing

$$ \left\| \mathbf{A} - \sum_{i=0}^{9} \beta_i \mathbf{E}^i \right\|_{TV}, $$

hoping that I will remove the dynamic features of the texture and obtain an uncorrupted foreground signal. I have tried to attack the problem using https://www.cvxpy.org/ and the ECOS solver. It utilizes only a single processor and is very slow. I can define the problem symbolically, but I don't know what is happening under the hood. Do you have any ideas for fast algorithms to attack this problem, e.g. fast implementations with Python wrappers?
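For concreteness, the (anisotropic) objective I am trying to minimize can be sketched in plain NumPy. This is only an illustration of the objective on small random stand-in data; `tv_objective` is a hypothetical helper, not part of any library:

```python
import numpy as np

def tv_objective(beta, A, E_stack):
    """Anisotropic TV of the residual A - sum_i beta_i * E^i.

    E_stack is assumed to have shape (10, n, n). This only evaluates
    the objective; it is not a solver.
    """
    R = A - np.tensordot(beta, E_stack, axes=1)  # residual image
    # Sum of absolute forward differences, vertically and horizontally.
    return (np.abs(np.diff(R, axis=0)).sum()
            + np.abs(np.diff(R, axis=1)).sum())

# Small stand-in data (the real A and E^i are 2000 x 2000).
rng = np.random.default_rng(0)
E_stack = rng.standard_normal((10, 8, 8))
beta_true = rng.standard_normal(10)
A = np.tensordot(beta_true, E_stack, axes=1)  # pure background, F = 0
```

With $\mathbf{F} = 0$, the objective vanishes exactly at the true coefficients, which is the behaviour the minimization relies on.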

VojtaK
  • I created a very efficient formulation for your problem. If you share some images, I can create a code for it. – Royi Feb 13 '24 at 17:14

1 Answer


I think a better formulation can greatly improve performance.

Let's define:

$$ \boldsymbol{E} = \begin{bmatrix} \operatorname{Vec} \left( \boldsymbol{E}_{1} \right) & \operatorname{Vec} \left( \boldsymbol{E}_{2} \right) & \ldots & \operatorname{Vec} \left( \boldsymbol{E}_{10} \right) \end{bmatrix} $$

Then by defining $\boldsymbol{\beta} = {\left[ {\beta}_{1}, {\beta}_{2}, \ldots, {\beta}_{10} \right]}^{T}$ we can rewrite $\operatorname{Vec} \left( \sum_{i=1}^{10} {\beta}_{i} \boldsymbol{E}_{i} \right) = \boldsymbol{E} \boldsymbol{\beta}$.
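As a sanity check, this identity can be verified numerically, assuming the usual column-major convention for $\operatorname{Vec}$ (small random matrices stand in for the real $\boldsymbol{E}_{i}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 16, 10
E_list = [rng.standard_normal((n, n)) for _ in range(k)]
beta = rng.standard_normal(k)

# Column-stack the vectorized E_i; order="F" is column-major Vec.
E = np.column_stack([Ei.ravel(order="F") for Ei in E_list])

lhs = sum(b * Ei for b, Ei in zip(beta, E_list)).ravel(order="F")
rhs = E @ beta
assert np.allclose(lhs, rhs)  # Vec(sum_i beta_i E_i) == E @ beta
```

Here `E` is an $n^2 \times 10$ matrix, so all later operations on $\boldsymbol{\beta}$ involve only 10 columns.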

Using the Anisotropic Total Variation we can construct the operator $\boldsymbol{D}$ such that:

$$ {\left\| \boldsymbol{A} \right\|}_{TV} = {\left\| \boldsymbol{D} \boldsymbol{a} \right\|}_{1} $$

Where $\boldsymbol{a} = \operatorname{Vec} \left( \boldsymbol{A} \right)$.
See Matrix Vector Multiplication Representation of Total Variation Function and How to Solve an Image Deblurring Problem by Variational Methods Using ADMM.
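A sketch of one way to build such a $\boldsymbol{D}$ with SciPy, using forward differences and no boundary wrap (the exact stencil in the linked posts may differ; this is one common anisotropic-TV construction):

```python
import numpy as np
import scipy.sparse as sp

def tv_operator(n):
    """Sparse D with ||D vec(A)||_1 equal to the anisotropic TV of the
    n x n image A, for column-major vec and forward differences."""
    # (n-1) x n one-dimensional forward-difference matrix.
    Dn = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                  shape=(n - 1, n), format="csr")
    I = sp.identity(n, format="csr")
    D_cols = sp.kron(I, Dn)   # differences down each column
    D_rows = sp.kron(Dn, I)   # differences across each row
    return sp.vstack([D_cols, D_rows], format="csr")

# Verify against a direct TV computation on a random image.
rng = np.random.default_rng(3)
n = 12
A = rng.standard_normal((n, n))
D = tv_operator(n)
a = A.ravel(order="F")
tv_direct = (np.abs(np.diff(A, axis=0)).sum()
             + np.abs(np.diff(A, axis=1)).sum())
```

Since $\boldsymbol{D}$ is sparse (two nonzeros per row), applying it to the $n^2 \times 10$ matrix $\boldsymbol{E}$ is cheap.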

Then the problem becomes:

$$ \arg \min_{\boldsymbol{\beta}} {\left\| \boldsymbol{D} \left( \boldsymbol{a} - \boldsymbol{E} \boldsymbol{\beta} \right) \right\|}_{1} = \arg \min_{\boldsymbol{\beta}} {\left\| \boldsymbol{D} \boldsymbol{a} - \boldsymbol{D} \boldsymbol{E} \boldsymbol{\beta} \right\|}_{1} $$

Now, you can calculate $\hat{\boldsymbol{a}} = \boldsymbol{D} \boldsymbol{a}$ and $\hat{\boldsymbol{E}} = \boldsymbol{D} \boldsymbol{E}$ offline and then solve:

$$ \arg \min_{\boldsymbol{\beta}} {\left\| \hat{\boldsymbol{E}} \boldsymbol{\beta} - \hat{\boldsymbol{a}} \right\|}_{1} $$

Which is a Linear Fit with the ${L}_{1}$ Norm. It is a very well-known problem with efficient solvers (Linear Programming, IRLS, etc.).
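A minimal IRLS sketch for this ${L}_{1}$ fit (each iteration solves only a $10 \times 10$ weighted least-squares system, so the cost per iteration is dominated by one product with $\hat{\boldsymbol{E}}$; the damping `eps` and the iteration count are ad-hoc choices):

```python
import numpy as np

def l1_fit_irls(E_hat, a_hat, n_iter=50, eps=1e-8):
    """Approximately minimize ||E_hat @ beta - a_hat||_1 by iteratively
    reweighted least squares. E_hat is tall and thin (m x 10 here)."""
    beta = np.linalg.lstsq(E_hat, a_hat, rcond=None)[0]  # L2 warm start
    for _ in range(n_iter):
        # Reweight: small residuals get large weights (damped by eps).
        w = 1.0 / np.maximum(np.abs(E_hat @ beta - a_hat), eps)
        EW = E_hat * w[:, None]
        # Weighted normal equations: (E^T W E) beta = E^T W a.
        beta = np.linalg.solve(E_hat.T @ EW, EW.T @ a_hat)
    return beta

# Demo: recover beta despite sparse, large outliers in a_hat.
rng = np.random.default_rng(2)
m, k = 300, 5
E_hat = rng.standard_normal((m, k))
beta_true = rng.standard_normal(k)
a_hat = E_hat @ beta_true
idx = rng.choice(m, size=30, replace=False)
a_hat[idx] += 10.0 * rng.standard_normal(30)  # corrupt 10% of entries
beta_est = l1_fit_irls(E_hat, a_hat)
```

The outlier robustness of the ${L}_{1}$ fit is exactly what makes it tolerate the foreground $\boldsymbol{F}$ leaking into the residual.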

Royi