7

The ordinary Wiener lemma states that if $f(x):=\sum_{n \in \mathbb{Z}} a_n \exp(inx)$ with $\sum_n |a_n|<\infty$, and if $f(x)\neq 0$ for every $x$, then $g:=1/f$ can also be written as $$g(x)= \sum_n b_n \exp(inx)$$ with $\sum_n |b_n|<\infty$. The proof of this uses the fact that an element $u$ of a commutative Banach algebra is invertible iff $h(u)\neq 0$ for every complex homomorphism $h$.
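To get a feeling for the statement, here is a quick numerical sanity check (not a proof), using a concrete example I made up, $f(x)=3+e^{ix}+e^{-2ix}$, which never vanishes since $|e^{ix}+e^{-2ix}|\le 2<3$:

```python
# Numerical illustration of the scalar Wiener lemma (made-up example, not a proof)
import numpy as np

N = 1024
x = 2 * np.pi * np.arange(N) / N
f = 3 + np.exp(1j * x) + np.exp(-2j * x)       # nowhere zero on [0, 2*pi)

# Fourier coefficients of g = 1/f, approximated by the discrete Fourier transform;
# b[n] ~ coefficient of e^{inx} (index taken mod N)
b = np.fft.fft(1.0 / f) / N
print(np.sum(np.abs(b)))    # finite; the coefficients of 1/f are absolutely summable
print(np.abs(b[:8]))        # and they decay rapidly
```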

Now, I have seen a similar result used in a paper. More specifically: substitute square matrices for the coefficients $a_n$, i.e. consider functions of the form $$f(x):= \sum_n A_n \exp(inx).$$ Assume now that each $f(x)$ is an invertible matrix. Does it then follow (and if so, why) that $f^{-1}(x)$ can be expressed as $$f^{-1}(x)= \sum_n B_n \exp(inx)$$ for some matrices satisfying $\sum_n \|B_n\|<\infty$? I don't know how to proceed here, because in this case the Banach algebra is not commutative. Can I still adapt the proof of the original Wiener lemma somehow?
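Here is the matrix analogue of the same numerical experiment (again only a sanity check, with made-up $2\times 2$ coefficients chosen small enough that $f(x)$ is invertible for every $x$):

```python
# Numerical check of the matrix-valued statement (made-up 2x2 example, not a proof)
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N
I2 = np.eye(2)
A1  = np.array([[0.5, 0.2], [0.0, 0.3]])   # made-up coefficient of e^{ix}
Am1 = np.array([[0.1, 0.0], [0.4, 0.2]])   # made-up coefficient of e^{-ix}

# f(x) = 3*I + A1*e^{ix} + Am1*e^{-ix}; invertible since ||A1|| + ||Am1|| < 3
f = 3 * I2[None, :, :] \
    + A1[None, :, :] * np.exp(1j * x)[:, None, None] \
    + Am1[None, :, :] * np.exp(-1j * x)[:, None, None]
finv = np.linalg.inv(f)

# entrywise FFT gives the matrix Fourier coefficients B_n of f^{-1}
B = np.fft.fft(finv, axis=0) / N
norms = np.linalg.norm(B, axis=(1, 2))      # Frobenius norm of each coefficient
print(norms.sum())                          # appears finite: the B_n decay fast
```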

P.Jo
  • 839
  • I think that this question is suitable for MathOverflow. It might be better to wait some more days before posting it there. Do not forget to refer to the present question. – Jochen Jul 21 '23 at 08:01
  • 3
    A reference: Wiener's Lemma holds for coefficients from an arbitrary unital Banach algebra, see Cass, Trautner: An extension of Wiener's lemma to Banach algebra valued functions, Arch. Math. 40 (1983), 260-265. – Gerd Jul 22 '23 at 09:59

1 Answer

1

We know that the adjugate matrix satisfies the relation $$\forall A\in M_{k}(\mathbb k):\; A \operatorname{adj}(A)=\det(A)\,\mathrm{Id}.$$ The adjugate matrix is defined using the determinant function, so we first need to understand how the determinant acts on $f(x)$.
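As an aside, this identity is easy to test numerically; here is a small sketch (the `adjugate` helper below is ad hoc, since numpy has no built-in adjugate):

```python
# Check of the identity A adj(A) = det(A) I, with adj(A) built from cofactors
import numpy as np

def adjugate(A):
    k = A.shape[0]
    C = np.empty_like(A)
    for a in range(k):
        for b in range(k):
            # cofactor: signed determinant of the minor with row a, column b deleted
            minor = np.delete(np.delete(A, a, axis=0), b, axis=1)
            C[a, b] = (-1) ** (a + b) * np.linalg.det(minor)
    return C.T   # adjugate = transpose of the cofactor matrix

A = np.array([[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 2.0]])   # made-up example
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))    # True
```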

Lemma: Let $f(x)=\sum_n A_n e^{inx}$ with $A_n\in M_k(\mathbb k)$ and $\sum_n\|A_n\|<\infty$. Then there exist $\alpha_n\in \mathbb k$ with $\sum_n|\alpha_n|<\infty$ such that $\det(f(x))=\sum_n \alpha_n e^{inx}$.

Proof: By the Cauchy product we know that, given $g_j(x)=\sum_n b^{(j)}_n e^{inx}$ with absolutely summable coefficients, $j=1,\ldots,N$, there exist absolutely summable $b_n$ such that $\prod_{j=1}^N g_j(x)=\sum_n b_n e^{inx}$. Now, by the definition of the determinant: $$\det(f(x))=\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)f(x)_{1,\sigma(1)}f(x)_{2,\sigma(2)}\cdots f(x)_{k,\sigma(k)}=$$ $$=\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\Big[\sum_n (A_n)_{1,\sigma(1)}e^{inx}\Big]\cdots \Big[\sum_n (A_n)_{k,\sigma(k)}e^{inx}\Big]=$$ $$=\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\sum_n \alpha_n^{\sigma} e^{inx}=\sum_n\Big(\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\alpha_n^{\sigma}\Big)e^{inx}=\sum_n\alpha_ne^{inx}. \;\;\square$$ Now, by the Wiener lemma, since $\det(f(x))\neq 0$ for all $x$ (each $f(x)$ is invertible), there exist $\beta_n\in \mathbb k$ with $\sum_n|\beta_n|<\infty$ such that $\det(f(x))^{-1}=\sum_n \beta_n e^{inx}$.
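To make the lemma concrete, here is a short numerical sketch for $k=2$ (with made-up coefficients $A_{-1},A_0,A_1$, not from the question) that computes the $\alpha_n$ by convolving the entrywise coefficient sequences, exactly as in the Cauchy-product argument, and compares with $\det(f(x))$ sampled on a grid:

```python
# det(f(x)) as a Fourier series obtained from Cauchy products (k = 2, made-up data)
import numpy as np

# coefficients A_{-1}, A_0, A_1 of f(x) = sum_n A_n e^{inx}
A = {-1: np.array([[0.1, 0.0], [0.4, 0.2]]),
      0: np.array([[3.0, 0.0], [0.0, 3.0]]),
      1: np.array([[0.5, 0.2], [0.0, 0.3]])}

def entry_seq(i, j):
    # coefficient sequence of the scalar function f(x)_{ij}, indexed n = -1, 0, 1
    return np.array([A[n][i, j] for n in (-1, 0, 1)])

# det(f) = f_11 f_22 - f_12 f_21; a product of Fourier series = convolution of coefficients
alpha = np.convolve(entry_seq(0, 0), entry_seq(1, 1)) - \
        np.convolve(entry_seq(0, 1), entry_seq(1, 0))   # indices n = -2, ..., 2

# cross-check against det(f(x)) evaluated on a grid
x = 2 * np.pi * np.arange(256) / 256
fx = sum(A[n][None] * np.exp(1j * n * x)[:, None, None] for n in A)
lhs = np.linalg.det(fx)
rhs = sum(alpha[m] * np.exp(1j * (m - 2) * x) for m in range(5))
print(np.allclose(lhs, rhs))    # True
```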

So now we only have to deal with the adjugate matrix: if we show that the cofactor matrix of $f(x)$ is of the form $\sum_n B_n e^{inx}$, we are done, because: $$f(x)^{-1}=\det(f(x))^{-1}\operatorname{adj}(f(x))=\left[\sum_n \beta_n e^{inx}\right]\left[\sum_n B_n e^{inx}\right]^t=$$ $$=\left[\sum_m \beta_m e^{imx}\right]\left[\sum_n B_n^t e^{inx}\right]=*$$ This is a product of a scalar series with a matrix series, and applying the Cauchy product to every matrix entry we get: $$*=\sum_n C_n e^{inx},\qquad C_n=\sum_k \beta_k B_{n-k}^t. $$
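For completeness, this convolution is straightforward to implement for truncated (finitely supported) sequences; a minimal sketch, where the helper name and the zero-based indexing are my own choices:

```python
# Sketch of the convolution C_n = sum_k beta_k * B_{n-k}^t for truncated sequences
import numpy as np

def scalar_matrix_convolution(beta, B):
    """beta: shape (M,) scalar coefficients; B: shape (L, k, k) matrix coefficients.
    Returns C of shape (M + L - 1, k, k) with C[i + j] accumulating beta[i] * B[j].T."""
    M = len(beta)
    L, k, _ = B.shape
    C = np.zeros((M + L - 1, k, k), dtype=np.result_type(beta, B))
    for i in range(M):
        for j in range(L):
            C[i + j] += beta[i] * B[j].T
    return C

# e.g. two scalar terms and three matrix coefficients give four coefficients C_n
C = scalar_matrix_convolution(np.array([1.0, -0.5]), np.random.rand(3, 2, 2))
print(C.shape)   # (4, 2, 2)
```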

Cofactor matrix study: Let $C(x)$ be the cofactor matrix of $f(x)$; by definition $C(x)=\big((-1)^{a+b}M_{ab}(x)\big)_{a,b}$, where $M_{ab}(x)$ is the determinant of the $(k-1)\times(k-1)$ matrix obtained by deleting row $a$ and column $b$ of $f(x)$ (which is still of the form $\sum_n A_n'e^{inx}$). By the lemma there exist $\alpha_n^{(a,b)}\in\mathbb k$ such that $M_{ab}(x)=\sum_n \alpha_n^{(a,b)}e^{inx}$, hence: $$C(x)=\sum_n B_n e^{inx},\qquad (B_n)_{ab}=(-1)^{a+b}\alpha_n^{(a,b)}. $$ By the previous argument we can therefore conclude.
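For example, in the case $k=2$ the cofactor matrix can be written down directly, which gives a concrete instance of this construction: $$C(x)=\begin{pmatrix} f(x)_{22} & -f(x)_{21}\\ -f(x)_{12} & f(x)_{11}\end{pmatrix}=\sum_n \begin{pmatrix} (A_n)_{22} & -(A_n)_{21}\\ -(A_n)_{12} & (A_n)_{11}\end{pmatrix} e^{inx},$$ so here each $B_n$ is obtained from $A_n$ by the same rearrangement of entries; for $k>2$ the minors are genuine products of series and the Cauchy-product lemma is really needed.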

Bongo
  • 1,320