I am trying to bound the gain (or operator norm) of a convolutional operator.
For 1-D, applying Young's Inequality is straightforward:
$ ||f * g||_r \le ||f||_p ||g||_q $,
with $ 1 \le p,q,r \le \infty$, $ \frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1 $, and $ f \in L^p(\mathbb{R}^d),\ g \in L^q(\mathbb{R}^d) $.
So if I take my convolutional "filter" (or kernel) to be $f$ and the input to that filter, $\mathbf{x}$, to be $g$, then choosing $p = 1$ and $q = r = 2$ in the inequality above gives:
$ || f * \mathbf{x}||_2 \le ||f||_1 ||\mathbf{x}||_2 $
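For what it's worth, here is a quick numerical sanity check of that 1-D bound (a minimal NumPy sketch; the filter and signal lengths and the use of "full" discrete convolution are arbitrary choices of mine):

```python
# Check the 1-D bound ||f * x||_2 <= ||f||_1 ||x||_2 for a discrete "full" convolution.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(9)     # 1-D filter
x = rng.standard_normal(256)   # 1-D input signal

y = np.convolve(f, x, mode="full")

lhs = np.linalg.norm(y, 2)                          # ||f * x||_2
rhs = np.linalg.norm(f, 1) * np.linalg.norm(x, 2)   # ||f||_1 ||x||_2
print(lhs <= rhs, lhs, rhs)
```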
However, I am unsure how to generalize this to the 2-D or 3-D case for a filter bank in which the convolutional filters are not separable across dimensions (so one cannot simply apply the 1-D bound along each dimension in turn).
E.g., for 2-D signals in the continuous setting, the convolution would be written as
$ (f * g)(x,y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(u,v)\, g(x-u, y-v)\, du\, dv $.
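Numerically, the analogous 2-D bound $ ||f * g||_2 \le ||f||_1 ||g||_2 $ does seem to hold for a discrete "full" 2-D convolution, but I would like to justify it properly. Here is the small check I ran (a sketch; the sizes and the use of scipy.signal.convolve2d are arbitrary choices on my part):

```python
# Check whether ||f * g||_2 <= ||f||_1 ||g||_2 appears to hold for a
# non-separable 2-D filter and a discrete "full" 2-D convolution.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
f = rng.standard_normal((3, 3))    # non-separable 2-D filter
g = rng.standard_normal((32, 32))  # 2-D input

y = convolve2d(f, g, mode="full")

lhs = np.linalg.norm(y.ravel(), 2)                     # entrywise 2-norm of the output
rhs = np.abs(f).sum() * np.linalg.norm(g.ravel(), 2)   # entrywise ||f||_1 times ||g||_2
print(lhs <= rhs, lhs, rhs)
```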
Is it valid to think of "reshaping" a 2-D or N-D convolution into 1-D (something like reshaping a 3x3 matrix into a 9x1 vector in finite dimensions) and then applying Young's Inequality for finite-dimensional vectors?
As a concrete example in finite dimensions: what if there is a filter bank of $K$ convolutional filters, each a 2-D filter of size $w \times h$, operating on 2-D input tensors of size $W \times H$?
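To make that setup concrete, here is a minimal sketch of the quantities I would like to bound (all sizes and variable names are placeholders of my own). The "naive" bound just stacks the per-filter bounds $ ||w_k * \mathbf{x}||_2 \le ||w_k||_1 ||\mathbf{x}||_2 $; whether this, or something tighter for the whole bank, is justified is exactly my question:

```python
# Sketch of the filter-bank setup: K filters of size w x h applied to a W x H input.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
K, w, h = 4, 3, 3
W, H = 32, 32

filters = rng.standard_normal((K, w, h))   # filter bank
x = rng.standard_normal((W, H))            # single-channel 2-D input

# Apply each filter and stack the K output maps.
outputs = np.stack([convolve2d(fk, x, mode="full") for fk in filters])

lhs = np.linalg.norm(outputs.ravel(), 2)   # 2-norm of the stacked output
per_filter = np.array([np.abs(fk).sum() for fk in filters]) * np.linalg.norm(x.ravel(), 2)
rhs = np.linalg.norm(per_filter, 2)        # combine per-filter bounds in quadrature
print(lhs <= rhs, lhs, rhs)
```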