The proof of omitted variable bias is pretty simple:
Assume the true model is $$Y = X\beta + Z\delta + U$$
and that we naively estimate $$Y = X\beta + W$$
The OLS estimate from the misspecified model is then $$\hat\beta = [X^TX]^{-1}X^TY = [X^TX]^{-1}X^T[X\beta + Z\delta + U] = \beta + [X^TX]^{-1}X^T[Z\delta + U]$$
Taking the expectation conditional on $X$ (assuming $U$ is mean zero and independent of $X$): $$\mathbb{E}[\hat\beta\vert X] = \beta + [X^TX]^{-1}\mathbb{E}[X^TZ\vert X]\delta$$
The standard proof then concludes that our estimate $\hat\beta$ will be biased whenever $X$ and $Z$ are correlated.
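Spelling that step out in the notation above (the restatement is mine, not part of the standard proof): since $X$ is conditioned on, $\mathbb{E}[X^TZ\vert X] = X^T\mathbb{E}[Z\vert X]$, so the conditional bias is $$\mathbb{E}[\hat\beta\vert X] - \beta = [X^TX]^{-1}X^T\,\mathbb{E}[Z\vert X]\,\delta$$ i.e. $\delta$ scaled by the OLS coefficient from regressing $\mathbb{E}[Z\vert X]$ on $X$. For a generic $\delta$, the estimate is therefore conditionally unbiased exactly when $X^T\mathbb{E}[Z\vert X] = 0$.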
But what if the means of $X$ and $Z$ are both nonzero? For example, assume $Z$ is independent of $X$ and that $\mathbb{E}[X]\neq 0 \neq \mathbb{E}[Z]$.
Then we have $$\mathbb{E}[\hat\beta\vert X] = \beta + [X^TX]^{-1}\mathbb{E}[X^TZ\vert X]\delta = \beta + [X^TX]^{-1}X^T\mathbb{E}[Z]\delta \neq \beta $$
Doesn't that mean, then, that our estimate $\hat\beta$ can be biased even when the omitted variable is independent of the other regressors, so long as $\mathbb{E}[Z]\neq 0$?
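To make the setup concrete, here is a minimal simulation sketch of the scenario described above. The distributions and parameter values (normal regressors with means 2 and 3, $\beta = 1$, $\delta = 0.5$) are my own illustrative choices, and neither the data-generating process nor the fitted regression includes an intercept, matching the models as written:

```python
import numpy as np

# Monte Carlo check of the setup above: Z is independent of X, both have
# nonzero means, the true model has no intercept, and we regress Y on X
# alone (also without an intercept). Parameter values are illustrative.
rng = np.random.default_rng(0)

beta, delta = 1.0, 0.5      # true coefficients
n, reps = 1_000, 2_000      # sample size, number of replications

estimates = []
for _ in range(reps):
    x = rng.normal(loc=2.0, scale=1.0, size=n)   # E[X] = 2
    z = rng.normal(loc=3.0, scale=1.0, size=n)   # E[Z] = 3, drawn independently of X
    u = rng.normal(size=n)                       # mean-zero noise, independent of X
    y = beta * x + delta * z + u                 # true model, no intercept

    # OLS of Y on X only, through the origin: (x'x)^{-1} x'y
    estimates.append(x @ y / (x @ x))

print("true beta:                ", beta)
print("mean of estimates:        ", round(float(np.mean(estimates)), 3))
# Large-sample approximation of the bias term: E[X] E[Z] delta / E[X^2]
print("approximate expected bias:", 2.0 * 3.0 * delta / (2.0**2 + 1.0**2))
```

With these choices the algebra above suggests the estimates should concentrate near $\beta$ plus the bias term rather than near $\beta$ itself, even though $Z$ is drawn independently of $X$.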