For those who struggle as much as I did to understand the derivation of the covariance matrix, here's some supplementary good ol' fashioned mathematical rigour:
Start by letting $y_t = \frac{x_t - \sqrt{1-\beta_t}\,x_{t-1}}{\sqrt{\beta_t}}$, so that the $y_t$ are exactly the unit-variance noise variables of the forward process. Solving this for $x_t$ and expanding recursively, we get
\begin{align}
x_t &= \sqrt{\beta_t} y_t + \sqrt{1-\beta_{t}} x_{t-1} \\
&= \sqrt{\beta_t} y_t + \sqrt{1-\beta_{t}} \left( \sqrt{\beta_{t-1}} y_{t-1} + \sqrt{1-\beta_{t-1}} \left(... + \sqrt{1-\beta_1}x_0\right)...\right)\\
&= \sqrt{\beta_t} y_t + \sqrt{1-\beta_{t}} \sqrt{\beta_{t-1}} y_{t-1} + ... + \sqrt{1-\beta_t}\cdot...\cdot\sqrt{1-\beta_1}x_0
\end{align}
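For concreteness, with $t=2$ this reads
\begin{align}
x_2 &= \sqrt{\beta_2}\, y_2 + \sqrt{1-\beta_2}\left(\sqrt{\beta_1}\, y_1 + \sqrt{1-\beta_1}\, x_0\right) \\
&= \sqrt{\beta_2}\, y_2 + \sqrt{1-\beta_2}\sqrt{\beta_1}\, y_1 + \sqrt{1-\beta_2}\sqrt{1-\beta_1}\, x_0
\end{align}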
Looking at the $k$th term of the sum, we have $y_k$ multiplied by a coefficient consisting of $\sqrt{\beta_k}$ and a product of all $\sqrt{1-\beta_l}$ with $l = k+1, \dots, t$.
Note that the term with $k=t$ involves a subtlety: the product over $l = k+1, \dots, t$ is then empty ($\beta_{t+1}$ is not even defined), and the coefficient is simply $\sqrt{\beta_t}$. One way of formalizing this is to define
$$
\alpha_k =
\left\{
\begin{array}{ll}
\beta_k, & k \leq t\\
\frac{\beta_t}{\beta_t - 1}, & k = t+1
\end{array}
\right.
$$
and to read the degenerate product $\prod_{l=t+1}^t$ as the product over its two endpoints, $(1-\alpha_{t+1})(1-\alpha_t)$. With this choice of $\alpha_{t+1}$ these two extra factors multiply to one, so for the $t$th term we still get the coefficient
$$
\sqrt{\beta_t} \sqrt{1-\alpha_t}\sqrt{1-\alpha_{t+1}} = \sqrt{\beta_t} \frac{\sqrt{1-\beta_t}}{\sqrt{1-\beta_t}} = \sqrt{\beta_t}
$$
This results in the series
\begin{align}
x_t &= \prod_{k=1}^t\sqrt{1-\beta_k}x_0 + \sum_{k=1}^t \sqrt{\alpha_k} y_k \prod_{l=k+1}^t \sqrt{1-\alpha_l}
\end{align}
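If you want to convince yourself numerically that this closed form matches the step-by-step recursion $x_t = \sqrt{\beta_t}\,y_t + \sqrt{1-\beta_t}\,x_{t-1}$, here is a minimal NumPy sketch; the step count, schedule, dimension, and seed are arbitrary choices made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

t = 10                                    # number of steps (arbitrary)
betas = rng.uniform(1e-4, 0.2, size=t)    # some schedule beta_1, ..., beta_t
x0 = rng.normal(size=3)                   # a starting point x_0
ys = rng.normal(size=(t, 3))              # unit-variance noise y_1, ..., y_t

# Recursion: x_k = sqrt(beta_k) y_k + sqrt(1 - beta_k) x_{k-1}
x = x0
for k in range(t):
    x = np.sqrt(betas[k]) * ys[k] + np.sqrt(1 - betas[k]) * x

# Closed form: prod_k sqrt(1-beta_k) x_0 + sum_k sqrt(beta_k) y_k prod_{l>k} sqrt(1-beta_l)
closed = np.prod(np.sqrt(1 - betas)) * x0 + sum(
    np.sqrt(betas[k]) * ys[k] * np.prod(np.sqrt(1 - betas[k + 1:]))
    for k in range(t)
)

print(np.allclose(x, closed))  # True
```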
The conditional distribution of $x_t$ given $x_0$ then has a covariance matrix with contributions only from
$$
\sum_{k=1}^t \sqrt{\alpha_k} y_k \prod_{l=k+1}^t \sqrt{1-\alpha_l}
$$
As explained in Christophe's answer, the $y_k$s are iid with unit variance. This gives
\begin{align}
Cov(x_t|x_0) &= I \sum_{k=1}^t \left( {\alpha_k} \prod_{l=k+1}^t (1-\alpha_l)\right)
\end{align}
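As a quick sanity check, with a constant schedule $\beta_k \equiv \beta$ this sum is geometric,
$$
\sum_{k=1}^t \beta\,(1-\beta)^{t-k} = \beta\sum_{j=0}^{t-1}(1-\beta)^j = 1-(1-\beta)^t,
$$
which is exactly what the telescoping manipulation below gives in the general case.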
Adding and subtracting $\prod_{l=k+1}^t (1-\alpha_l)$ inside the parentheses gives
\begin{align}
Cov(x_t|x_0) &= I \sum_{k=1}^t \left( \prod_{l=k+1}^t (1-\alpha_l) -\prod_{l=k+1}^t (1-\alpha_l) + {\alpha_k} \prod_{l=k+1}^t (1-\alpha_l) \right) \\
&= I \sum_{k=1}^t \left( \prod_{l=k+1}^t (1-\alpha_l) - (1-\alpha_k)\prod_{l=k+1}^t (1-\alpha_l) \right) \\
&= I \sum_{k=1}^t \left( \prod_{l=k+1}^t (1-\alpha_l) - \prod_{l=k}^t (1-\alpha_l) \right)
\end{align}
Notice that the $\prod_{l=k+1}^t (1-\alpha_l)$ contributed by term $k$ cancels against the $- \prod_{l=k+1}^t (1-\alpha_l)$ contributed by term $k+1$, so the sum telescopes. We are therefore only left with $- \prod_{l=1}^t (1-\alpha_l)$ from $k=1$ and $\prod_{l=t+1}^t (1-\alpha_l)$ from $k=t$. (Such a rearrangement would need more care for an infinite series, but here $t$ is finite; we only let $t$ grow afterwards.) This results in
\begin{align}
Cov(x_t|x_0) &= I \left(\prod_{l=t+1}^t (1-\alpha_l) - \prod_{l=1}^t (1-\alpha_l)\right) \\
&= I \left((1-\alpha_{t+1})(1-\alpha_t) - \prod_{l=1}^t (1-\alpha_l) \right) \\
&= I \left(\left(1-\frac{\beta_t}{\beta_t - 1}\right)(1-\beta_t) - \prod_{l=1}^t (1-\beta_l) \right)\\
&= I \left(1 - \prod_{l=1}^t (1-\beta_l) \right)
\end{align}
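As a purely numerical sanity check (not part of the derivation; the schedule, dimension, and sample size below are arbitrary choices), one can verify both the telescoped identity and the covariance itself by simulating the forward process:

```python
import numpy as np

rng = np.random.default_rng(0)

t, d, n = 50, 2, 200_000                  # steps, dimension, samples (arbitrary)
betas = np.linspace(1e-3, 5e-2, t)        # some schedule beta_1, ..., beta_t
x0 = np.array([3.0, -1.0])                # conditioning point x_0

# The identity itself: sum_k beta_k prod_{l>k} (1-beta_l) == 1 - prod_l (1-beta_l)
lhs = sum(betas[k] * np.prod(1 - betas[k + 1:]) for k in range(t))
print(np.isclose(lhs, 1 - np.prod(1 - betas)))  # True

# Monte Carlo: simulate x_k = sqrt(1-beta_k) x_{k-1} + sqrt(beta_k) y_k from fixed x_0
x = np.tile(x0, (n, 1))
for k in range(t):
    x = np.sqrt(1 - betas[k]) * x + np.sqrt(betas[k]) * rng.normal(size=(n, d))

print(np.cov(x, rowvar=False))                  # approx. (1 - prod(1-beta_l)) * I
print((1 - np.prod(1 - betas)) * np.eye(d))
```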
From this, we see that the only requirement on the $\beta$-schedule, if we want the covariance to approach $I$, is that
$$
\lim_{t\to\infty} \prod_{l=1}^t (1-\beta_l) = 0
$$
which is a fairly mild requirement: for $\beta_l \in (0,1)$, the product tends to $0$ exactly when $\sum_{l=1}^\infty \beta_l$ diverges.
So if this is satisfied, the conditional distribution of $x_t$ given $x_0$ tends to $\mathcal{N}(0,I)$ as $t \to \infty$ (the conditional mean $\prod_{k=1}^t\sqrt{1-\beta_k}\,x_0$ vanishes under the same condition).
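For example, a constant schedule $\beta_l \equiv \beta \in (0,1)$ works, since $\prod_{l=1}^t(1-\beta) = (1-\beta)^t \to 0$; a schedule that decays too quickly, such as $\beta_l = \frac{1}{(l+1)^2}$, does not, since $\prod_{l=1}^{\infty}\left(1 - \frac{1}{(l+1)^2}\right) = \frac{1}{2} > 0$ and some information about $x_0$ survives in the limit.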