If $X_n$ have distribution $N(0, a_n)$ with $\sum_{n=1}^\infty a_n^b < \infty$ for some $b > 0$, then $X_n$ converge almost surely to $0$.
I was able to show (in a previous part of the same problem) that $\lim_{n \to \infty} a_n = 0$ implies convergence in probability to $0$, and I'm reasonably sure that $\sum_{n=1}^\infty a_n^b < \infty$ for some $b > 0$ implies $\lim_{n \to \infty} a_n = 0$, so I know the sequence must at least converge in probability (hence, some subsequence converges almost surely) to $0$. For almost sure convergence, I would like to argue via the (first) Borel-Cantelli lemma, i.e., show that $\sum_{n=1}^\infty P(|X_n| > \epsilon) < \infty$ for every $\epsilon > 0$. But when I try to do this by writing the probability in terms of the CDF of $|X_n|$ (the half-normal distribution), I consistently get inequalities pointing the wrong way, since I'm working with $1 - F_{|X_n|}(\epsilon)$. Is there a better way to handle this? Note that the $X_n$ are not stated anywhere to be independent.
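For reference, here is a sketch of the kind of estimate I suspect is needed, using the standard sub-Gaussian tail bound in place of the half-normal CDF (I'm assuming $a_n$ is the variance of $X_n$, and I'd still need to check the details):

```latex
% Chernoff / sub-Gaussian tail bound for $X_n \sim N(0, a_n)$ (variance $a_n$):
\[
  P(|X_n| > \epsilon) \;\le\; 2 \exp\!\left(-\frac{\epsilon^2}{2 a_n}\right).
\]
% Since $x^b e^{-x} \le (b/e)^b$ for all $x > 0$ (the maximum is at $x = b$),
% setting $x = \epsilon^2 / (2 a_n)$ gives
\[
  P(|X_n| > \epsilon) \;\le\;
  2 \left(\frac{b}{e}\right)^{\!b} \left(\frac{2 a_n}{\epsilon^2}\right)^{\!b},
\]
% so $\sum_n P(|X_n| > \epsilon)$ is dominated by a constant (depending on
% $\epsilon$ and $b$) times $\sum_n a_n^b$, which is finite by hypothesis.
```

If this is right, it would feed directly into Borel-Cantelli without ever manipulating $1 - F_{|X_n|}(\epsilon)$ explicitly; the first Borel-Cantelli lemma also doesn't require independence, which matters here.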