Proposition.
Let $\{X_k\}$ be a sequence of mutually independent random variables taking values in $\{-1, 1\}$ with
$\mathbb{E}(X_k) = Ck^{-\alpha}$ for each $k\ge 1$,
where $C$ and $\alpha$ are positive constants (necessarily $C \le 1$, so that these expectations are attainable).
Then, almost surely,
$$
\liminf_{k\to \infty} \sum_{\ell=1}^k X_\ell =
\begin{cases}
+\infty & \text{ if } \alpha < \dfrac{1}{2}, \\[2ex]
-\infty & \text{ if } \alpha > \dfrac{1}{2}.
\end{cases}
$$
Intuition. Let $\{U_k\}$ be a sequence of i.i.d. random variables uniformly distributed on $\{-1, 1\}$. Then $\{X_k\}$ and $\{U_k\}$ seem "close" to each other, and
$$
\mathbb{E} \left(\sum_{\ell=1}^k (X_\ell - U_\ell)\right) = C\sum_{\ell=1}^k\ell^{-\alpha}.
$$
Since $\sum_{\ell=1}^k U_\ell$ oscillates between roughly $-\sqrt{2k\log\log k}$ and $\sqrt{2k\log\log k}$ by the law of the iterated logarithm, we hope that its negative excursions are dominated by the drift $C\sum_{\ell=1}^k \ell^{-\alpha}$ when $\alpha < 1/2$, and dominate it when $\alpha > 1/2$.
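To see where the threshold $1/2$ comes from, one can compare growth rates (a heuristic only, stated for $0 < \alpha < 1$; for $\alpha \ge 1$ the drift is even smaller):
$$
C\sum_{\ell=1}^k \ell^{-\alpha} \asymp k^{1-\alpha}
\quad\text{versus}\quad
\sqrt{2k\log\log k},
$$
and $k^{1-\alpha}$ outgrows $\sqrt{k\log\log k}$ exactly when $1-\alpha > 1/2$, i.e. $\alpha < 1/2$.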
Proof. Define
$$
S_k = \sum_{\ell=1}^k X_\ell, \quad k \ge 1.
$$
Case 1. $\alpha <1/2$. Since $-1\le X_k \le 1$ and $\mathbb{E}(S_k) = C\sum_{\ell=1}^k \ell^{-\alpha} \ge Ck^{1-\alpha}$ (each summand is at least $k^{-\alpha}$), Hoeffding's inequality yields
$$
\mathbb{P}(S_k \le Ck^{1-\alpha}/2) \le \mathbb{P}(|S_k - \mathbb{E}(S_k)| \ge Ck^{1-\alpha}/2) \le 2\exp(-C^2k^{1-2\alpha}/8), \quad k\ge 1.
$$
Thus $\{\mathbb{P}(S_k \le Ck^{1-\alpha}/2)\}$ is summable. Hence the Borel-Cantelli lemma tells us
$$
\mathbb{P}(S_k \le Ck^{1-\alpha}/2 \text{ infinitely often}) = 0,
$$
that is, almost surely $S_k > Ck^{1-\alpha}/2$ for all sufficiently large $k$, and hence $S_k \to +\infty$, which is the desired conclusion.
Case 2. $\alpha > 1/2$ (or, more generally, $\sum_{k=1}^\infty \mathbb{E}(X_k)/\sqrt{k\log\log k} < +\infty$). Let $\{Y_k\}$ be a sequence of mutually independent random variables taking either $0$ or $1$ with
$$
\mathbb{P}(Y_k = 1) = \frac{k^\alpha}{C + k^\alpha}, \quad k\ge 1.
$$
In addition, we require that the sequences $\{X_k\}$ and $\{Y_k\}$ are mutually independent.
Define
$$
U_k = (X_k +1)Y_k - 1 \quad \text{and} \quad V_k = (X_k +1)(1-Y_k), \quad k\ge 1.
$$
Then each of $\{U_k\}$ and $\{V_k\}$ is a sequence of mutually independent random variables. By the independence between $\{X_k\}$ and $\{Y_k\}$, it is easy to check that
$$
\mathbb{P}(U_k = 1) = \frac{1}{2} = \mathbb{P}(U_k = -1), \quad k\ge 1,
$$
and
$$
\mathbb{P}(V_k = 2) = \frac{C}{2k^\alpha} = 1- \mathbb{P}(V_k = 0), \quad k\ge 1.
$$
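For completeness, here is the verification of these two displays, using $\mathbb{P}(X_k = 1) = \tfrac{1}{2}(1 + Ck^{-\alpha}) = \dfrac{C + k^\alpha}{2k^\alpha}$ and the independence of $X_k$ and $Y_k$:
$$
\mathbb{P}(U_k = 1) = \mathbb{P}(X_k = 1,\ Y_k = 1) = \frac{C + k^\alpha}{2k^\alpha}\cdot\frac{k^\alpha}{C + k^\alpha} = \frac{1}{2},
\qquad
\mathbb{P}(V_k = 2) = \mathbb{P}(X_k = 1,\ Y_k = 0) = \frac{C + k^\alpha}{2k^\alpha}\cdot\frac{C}{C + k^\alpha} = \frac{C}{2k^\alpha},
$$
while $U_k = -1$ and $V_k = 0$ on the complementary events.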
Clearly, $X_k = U_k + V_k$, and hence
$$
S_k = \sum_{\ell=1}^k U_\ell + \sum_{\ell=1}^k V_\ell, \quad k\ge 1.
$$
With $M_k = \sqrt{2k\log\log k}$ (which is positive and increases to $+\infty$ for $k \ge 3$; the first few terms below are irrelevant), the law of the iterated logarithm gives
$$
\liminf_{k\to\infty} \frac{1}{M_k}\sum_{\ell=1}^k U_\ell = -1
\quad\text{a.s.}
$$
On the other hand, since $\mathbb{E}(V_k) = Ck^{-\alpha} = \mathbb{E}(X_k)$ for each $k\ge 1$, the monotone convergence theorem and the assumption of Case 2 (in either form) give
$$
\mathbb{E}\left(\sum_{k=1}^\infty \frac{V_k}{M_k}\right) = \sum_{k=1}^\infty \frac{\mathbb{E}(V_k)}{M_k} < +\infty,
$$
which implies that $\sum_{k=1}^\infty (V_k/M_k) < +\infty$ a.s., and hence Kronecker's lemma ensures
$$
\lim_{k\to \infty} \frac{1}{M_k} \sum_{\ell=1}^k V_\ell = 0 \quad \text{a.s.}
$$
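For the record, the version of Kronecker's lemma used here reads: if $0 < b_k \uparrow +\infty$ and $\sum_{k} a_k/b_k$ converges, then
$$
\frac{1}{b_k}\sum_{\ell=1}^k a_\ell \longrightarrow 0 \quad (k\to\infty);
$$
it is applied pathwise with $a_\ell = V_\ell(\omega) \ge 0$ and $b_\ell = M_\ell$.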
Combining the law of the iterated logarithm for $\sum_\ell U_\ell$ with the limit just obtained, we have $\liminf_{k\to\infty} (S_k/M_k) = -1$ almost surely. In particular, $S_k \le -M_k/2$ for infinitely many $k$ almost surely, and since $M_k \to \infty$, this gives $\liminf_{k\to\infty} S_k = -\infty$, which is the desired conclusion. $\quad \blacksquare$
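As an illustration only (not part of the argument), here is a minimal simulation sketch of the biased $\pm 1$ walk; it assumes NumPy, and the choices $C = 0.5$, $n = 2\times 10^5$ and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_walk(alpha, C=0.5, n=200_000):
    """Partial sums S_k of X_k in {-1, +1} with E(X_k) = C * k**(-alpha)."""
    k = np.arange(1, n + 1, dtype=float)
    p = (1.0 + C * k ** (-alpha)) / 2.0   # P(X_k = 1); C = 0.5 keeps p in [0, 1]
    x = np.where(rng.random(n) < p, 1, -1)
    return np.cumsum(x)

for alpha in (0.3, 0.7):
    s = biased_walk(alpha)
    # alpha < 1/2: the walk should drift upward; alpha > 1/2: it should keep
    # dipping to new lows (here we just report a crude summary of one path).
    print(f"alpha = {alpha}: S_n = {s[-1]}, min over second half = {s[s.size // 2:].min()}")
```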
Remarks.
When $\alpha < 1/2$, the proof above does not need $\{X_k\}$ to take values in $\{1,-1\}$: uniform boundedness of the $X_k$ together with the stated order of their expectations suffices.
When $\alpha > 1/2$ (or, more generally, $\sum_{k=1}^\infty [\mathbb{E}(X_k)]^2 < +\infty$), thanks to @Fnacool's excellent answer and in alignment with @Snoop's comment, we know that the law of the iterated logarithm indeed holds for $\{X_k\}$ under $\mathbb{P}$, namely
$$
\liminf_{k\to \infty}\frac{S_k}{\sqrt{2k\log\log k}} = -1
\quad \text{and}\quad
\limsup_{k\to \infty}\frac{S_k}{\sqrt{2k\log\log k}} = 1
\quad \mathbb{P}\text{-a.s.}
$$
This is because Kakutani's dichotomy theorem ensures that $\mathbb{P}$ is equivalent to the measure $\mathbb{Q}$ under which $\{X_k\}$ is i.i.d. with
$$
\mathbb{Q}(X_k = 1) = \frac{1}{2} = \mathbb{Q}(X_k = -1), \quad k\ge 1
$$
(equivalent measures share the same sets of probability $1$),
since
$$
\sum_{k = 1}^\infty
\log \int_{\mathbb{R}}
\sqrt{\frac{\mathrm{d}\mathbb{P}_k}{\mathrm{d} \mathbb{Q}_k}} \mathrm{d} \mathbb{Q}_k
= \sum_{k = 1}^\infty \log\left(\frac{1}{2} \left[\sqrt{1+\mathbb{E}_\mathbb{P}(X_k)}+\sqrt{1-\mathbb{E}_\mathbb{P}(X_k)}\right]\right) > -\infty,
$$
where $\mathbb{P}_k$ is the marginal of $\mathbb{P}$ corresponding to $X_k$, and $\mathbb{Q}_k$ is similar.
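A short expansion explains why the condition $\sum_k [\mathbb{E}(X_k)]^2 < +\infty$ mentioned above is the right one: writing $x = \mathbb{E}_{\mathbb{P}}(X_k)$, one has
$$
\frac{\sqrt{1+x}+\sqrt{1-x}}{2} = \sqrt{\frac{1+\sqrt{1-x^2}}{2}} = 1 - \frac{x^2}{8} + O(x^4) \quad (x \to 0),
$$
so the $k$-th summand above is of order $-[\mathbb{E}_{\mathbb{P}}(X_k)]^2/8$, and the series stays bounded below (i.e. $> -\infty$) precisely when $\sum_k [\mathbb{E}_{\mathbb{P}}(X_k)]^2 < +\infty$.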
It is unclear to me what will happen if $\alpha = 1/2$. If you have an idea, you may share it on MathOverflow.
Update.
As pointed out on MathOverflow, $\{X_k\}$ indeed satisfies the following law of the iterated logarithm:
$$
\liminf_{k\to \infty}\frac{\sum_{\ell=1}^k[X_\ell - \mathbb{E}(X_\ell)]}{\sqrt{2k\log\log k}} = -1
\quad \text{and}\quad
\limsup_{k\to \infty}\frac{\sum_{\ell=1}^k[X_\ell - \mathbb{E}(X_\ell)]}{\sqrt{2k\log\log k}} = 1
\quad \text{a.s.}
$$
The lower limit implies
$$
\liminf_{k\to \infty} \sum_{\ell=1}^k X_\ell =
\begin{cases}
+\infty & \text{ if } \alpha < \dfrac{1}{2}, \\[2ex]
-\infty & \text{ if } \alpha \ge \dfrac{1}{2},
\end{cases}
$$
covering $\alpha = 1/2$. Indeed, $\liminf_k \sum_{\ell=1}^kX_\ell = -\infty$ if $\sum_{\ell=1}^k\mathbb{E}(X_\ell) = o(\sqrt{k\log\log k})$.
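At the boundary $\alpha = 1/2$, for instance, this condition holds because
$$
\sum_{\ell=1}^k \mathbb{E}(X_\ell) = C\sum_{\ell=1}^k \ell^{-1/2} \le C\Bigl(1 + \int_1^k x^{-1/2}\,\mathrm{d}x\Bigr) \le 2C\sqrt{k} = o\bigl(\sqrt{k\log\log k}\bigr).
$$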
The key is to apply the right version of the law of the iterated logarithm: not the one on Wikipedia as of 2024-09-11, which demands i.i.d. random variables with zero mean and unit variance, but the classical one due to Kolmogorov, which handles any sequence of independent, mean-zero random variables under a mild assumption on the magnitudes of the $X_k$. See also Theorems 7.1--7.3 in Petrov (1995).