First, let's flip the problem (literally).
Begin by noticing that when a function grows fast, its inverse grows slowly. Pictorially, the graph of the inverse function is the graph of the original function reflected about the line $y = x$ (since $f(x) = y$ means $f^{-1}(y) = x$, so the roles of $x$ and $y$ are swapped).
That is, if you have two invertible functions $f$ and $g$ such that $f$ is growing faster than $g$, then $f^{-1}$ will grow slower than $g^{-1}$, as observed by flipping about the line $y=x$.
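To see the flip concretely, here is a quick numerical sanity check in Python (the pair $e^x$ versus $x^2$ is just a convenient example): $e^x$ outgrows $x^2$, and correspondingly its inverse $\ln(x)$ is outgrown by $\sqrt{x}$.

```python
import math

# f(x) = e^x outgrows g(x) = x^2, so the inverse f^{-1}(x) = ln(x)
# is outgrown by g^{-1}(x) = sqrt(x) (for x > 0).
for x in (10.0, 20.0, 30.0):
    print(f"x={x:4.0f}  exp(x)/x^2 = {math.exp(x) / x**2:10.3e}  "
          f"ln(x)/sqrt(x) = {math.log(x) / math.sqrt(x):.4f}")
# The first ratio blows up while the second ratio shrinks.
```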
Thus, a strategy to find an explicitly describable function that grows slower than each of the iterated logarithms $^k \log(x)$, $k \geq 1$, is:
- Find a function that grows faster than the inverses of each of these functions, and
- Take the inverse of that function.
Now, the inverse of $^k \log(x)$ is just $e^{e^{\ldots^{e^x}}}$ where there are $k$ total $e$s. For example, the inverse of $^3 \log(x)$ would be $e^{e^{e^x}}$. Note that by convention, these towers are evaluated from the top down, e.g. $e^{e^x} = e^{(e^x)}$.
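A small Python sketch makes the inversion explicit (the helper names `iter_exp` and `iter_log` are mine, introduced just for illustration):

```python
import math

def iter_exp(k, x):
    """Apply t -> e^t to x, k times: e^(e^(...^(e^x))), evaluated top-down."""
    for _ in range(k):
        x = math.exp(x)
    return x

def iter_log(k, x):
    """Apply ln to x, k times: the iterated logarithm ^k log(x)."""
    for _ in range(k):
        x = math.log(x)
    return x

# The two undo each other (up to floating-point rounding):
print(iter_log(3, iter_exp(3, 1.5)))  # back to ~1.5
```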
So, do you know a function which grows faster than $e^x$, $e^{e^x}$, $e^{e^{e^x}}$, and so on? I think you see the parallel here: for "repeated multiplication", we found that exponentiation $e^x$ "beat" every form of multiplication asymptotically. Now exponentiation itself is playing the role of multiplication, so what operation takes on the role of exponentiation?
The answer comes from thinking about how we derived exponentiation from multiplication.
The origin of exponentiation was "we multiply a number repeatedly by itself: that leads to exponentiation".
Well, what if we say "we exponentiate a number repeatedly by itself: that leads to ..."?
There is a catch, however. When we said "multiply a number repeatedly by itself: that leads to exponentiation", we managed to make sense of multiplying a number by itself a "rational", or even "$\pi$", number of times using continuity arguments, and thereby defined $a^x$ for any positive real number $x$. The analogous question of how to exponentiate something repeatedly by itself $\pi$ times is best described not as "difficult" but as "disputed": it can be done in multiple inequivalent ways.
However, for positive integer repetition counts, the "..." two paragraphs above has a well-studied answer: it's called tetration.
More precisely, let $N \geq 1$ be an integer. Define the function $$
f(N) = e \uparrow \uparrow N := e^{e^{\ldots^{e}}}
$$
where there are $N$ total $e$s. This is exponentiation of the constant $e$, repeated $N$ times. Think now about what might happen if $N$ were a rational number or, worse, an arbitrary positive real number.
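A rough Python sketch of this integer tower (the name `tetrate_e` is mine) also shows how quickly the values explode: even $e \uparrow\uparrow 4$ already overflows a double-precision float.

```python
import math

def tetrate_e(n):
    """e ^^ n for an integer n >= 1: a tower of n e's, evaluated top-down."""
    result = math.e              # the single 'e' at the top of the tower
    for _ in range(n - 1):
        result = math.exp(result)
    return result

for n in (1, 2, 3):
    print(n, tetrate_e(n))
# n=1 gives e ~ 2.72, n=2 gives e^e ~ 15.15, n=3 gives ~ 3.8e6;
# tetrate_e(4) would raise OverflowError for double-precision floats.
```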
One solution of Hooshmand et al. is suggested here. It guarantees monotonicity of the extension $f(x)$ for all large enough $x$. So we will use Hooshmand's function with $a = e$: call it $f(x)$.
Let $k \geq 0$ be arbitrary. For simplicity, let $\exp_k(x) = e^{e^{\ldots^{e^x}}}$, where there are $k$ total $e$s, with $\exp_0(x) = x$. We can show that $\lim_{x \to \infty} \frac{f(x)}{\exp_k(x)} = +\infty$.
In fact, observe that $$
\frac{\ln(f(x))}{\ln(\exp_k(x))} = \frac{f(x-1)}{\exp_{k-1}(x)} \implies \lim_{x \to \infty} \frac{\ln(f(x))}{\ln(\exp_k(x))} = \lim_{x \to \infty}\frac{f(x-1)}{\exp_{k-1}(x)}
$$
So we only need the following lemmas:
- $\frac{f(x)}{x} \to +\infty$ as $x \to \infty$ (or, for immediate application above, $\frac{f(x-1)}{x} \to \infty$ as $x \to \infty$).
- If $g(x) \to \infty$ as $x \to \infty$ and $\frac{\ln(f(x))}{\ln(g(x))} \to \infty$ as $x \to \infty$, then $\frac{f(x)}{g(x)} \to \infty$ as $x \to \infty$.
The first might require some careful analysis along the real numbers using monotonicity and Hooshmand's conditions, but the second one is quite easy to prove.
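For completeness, here is the short argument for the second lemma (note that $\ln(g(x)) > 0$ for large $x$, since $g(x) \to \infty$): $$
\ln\left(\frac{f(x)}{g(x)}\right) = \ln(f(x)) - \ln(g(x)) = \ln(g(x))\left(\frac{\ln(f(x))}{\ln(g(x))} - 1\right) \to +\infty
$$
as $x \to \infty$, since $\ln(g(x)) \to +\infty$ and the bracketed factor also tends to $+\infty$; exponentiating gives $\frac{f(x)}{g(x)} \to \infty$.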
Now, we can consider the inverse of $f(x)$, which is the super-logarithm. Since Hooshmand's function is strictly increasing beyond some point, it does admit such an inverse there.
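A discrete stand-in in Python may help build intuition for how slowly this inverse grows (the helper `slog_e` below is an integer-valued sketch of my own, not Hooshmand's real-analytic construction):

```python
import math

def slog_e(x):
    """Integer sketch of the super-logarithm base e: the number of times
    ln can be applied before the value drops to 1 or below."""
    n = 0
    while x > 1.0:
        x = math.log(x)
        n += 1
    return n

# Even near the top of the double-precision range, the super-log is tiny
# and already lags behind the doubly iterated logarithm:
print(slog_e(1e300))              # 4
print(math.log(math.log(1e300)))  # ~ 6.54
```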
Observe that $f(x) = e^{f(x-1)} = e^{e^{f(x-2)}} = \exp_k(f(x-k))$ for any $k \geq 1$, so
$$
\lim_{x \to \infty} \frac{\exp_k^{-1}(x)}{f^{-1}(x)} = \lim_{x \to \infty} \frac{\exp_k^{-1}(f(x))}{f^{-1}(f(x))} = \lim_{x \to \infty} \frac{f(x-k)}{x} = +\infty
$$
(substituting $x \mapsto f(x)$, which is harmless since $f(x) \to \infty$, and using $\exp_k^{-1}(f(x)) = f(x-k)$). The last limit is $+\infty$ since we can write $$
\frac{f(x-k)}{x} = \frac{f(x-k)}{x-k} \cdot \frac{x-k}{x}
$$
and, as $x \to \infty$, the first factor tends to $+\infty$ by the first lemma while the second factor tends to $1$.
Hence, the super-logarithm grows slower than any iterated logarithm, fulfilling the requirements of the function you desire. As mentioned in the Wikipedia pages, tetration and the super-logarithm also find their way into Ramsey theory and algorithm complexity, so they are of interest outside of this question.