First, your question has a trivial answer: there is no general step-size boundary at $1$. If you have an ODE $y'(t)=f(y(t))$ and consider the rescaled function $u(s)=y(sT)$, then this new function satisfies $u'(s)=Ty'(sT)=Tf(u(s))$. The error of the numerical solution, however, is independent of this time scaling (provided the integration interval is scaled likewise), so by varying $T$ you can create instances of the problem whose step sizes are numerically small or large for the same sequence of $y$ values. Think of a solar-system simulation set up to use days and astronomical units, then rescaled to years and light-years, or to seconds and meters: the time step will remain essentially the same for the same relative error tolerance, but the number that represents it will be drastically different.
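This scaling invariance can be checked numerically. Below is a minimal sketch with a step-doubling explicit Euler integrator (my own toy example, not one of the embedded pairs mentioned later); the same problem is solved in original time and in time rescaled by a hypothetical factor $T=365$, and the accepted step sizes shrink by exactly that factor while the trajectory of $y$ values stays the same:

```python
import math

def adaptive_euler(f, y0, t0, t1, tol):
    """Explicit Euler with a step-doubling error estimate; returns the
    list of accepted step sizes.  A minimal sketch only -- production
    codes (RKF45, DoPri45) use embedded higher-order pairs instead."""
    t, y = t0, y0
    h = (t1 - t0) / 100.0            # initial guess for the step size
    steps = []
    while t < t1:
        h = min(h, t1 - t)
        y_full = y + h * f(y)                    # one full step
        y_half = y + 0.5 * h * f(y)              # two half steps
        y_two = y_half + 0.5 * h * f(y_half)
        err = abs(y_two - y_full)                # local error estimate
        if err <= tol or h < 1e-12:              # accept the step
            t, y = t + h, y_two
            steps.append(h)
        # elementary controller: err ~ h^2 for Euler, so exponent 1/2
        h *= min(2.0, max(0.5, 0.9 * (tol / (err + 1e-300)) ** 0.5))
    return steps

f = lambda y: -y                     # y' = -y on [0, 1]
T = 365.0                            # rescale time: u(s) = y(s*T)
g = lambda u: T * f(u)               # u' = T f(u) on [0, 1/T]

steps_y = adaptive_euler(f, 1.0, 0.0, 1.0, 1e-6)
steps_u = adaptive_euler(g, 1.0, 0.0, 1.0 / T, 1e-6)
# Same trajectory of y values, but every step is smaller by the factor T.
```

The arithmetic inside each step is identical in both runs up to rounding, since $h_u\,g(u) = (h_y/T)\,Tf(u) = h_y f(u)$; only the number stored in `h` differs.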
Secondly, about truncation errors and expansions. The mentioned methods are designed for variable step sizes, pairing a marching method with an embedded method of a different order for error estimation: RKF45 marches with the order-4 result and estimates the error with an embedded order-5 method, while DoPri45 does it the other way around and marches with the order-5 result.
A one-step method for $y'=f(y)$ (autonomous for simplicity, not really required) can be summarized as
$$
y_{n+1}=y_n+h_n\Phi_f(y_n,h_n)
$$
The local truncation error has order $p$ if for an exact solution $y$ of the ODE one gets
$$
y(x_n+h_n)=y(x_n)+h_n\Phi_f(y(x_n),h_n)+h_n^{p+1}R_p(y(x_n),h_n)
$$
To leading order, the expression $R_p$ is a polynomial in $f$ and the derivatives of $f$ up to order $p$. If one considers some bounded region around the exact solution close to $y(x_n)$, this expression is bounded by some constant $M_p(y(x_n))$.
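For the explicit Euler method, for instance, $\Phi_f(y,h)=f(y)$ and $p=1$: expanding the exact solution in a Taylor series and using $y''=f'(y)y'=f'(y)f(y)$ gives
$$
y(x_n+h)=y(x_n)+hf(y(x_n))+\frac{h^2}{2}f'(y(x_n))f(y(x_n))+O(h^3),
$$
so $R_1(y,h)=\tfrac12 f'(y)f(y)+O(h)$, and $M_1$ is a bound on $\tfrac12|f'f|$ near the solution.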
The adaptive step-size controller now attempts to estimate the truncation error, for instance via the embedded error estimator, and regulates the step size so that $M_p(y(x_n))h_n^p$, or in practice $M_p(y_n)h_n^p$, stays about constant at a level given by the tolerances, say $C\varepsilon$, where $\varepsilon$ is the desired global error.
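Such a controller can be sketched as follows; the safety factor and the clamping bounds are common but arbitrary choices, not prescribed by anything above:

```python
def propose_step(h, err_est, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary step-size controller.  The embedded estimator returns
    err_est ~ M_p * h**(p+1); dividing by h gives the error per unit
    step, M_p * h**p, which is driven toward tol (the level C*eps)."""
    err_per_unit = err_est / h
    factor = safety * (tol / err_per_unit) ** (1.0 / p)
    return h * min(fac_max, max(fac_min, factor))
```

If the estimate sits exactly on the tolerance, the proposal is `0.9 * h`; a much too large error shrinks the step and a small one grows it, clamped to avoid erratic jumps.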
Then the local errors sum up to the global error
$$
e(x_n)=C\varepsilon\sum_{k=0}^{n-1} h_ke^{L(x_n-x_k)}\approx\frac{e^{L(x_n-x_0)}-1}{L}C\varepsilon
$$
where $L$ is a Lipschitz constant of $f$. The factor $e^{L(x_n-x_k)}$ is an upper bound for the magnification that the local truncation error committed at $x_k$ undergoes as part of the global error at $x=x_n$; this is the same mechanism as in the Grönwall lemma for a linear differential inequality. The approximation above then reads the sum as a Riemann sum for $\int_{x_0}^{x_n}e^{L(x_n-x)}\,dx$.
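A quick numerical sanity check of this bound, using explicit Euler on the model problem $y'=y$ (my choice for illustration; here $L=1$, and the per-unit-step local error of Euler is at most $(h/2)\max|y''|=(h/2)e$ on $[0,1]$):

```python
import math

# y' = y, y(0) = 1 on [0, 1]; exact solution y(t) = e^t, Lipschitz constant L = 1
L, h, n = 1.0, 0.01, 100
y = 1.0
for _ in range(n):
    y += h * y                       # explicit Euler step

global_err = math.e - y              # actual global error at t = 1
# With C*eps = (h/2)*e, the Groenwall-style sum predicts the bound
# (h*e/2) * (e^L - 1)/L for the global error, which should dominate.
bound = (h * math.e / 2.0) * (math.exp(L) - 1.0) / L
```

With $h=0.01$ the observed error is roughly half the predicted bound, as expected from an upper estimate of this kind.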
All in all, this means that the step sizes used depend mainly on the size of $f$ and of its derivatives, and only then on the method, as that determines the concrete shape and coefficients inside $R_p$. Between methods of different orders there can be enough variability in these magnitudes that their step-size selections, at the larger end of the range of sensible error tolerances, are rather unrelated.