(Preamble) In the book *Representation Theory: A First Course* (Fulton, Harris), the following claim appears on page 165 (stated as an observation, without proof):
The eigenvalues $\alpha$ occurring in an irreducible representation of $\mathfrak{sl}_3(\mathbb{C})$ differ from one another by integral linear combinations of the vectors $L_i - L_j \in \mathfrak{h}^*$
Prior to this claim there is a quick derivation, where $X \in \mathfrak{g}_\alpha$, $v \in V_\beta$, $H \in \mathfrak{h}$, $\mathfrak{g}$ is a Lie algebra (in the book $\mathfrak{g} = \mathfrak{sl}_3(\mathbb{C})$, though the computation is general), and $\mathfrak{h}$ is the subalgebra of $\mathfrak{sl}_3(\mathbb{C})$ consisting of diagonal matrices. The eigenvalues $\alpha, \beta$ are linear functionals on $\mathfrak{h}$ such that $\mathfrak{g}_\alpha = \{X \in \mathfrak{g}\mid \forall H \in \mathfrak{h}:[H, X] = \alpha(H)X\}$ and $V_\beta = \{v \in V\mid \forall H \in \mathfrak{h}:Hv = \beta(H)v\}$, where $V$ is a representation of $\mathfrak{g}$, i.e. a complex vector space on which $\mathfrak{g}$ acts.
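For reference, the fact used implicitly here (stated earlier in the book; the display below is my paraphrase) is that $\mathfrak{h}$ acts diagonalizably on any finite-dimensional representation $V$, giving the weight-space decomposition

$$V = \bigoplus_{\beta} V_\beta,$$

where $\beta$ ranges over the finitely many eigenvalues (weights) in $\mathfrak{h}^*$.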
The given derivation is

$$H(X(v)) = X(H(v)) + [H, X](v) = X(\beta(H)v) + (\alpha(H)X)(v) = (\alpha(H) + \beta(H))\,X(v),$$

where the first equality holds because $V$ is a representation, so $[H, X]$ acts as $HX - XH$. The authors then state that
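For concreteness, here is how the roots $L_i - L_j$ arise in this setup (the display is mine, but it follows the book's notation, with $L_i \in \mathfrak{h}^*$ reading off the $i$-th diagonal entry and $E_{ij}$ the elementary matrix with a $1$ in position $(i,j)$):

$$H = \begin{pmatrix} a_1 & & \\ & a_2 & \\ & & a_3 \end{pmatrix},\quad a_1 + a_2 + a_3 = 0, \qquad [H, E_{ij}] = (a_i - a_j)E_{ij} = (L_i - L_j)(H)\,E_{ij},$$

so the nonzero eigenvalues $\alpha$ occurring in the adjoint action of $\mathfrak{h}$ on $\mathfrak{sl}_3(\mathbb{C})$ are exactly the six functionals $L_i - L_j$ with $i \neq j$.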
We see from this that $X(v)$ is again an eigenvector for the action of $\mathfrak{h}$ with eigenvalue $\alpha + \beta$
(Main question) It is perfectly clear to me that, given any single eigenvalue $\beta$, we may jump to some other eigenvalues by adding different roots $\alpha = L_i - L_j$ for various $i, j$. What is unclear to me is how we know that we can jump from any eigenvalue $\beta_1$ to any other eigenvalue $\beta_2$ using the $L_i - L_j$'s.
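To make the question concrete (this example is mine, not from the book): in the standard representation $V = \mathbb{C}^3$ with standard basis $e_1, e_2, e_3$, the weights are $L_1, L_2, L_3$ and

$$V = V_{L_1} \oplus V_{L_2} \oplus V_{L_3}, \qquad V_{L_k} = \mathbb{C}e_k, \qquad E_{ij}\,e_j = e_i \in V_{L_j + (L_i - L_j)},$$

so here every weight is visibly reachable from every other one by a single $L_i - L_j$. My question is why such reachability, i.e. $\beta_2 - \beta_1$ being an integral linear combination of the $L_i - L_j$, holds for the weights of an arbitrary irreducible representation.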