Suppose $A$ and $B$ are commuting $n\times n$ matrices. Any such $A$ has an eigenvalue $\lambda$ (although we may have to work over the complex numbers), with a corresponding eigenspace $E_\lambda$ of dimension $m$, where $0<m\leq n$. Choose a basis $\mathcal{B}=\{\textbf{x}_1,\dotsc,\textbf{x}_m\}$ for this eigenspace, and consider how $B$ acts on those basis vectors; collecting the images as columns gives a new representation $B'$ of (the restriction of) $B$ relative to this basis.
$$B'=B|_{E_\lambda}=\begin{pmatrix}
B\textbf{x}_1 & \dotsc & B\textbf{x}_m
\end{pmatrix}$$
A priori, since there are $m$ vectors with $n$ components making up the columns of this matrix, it would appear to be an $n\times m$ matrix, which need not be square. However, as you have already observed in your question, we know that
$$
A(B\textbf{x}_i) = AB\textbf{x}_i = BA\textbf{x}_i = B(A\textbf{x}_i) = B(\lambda \textbf{x}_i) = \lambda(B\textbf{x}_i),
$$
so $B\textbf{x}_i$ is also an eigenvector of $A$ with eigenvalue $\lambda$, or else is $\textbf{0}$. In other words, $B$ sends eigenvectors of $A$ to eigenvectors of $A$ with the same eigenvalue, or at worst to the zero vector (which lies in the eigenspace but is not itself an eigenvector); either way, $B\textbf{x}_i\in E_\lambda$. This is the step where the commutativity of $A$ and $B$ is invoked.
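As a quick numerical sanity check (a sketch using arbitrarily chosen commuting matrices, not part of the argument itself), one can verify that $B$ maps an eigenvector of $A$ back into the same eigenspace:

```python
import numpy as np

# Two commuting 3x3 matrices: both are block-diagonal with the same block
# structure, so AB = BA.  A has eigenvalue 2 with eigenspace span{e1, e2}.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A @ B, B @ A)      # A and B commute

x = np.array([1.0, 0.0, 0.0])         # eigenvector of A: A x = 2 x
Bx = B @ x
# B x is again an eigenvector of A for the same eigenvalue (or zero):
print(np.allclose(A @ Bx, 2 * Bx))    # True
```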
Hence each $B\textbf{x}_i$ lies in $E_\lambda$ and is therefore a linear combination of the basis vectors $\{\textbf{x}_j\}$; its coordinates relative to the basis $\left\{\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right\}$ form an $m$-component vector, so there exists an $m\times m$ matrix $P$ such that
$$B\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)=\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)P.$$
Now $P$ has an eigenvalue $\mu$ (possibly complex) with a corresponding eigenvector $\textbf{y}\neq\textbf{0}$; that is, $P\textbf{y}=\mu\textbf{y}$. Right-multiplying both sides of the previous formula by $\textbf{y}$, we get $$B\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}=\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)P\textbf{y}=\mu\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}.$$ Here the $n$-dimensional vector $\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}$, the linear combination of $\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}$ with coefficients from $\textbf{y}$, is nonzero because $\textbf{y}\neq\textbf{0}$ and the $\textbf{x}_{i}$ are linearly independent; hence it is an eigenvector of $B$ with eigenvalue $\mu$.
Moreover, $\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}$ is also an eigenvector of $A$ with eigenvalue $\lambda$, since each $\textbf{x}_{i}$ satisfies $A\textbf{x}_{i}=\lambda\textbf{x}_{i}$ and therefore
$$A\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}=\lambda\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}.$$
Therefore $\left(\textbf{x}_{1},\textbf{x}_{2},\cdots,\textbf{x}_{m}\right)\textbf{y}$ is an eigenvector of both $A$ and $B$.
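The whole construction can be sketched numerically (again with hypothetical commuting matrices chosen for illustration; `X` holds the eigenspace basis as columns, so $BX = XP$ determines $P$):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])   # commutes with A

lam = 2.0
X = np.eye(3)[:, :2]              # basis (x1, x2) of E_lambda, as columns

# Solve B X = X P for the m x m matrix P (exact here, since the
# columns of B X lie in the column space of X):
P = np.linalg.lstsq(X, B @ X, rcond=None)[0]

mu_all, Y = np.linalg.eig(P)      # eigenpair (mu, y) of P
mu, y = mu_all[0], Y[:, 0]

v = X @ y                         # candidate common eigenvector
print(np.allclose(A @ v, lam * v))   # True: eigenvector of A
print(np.allclose(B @ v, mu * v))    # True: eigenvector of B
```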