$\def\glz{\operatorname{GL}_4(\mathbb Z)}$
$\def\inv{^{-1}}$
The question is about filling in some details about assertions made in this paper. The answer got quite long as I added more details. Guide: the part of the answer through the first edit shows that the commutant of the copy of $D_5$ in $\operatorname{GL}_4(\mathbb C)$ is two dimensional, abelian, and generated as a unital algebra by a certain matrix $M = R + R\inv$. However, the copy of $D_5$ and the matrix $M$ both lie in $\glz$, and the subsequent part of the argument in the 4th postlude shows that $M$ and $- 1$ (minus the identity matrix) generate the centralizer of $D_5$ in $\glz$ as a group. In between, I added more detail about the first part, and also explained the construction of the 4 dimensional representation of $D_5$ given in the paper.
I'm going to write $R$ and $J$ for your two matrix generators. $R$ is diagonalizable with eigenvalues the 4 primitive 5th roots of unity, call them $\omega, \omega^{-1}, \omega^2, \omega^{-2}$. In the basis of eigenvectors, a matrix commuting with your $D_5$ in particular commutes with $R$, so is diagonal. Also in the basis of eigenvectors, $J$, up to some scaling which is probably irrelevant, permutes the eigenvectors in pairs (belonging to inverse pairs of eigenvalues). Thus your diagonal commuting matrices have only two distinct diagonal entries. Thus the commutant is commutative, generated by two commuting idempotents of rank 2.
Now, still in the basis of eigenvectors, $M = R + R^{-1}$ is a diagonal matrix with just two eigenvalues $\omega + \omega^{-1}$ and $\omega^2 + \omega^{-2}$, corresponding to the ranges of the two desired idempotents. So in fact, the whole commutant is generated by polynomials in $M$.
Note that this is where $\mathcal R(\delta)$ came from: it is $R + R^{-1}$, written in the original basis, not the basis of eigenvectors.
I've considered the problem from the point of view of the commuting algebra, but the centralizer subgroup just consists of the invertible elements in the commuting algebra. I've also worked over the complex numbers. Anyway, I think this is close to what you need.
Edit: Looking at this from more of a group representations point of view, we can get to the same place more conceptually, perhaps. Clearly $R + R^{-1}$ is in the commutant for any representation. So $R + R^{-1}$ acts as a scalar in each irreducible representation. In a 2 dimensional irreducible representation, the value of that scalar is the character of $R$, which is $2 \cos(2 \pi k/5)$, where $k \in \{1, 2\}$ can be taken as a label for the representation. In either of the two one dimensional representations, $R + R^{-1}$ acts as the scalar $2$.
Now in our particular representation, by examining the spectrum of $R + R^{-1}$ or of $R$, you can see that both 2 dimensional irreducible representations occur with multiplicity 1, so it follows that $R + R^{-1}$ actually generates the commutant (as an algebra).
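If you want to see this concretely, here is a minimal numerical sketch in Python with numpy (the explicit matrix of $R$ is the one coming out of the quotient construction in the 3rd postlude below):

```python
import numpy as np

# R on the quotient basis (see the 3rd postlude): e1 -> e2 -> e3 -> e4,
# and e4 -> -(e1 + e2 + e3 + e4).
R = np.array([[0, 0, 0, -1],
              [1, 0, 0, -1],
              [0, 1, 0, -1],
              [0, 0, 1, -1]])
M = R + np.linalg.matrix_power(R, 4)  # R^5 = 1, so R^{-1} = R^4

# Each 2d irreducible contributes its scalar 2*cos(2*pi*k/5) twice, so the
# spectrum of M should be two doubled eigenvalues and nothing else.
eigs = np.sort(np.linalg.eigvals(M).real)
expected = np.sort([2*np.cos(2*np.pi/5)]*2 + [2*np.cos(4*np.pi/5)]*2)
print(np.allclose(eigs, expected))  # True
```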
2nd edit: You asked how to use polynomials in $R + R^{-1}$ to generate the whole commutant. In fact linear polynomials will do, because of the special structure here, as I will next explain.
Mini-mini-course in representation theory: Suppose I have a group representation $\rho : G \to \operatorname{GL}(V)$, and it decomposes as a direct sum of mutually inequivalent irreducible subrepresentations, $V = W_1 \oplus W_2 \oplus \cdots \oplus W_n$. Let $P_j$ be the projection operator which is $1$ on $W_j$ and zero on $W_i$ for $i \ne j$. Thus $P_j^2 = P_j$, $P_i P_j = 0 $ for $i \ne j$ and $\sum_j P_j = 1$,
where I write $1$ for the identity operator. Then the entire commutant $\mathcal C$ of $\rho(G)$ consists of linear combinations of $P_1,\dots, P_n$.
Mini-mini-course in linear algebra: Now suppose I have a collection of projection operators $P_1,\dots, P_n$ as above, with
$P_j^2 = P_j$, $P_i P_j = 0 $ for $i \ne j$ and $\sum_j P_j = 1$, no matter whether they came from a group representation or from somewhere else.
Let $W_j = P_j V$, so $V$ is the direct sum of the subspaces $W_j$. Let $\mathcal C$ be the algebra generated by the projection operators $P_j$; in fact, it consists of linear combinations of the operators $P_j$. Suppose someone hands me a particular linear combination $T= \sum_i \alpha_i P_i$ with all the $\alpha_i$ distinct. Then I claim that I can recover the $P_j$ and hence all of $\mathcal C$ from polynomials in $T$, and if $n = 2$ I can do it with linear polynomials.
This goes by the name of Lagrange interpolation.
Fix $i$ and consider
$$\prod_{j \ne i} \frac{ T - \alpha_j 1}{\alpha_i - \alpha_j},$$
which is a polynomial of degree $n-1$ in $T$. You can check that applied to a vector in $W_k$ for $k \ne i$, it gives zero, but applied to a vector in $W_i$ it gives back the vector. Thus the polynomial in $T$ is actually equal to $P_i$.
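In our situation $n = 2$, $T = M$, and the $\alpha_i$ are $2\cos(2\pi/5)$ and $2\cos(4\pi/5)$, so linear polynomials in $M$ already produce both projections. A numerical sketch in Python with numpy (the matrix of $M$ is the one written out in the 4th postlude, with $a = 1$, $b = 0$):

```python
import numpy as np

# T = M with n = 2: the two interpolation polynomials are linear in M.
M = np.array([[-1, 1, 0, -1],
              [0, 0, 1, -1],
              [-1, 1, 0, 0],
              [-1, 0, 1, -1]], dtype=float)
a1, a2 = 2*np.cos(2*np.pi/5), 2*np.cos(4*np.pi/5)  # the two eigenvalues of M
I = np.eye(4)

P1 = (M - a2*I) / (a1 - a2)   # kills W_2, identity on W_1
P2 = (M - a1*I) / (a2 - a1)   # kills W_1, identity on W_2

print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))  # idempotents
print(np.allclose(P1 @ P2, np.zeros((4, 4))))              # mutually orthogonal
print(np.allclose(P1 + P2, I))                             # sum to 1
```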
3rd postlude I see from your discussion/chat that you are still puzzled about the derivation of the 4d representation matrices. As "explained" in the physics paper, the representation was derived from the 5d permutation representation exactly as Josh B. suggested. First of all, you have to know that given a representation $V$ and a subrepresentation $W$, you get a quotient representation on $V/W$. Start with the permutation representation of $D_5$ on $F^5$, $F$ your favorite field, given by
$$
R: e_0 \to e_1 \to e_2 \to e_3 \to e_4 \to e_0
$$
where $e_i$ are the standard basis elements of $F^5$ (with shifted labels)
and
$$
J: e_0 \to e_0,\quad e_1 \to e_4 \to e_1, \quad e_2 \to e_3 \to e_2.
$$
Now $w = \sum_{j = 0}^4 e_j$ is fixed by both $R$ and $J$. In fact, it's fixed by all permutation matrices. Let $W = F w$ and $V = F^5/W$.
Give $V$ the basis $\overline e_1 = e_1 + W, \dots, \overline e_4 = e_4 + W$. Note $e_0 + W = -(\overline e_1 + \overline e_2 + \overline e_3 + \overline e_4)$, since $e_0 = -\sum_{j = 1}^4 e_j + w$. It follows that the matrices of $R$ and $J$ in $V = F^5/W$ with respect to the basis
$\{\overline e_i\}_{1 \le i \le 4}$ are exactly those that you quoted from the paper.
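The whole derivation can also be checked mechanically. A minimal sketch in Python with numpy (the matrix names are ad hoc; $P$ implements the identification $e_0 + W = -(\overline e_1 + \overline e_2 + \overline e_3 + \overline e_4)$):

```python
import numpy as np

# 5d permutation representation of D_5: R cycles e0 -> e1 -> ... -> e4 -> e0,
# J fixes e0 and swaps e1 <-> e4, e2 <-> e3.
R5 = np.roll(np.eye(5, dtype=int), 1, axis=0)
J5 = np.zeros((5, 5), dtype=int)
for j in range(5):
    J5[(-j) % 5, j] = 1

# Quotient map onto V = F^5/W in the basis e1bar..e4bar: a basis vector
# e_j (j >= 1) maps to e_jbar, while e_0 maps to -(e1bar + ... + e4bar).
P = np.hstack([-np.ones((4, 1), dtype=int), np.eye(4, dtype=int)])
R4 = P @ R5[:, 1:]
J4 = P @ J5[:, 1:]
print(R4)  # the 4d matrix of R on the quotient
print(J4)  # the 4d matrix of J on the quotient

# The quotient matrices still satisfy the dihedral relations.
I4 = np.eye(4, dtype=int)
print(np.array_equal(np.linalg.matrix_power(R4, 5), I4))            # True
print(np.array_equal(J4 @ J4, I4))                                  # True
print(np.array_equal(J4 @ R4 @ J4, np.linalg.matrix_power(R4, 4)))  # True
```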
4th postlude: I finished it off.
Demonstration that the centralizer of $D_5$ in $\glz$ is generated (as a group) by
$M = R + R\inv$ and $- 1$. Recall that $\glz$ is the set of integer matrices with determinant $\pm 1$.
We already know that any matrix that commutes with $D_5$ pointwise is a linear combination of $M$ and the identity matrix. We need to find out when $a M + b 1$
has integer entries and determinant $\pm 1$. By inspection,
$$
a M + b 1 =\left(
\begin{array}{cccc}
b-a & a & 0 & -a \\
0 & b & a & -a \\
-a & a & b & 0 \\
-a & 0 & a & b-a \\
\end{array}
\right)
$$
is integer valued if and only if $a$ and $b$ are integers. By computation,
$$
\det(a M + b 1) = \left(a^2+a b-b^2\right)^2,
$$
and in particular $\det M = 1$. Moreover, $\det(a M + b 1) \ge 0$, so $$a M + b 1 \in \glz \implies \det(a M + b 1) = 1.$$
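The determinant identity is easy to spot-check. A minimal sketch in Python with numpy, testing it over a small range of integer coefficients:

```python
import numpy as np

M = np.array([[-1, 1, 0, -1],
              [0, 0, 1, -1],
              [-1, 1, 0, 0],
              [-1, 0, 1, -1]])
I = np.eye(4, dtype=int)

# det(a*M + b*1) should equal (a^2 + a*b - b^2)^2 for all integers a, b.
ok = all(round(np.linalg.det(a*M + b*I)) == (a*a + a*b - b*b)**2
         for a in range(-5, 6) for b in range(-5, 6))
print(ok)  # True
```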
The minimal polynomial for $M$ is
$x^2 + x -1$, as follows for example from the computation of the eigenvalues of $M$. Thus we have
$$
M\inv = (M + 1) \quad \text{and} \quad M^2 = (1 - M).
$$
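Both identities are one-line checks. A minimal sketch in Python with numpy:

```python
import numpy as np

M = np.array([[-1, 1, 0, -1],
              [0, 0, 1, -1],
              [-1, 1, 0, 0],
              [-1, 0, 1, -1]])
I = np.eye(4, dtype=int)

print(np.array_equal(M @ M + M, I))    # M^2 + M - 1 = 0
print(np.array_equal(M @ (M + I), I))  # so M^{-1} = M + 1
print(np.array_equal(M @ M, I - M))    # and M^2 = 1 - M
```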
Observation 1. Suppose $a M + b 1 \in \glz$; then $a$ and $b$ are relatively prime.
Proof. If $a$ and $b$ have a common prime factor $p$, then $\det(a M + b 1)$ is divisible by $p^4$. qed
Suppose that $T = a M + b 1 \in \glz$. We want to show that $T$ is in the subgroup of $\glz$ generated by $M$ and $-1$. If one of the coefficients $a, b$ is zero, then $T = \pm M$ or $T = \pm 1$, and we are done. If both of the coefficients $a, b$ are $\pm 1$, then $T = \pm M\inv$ or $T = \pm M^2$, by the discussion of the minimal polynomial, so again we are done. Otherwise we proceed by induction on $|a| + |b|$.
Let us consider the case that $a b > 0$. By extracting a factor of $-1$ (i.e. the matrix $-1$), we can assume that $a, b > 0$, still relatively prime. Moreover, we are only interested in the case that $a b > 1$; in particular, $a \ne b$.
Observation 2. $ b > a > 0$.
Proof. If $a > b$ then $$a^2 + ab - b^2 > a^2 - b^2 = (a - b)(a + b) \ge a + b \ge 3 > 1,$$ since $a > b \ge 1$ forces $a - b \ge 1$ and $a + b \ge 3$. Hence $\det(T) > 1$, a contradiction. qed
Now we take our element $T = a M + b 1 \in \glz$ and factor out $M\inv$:
$$
T = M\inv (a M^2 + b M) = M\inv ( a(1 - M) + b M) = M\inv ( (b - a) M + a 1).
$$
The factor $ (b - a) M + a 1$ is necessarily in $\glz$, and has positive coefficients whose sum $b$ is less than the sum of the coefficients of $T$. So our conclusion follows from the induction hypothesis.
Next consider the case that $a b < 0$. By extracting a factor of $-1$ if necessary, we can assume
$a > 0 > b$, and $|a b| > 1$.
Observation 3. $ a > -b$.
Proof. If $-b > a$, then
$$
b^2 - ab - a^2 > b^2 - a^2 = (-b - a)(-b + a) \ge -b + a \ge 3 > 1,
$$
since $-b > a \ge 1$ forces $-b - a \ge 1$ and $-b + a \ge 3$.
Hence $\det(T) = (b^2 - ab - a^2)^2 > 1$, a contradiction. qed
We take our element $T = a M + b 1 \in \glz$ and factor out $M$:
$$
T = M( a 1 + b M\inv) = M ( a 1 + b (M + 1)) = M ( b M + (a + b) 1).
$$
The factor $b M + (a + b) 1$ is in $\glz$ and has coefficients $a + b > 0$ and $b <0$. The sum of absolute values of these coefficients is $(a + b) - b = a$, which is less than the corresponding sum for the coefficients of $T$, namely $a - b$. Again our conclusion follows from the induction hypothesis.
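The induction is really an algorithm: keep extracting factors of $-1$, $M\inv$, or $M$ until a base case is reached. A sketch in Python with numpy (the function name `factor` is ad hoc):

```python
import numpy as np

def factor(a, b):
    """Return (sign, k) with a*M + b*1 == sign * M**k, following the
    induction in the 4th postlude.  Assumes a*M + b*1 lies in GL_4(Z)."""
    assert (a*a + a*b - b*b)**2 == 1
    if a == 0:
        return (b, 0)                 # T = +/- 1
    if b == 0:
        return (a, 1)                 # T = +/- M
    if a < 0:                         # extract a factor of -1
        s, k = factor(-a, -b)
        return (-s, k)
    if b > 0:                         # case ab > 0: T = M^{-1}((b-a)M + a 1)
        s, k = factor(b - a, a)
        return (s, k - 1)
    s, k = factor(b, a + b)           # case ab < 0: T = M(b M + (a+b) 1)
    return (s, k + 1)

M = np.array([[-1, 1, 0, -1],
              [0, 0, 1, -1],
              [-1, 1, 0, 0],
              [-1, 0, 1, -1]], dtype=float)
I = np.eye(4)

for a, b in [(1, 1), (2, 3), (-3, -5), (2, -1), (5, -3), (0, -1)]:
    s, k = factor(a, b)
    assert np.allclose(a*M + b*I, s * np.linalg.matrix_power(M, k))
print("every tested centralizer element is +/- a power of M")
```

(Note that the coefficient pairs with $(a^2 + ab - b^2)^2 = 1$ are, up to sign, consecutive Fibonacci numbers, which is why pairs like $(2, 3)$ and $(-3, -5)$ appear in the test list.)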