Your ODE is $$y'=xe^{2x}-4y,$$ and the exact solution has the form $$y(x)=(Ax+B)e^{2x}+Ce^{-4x}.$$ Inserting this into the ODE gives the coefficient conditions
$$
y'+4y=(6Ax+A+6B)e^{2x}\overset{!}{=}xe^{2x}\implies A=\frac16,\ B=-\frac1{36}.
$$
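As a quick cross-check of these coefficients one can let a computer algebra system solve the ODE symbolically; a minimal sketch, assuming `sympy` is installed (the variable names are illustrative):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # solve y' = x*exp(2*x) - 4*y symbolically
    sol = sp.dsolve(sp.Eq(y(x).diff(x), x*sp.exp(2*x) - 4*y(x)), y(x))
    print(sp.simplify(sol.rhs))
    # expected to be equivalent to (x/6 - 1/36)*exp(2*x) + C1*exp(-4*x)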
For the Taylor method we use the ODE to compute the higher derivatives of $y$. From the Leibniz product rule $$(xf(x))^{(k)}=xf^{(k)}(x)+kf^{(k-1)}(x)$$ we get in general $$y^{(k+1)}=\bigl(xe^{2x}\bigr)^{(k)}-4y^{(k)}=2^{k-1}(2x+k)e^{2x}-4y^{(k)},$$ which up to 4th order gives
\begin{align}
y''&=(2x+1)e^{2x}-4y'\\
y'''&=(4x+4)e^{2x}-4y''\\
y^{(4)}&=(8x+12)e^{2x}-4y'''
\end{align}
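Since this recursion holds for every $k$, the Taylor increment can also be built up to arbitrary order instead of hard-coding the derivatives. A minimal sketch of that idea (the name `taylor_increment` and its signature are mine, not part of the original code):

    from math import exp, factorial

    def taylor_increment(x, y, h, order=4):
        """Return sum_{k=1}^{order} h^k/k! * y^(k)(x) for y' = x*exp(2x) - 4*y."""
        dk = x*exp(2*x) - 4*y                           # y'
        inc = h*dk
        for k in range(1, order):
            dk = 2**(k - 1)*(2*x + k)*exp(2*x) - 4*dk   # y^(k+1) from y^(k)
            inc += h**(k + 1)/factorial(k + 1)*dk
        return inc

With `order=4` this reproduces exactly the update used in the `Taylor` function below.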
This we then use to compute the Taylor steps (in Python; `exp` comes from the `math` module):
    from math import exp

    def Taylor(y0, N):
        h = 2.0/N                 # step size on the interval [0, 2]
        x, y = 0.0, y0
        for k in range(N):
            # derivatives at (x, y) from the recursion above
            y1 = x*exp(2*x) - 4*y
            y2 = (2*x + 1)*exp(2*x) - 4*y1
            y3 = (4*x + 4)*exp(2*x) - 4*y2
            y4 = (8*x + 12)*exp(2*x) - 4*y3
            # degree-4 Taylor update
            x, y = x + h, y + h*y1 + h**2/2*y2 + h**3/6*y3 + h**4/24*y4
        return y
Add some code to test this method against the exact solution:

    def y_exact(x, y0): return ((6*x - 1)*exp(2*x) + (36*y0 + 1)*exp(-4*x))/36

    y0 = 1
    y2 = y_exact(2, y0)
    for N in [5, 10, 100, 500, 750, 1000]:
        yT1 = Taylor(y0, N)
        yT2 = Taylor(y0, 2*N)
        print(N, yT1, yT1 - y2)
        print(2*N, yT2, yT2 - y2, (yT1 - y2)/(yT2 - y2))
This yields the following values, errors and error ratios (each ratio compares the error for $N$ steps with the error for $2N$ steps):
       N  y_Taylor(2)    error               ratio
       5  16.626489487   -0.056623359684
      10  16.6798692947  -0.00324355204858   17.4572070483
      10  16.6798692947  -0.00324355204858
      20  16.6829256052  -0.000187241566536  17.3228205071
     100  16.6831125714  -2.75283085216e-07
     200  16.6831128297  -1.70001754896e-08  16.1929555013
     500  16.6831128463  -4.31953139923e-10
    1000  16.6831128467  -2.68691735528e-11  16.0761602539
     750  16.6831128466  -8.58904058987e-11
    1500  16.6831128467  -3.74456021746e-12  22.9373814042
    1000  16.6831128467  -2.68691735528e-11
    2000  16.6831128467  -5.60618218515e-12   4.7927756654
This table confirms the expected behaviour. There is a working range of $h$ where halving the step size reduces the error by a factor of about $2^4=16$, as expected for a 4th-order method. For $h$ above that working range the higher-order terms of the error expansion are still noticeable and perturb the ratio away from 16. Below that working range the floating-point noise accumulated over the larger number of steps becomes comparable to or larger than the method error, which makes the error ratio for a step-size halving essentially random.
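To make the ratios easier to read one can convert them into an observed order of convergence, i.e. $\log_2$ of the error ratio; a small illustrative helper (not part of the original test code):

    from math import log2

    def observed_order(err_coarse, err_fine):
        # log2 of the error ratio for a step-size halving; about 4 is expected here
        return log2(abs(err_coarse/err_fine))

    # e.g. log2(17.457) ~ 4.13 and log2(16.08) ~ 4.01 in the working range,
    # while log2(4.79) ~ 2.26 signals the roundoff-dominated regime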
In summary, I do not see the problem you report. This is either due to a radically different initial condition or some error in implementing the Taylor method.