I am fairly new to stochastic processes, but I have been interested in studying telegraph processes to model attachment and detachment dynamics (between cells, in a biological context). However, there is a specific detail of these processes that I would like to understand.
Among the properties of a telegraph process, it is written that "knowledge of an initial state decays exponentially", and so, for a time $t\gg (\lambda_1+\lambda_2)^{-1}$, the stationary distribution has mean $$\tag{*} \langle X\rangle_s =\frac{\lambda_2}{\lambda_1+\lambda_2}, $$ where we took $c_1=1$ and $c_2=0$ (in the Wikipedia definition). Does this imply that, if I am simulating a discretisation of such a process, then as long as my time increment $\Delta t$ satisfies $\Delta t(\lambda_1+\lambda_2)\gg 1$, I can approximate the process by a Bernoulli process with success probability given by (*)? In other words, if the timescale at which we make the observations is much longer than the inverse of the sum of these rates, can we expect the process to be effectively memoryless at each observation, i.e. for the observations to form a Bernoulli process?
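To check my intuition numerically, I put together a minimal simulation sketch (the rates $\lambda_1=2$, $\lambda_2=3$, the observation spacing, and the number of observations below are arbitrary choices of mine, purely for illustration): I simulate the continuous-time two-state process exactly, record it only at times spaced by $\Delta t$ with $\Delta t(\lambda_1+\lambda_2)\gg 1$, and then compare the empirical mean with (*) and check that successive observations are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rates (my own choice, not from any reference):
# lam1 = rate of leaving state c1 = 1, lam2 = rate of leaving state c2 = 0.
lam1, lam2 = 2.0, 3.0
p_stationary = lam2 / (lam1 + lam2)   # stationary mean (*), with c1 = 1, c2 = 0

# Observation spacing chosen so that Delta_t * (lam1 + lam2) = 50 >> 1.
dt_obs = 50.0 / (lam1 + lam2)
n_obs = 20_000

def telegraph_at(times, lam1, lam2, x0=1, rng=rng):
    """Exact simulation of the two-state process: holding times are
    exponential with the rate of the current state; return the state
    at each requested (increasing) observation time."""
    out = np.empty(len(times), dtype=int)
    x = x0
    t_next = rng.exponential(1.0 / (lam1 if x == 1 else lam2))
    for i, t_obs in enumerate(times):
        # advance through all jumps that occur before this observation
        while t_next <= t_obs:
            x = 1 - x
            t_next += rng.exponential(1.0 / (lam1 if x == 1 else lam2))
        out[i] = x
    return out

obs_times = dt_obs * np.arange(1, n_obs + 1)
samples = telegraph_at(obs_times, lam1, lam2)

print("empirical mean      :", samples.mean())
print("stationary mean (*) :", p_stationary)
# lag-1 correlation between successive observations; should be ~0
# if the Bernoulli approximation is valid at this spacing
print("lag-1 correlation   :", np.corrcoef(samples[:-1], samples[1:])[0, 1])
```

With these (arbitrary) numbers, the empirical mean comes out close to $\lambda_2/(\lambda_1+\lambda_2)=0.6$ and the lag-1 correlation is negligible, which is what I would expect if the observations really do behave like independent Bernoulli draws in this limit. I would still like to understand whether this is rigorous rather than just consistent with a simulation.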
If that is the case, do you know of any reference for this? Apologies if this is standard knowledge, but I would like to understand this limit better.