I am interested in computing the distribution of the stopping time $\tau^b = \inf\{ t>0 : X_t^b \leq 0 \}$ for a process $X^b = (X_t^b)_{t \geq 0}$ with initial condition $X_0^b = x_0 > 0$. More precisely, for $t> 0$, I would like an expression for:
$$\mathbb{P} \{ \tau^b \leq t\}$$
The process $X^b$ is defined below. Here $\chi$ denotes the indicator function, $W=(W_t)_{t \geq 0}$ is a standard Brownian motion, $b, \sigma >0$, and $\mu_1, \mu_2 \in \mathbb{R}$.
\begin{align} dX_t^b = \left[\mu_1 + (\mu_2 - \mu_1) \chi_{\{X_t^b > b\}}\right] dt + \sigma dW_t \quad &\text{for $t \in [0,\tau^b)$}\\ dX_t^b = 0 \quad & \text{for $t \geq \tau^b$} \end{align}
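For concreteness, here is a minimal Monte Carlo sketch (Euler–Maruyama; the parameter values are placeholders I chose only for illustration) that I use to sanity-check candidate expressions for $\mathbb{P}\{\tau^b \leq t\}$:

```python
import numpy as np

# Placeholder parameters, chosen only for illustration.
x0, b, sigma = 1.0, 2.0, 1.0
mu1, mu2 = 0.5, -0.5
t, n_steps, n_paths = 5.0, 5_000, 100_000
dt = t / n_steps

rng = np.random.default_rng(0)
x = np.full(n_paths, x0)
hit = np.zeros(n_paths, dtype=bool)  # has the path reached 0 yet?

for _ in range(n_steps):
    drift = np.where(x > b, mu2, mu1)                   # drift switches at the threshold b
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    x = np.where(hit, x, x + drift * dt + sigma * dw)   # absorbed paths stay frozen
    hit |= (x <= 0.0)

print("Monte Carlo estimate of P{tau^b <= t}:", hit.mean())
```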
Observe that this is just a Brownian motion whose drift changes depending on whether $X^b$ is above or below the threshold $b$. To compute this probability, I am trying to use the Markov property of the process together with a suitable partition of the sample space. For instance, for $r \in (0,t)$ define the events:
\begin{gather} A_r^- := \{ X^b_r=b,\ X^b_s<b \text{ for } s \in (r,t)\} \\ A_r^+ := \{ X^b_r=b,\ X^b_s>b \text{ for } s \in (r,t)\} \\ A_0^- := \{ X^b_s<b \text{ for } s \in (0,t)\} \\ A_0^+ := \{ X^b_s>b \text{ for } s \in (0,t)\} \end{gather}
Then $A_k^l \cap A_j^m = \emptyset$ for $l,m \in \{-,+\}$, $k \neq j$, and $\bigcup_r (A_r^+ \cup A_r^-) = \Omega$ (up to a null event). Further, conditioned on any one of these events, $X^b$ does not change its dynamics from time $r$ onwards. Using the Markov property, the computation of $\mathbb{P} \{ \tau^b \leq t \mid A_r^-\}$ is perhaps not too hard (observe that $\mathbb{P} \{ \tau^b \leq t \mid A_r^+\} = 0$ for all $r$), but then of course we also need to find out what $\mathbb{P} \{A_r^-\}$ is.
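Very informally, the decomposition I have in mind is something like
$$\mathbb{P}\{\tau^b \leq t\} = \mathbb{P}\{\tau^b \leq t \mid A_0^-\}\,\mathbb{P}\{A_0^-\} + \text{``}\sum_{r \in (0,t)}\text{''}\, \mathbb{P}\{\tau^b \leq t \mid A_r^-\}\,\mathbb{P}\{A_r^-\},$$
where the "sum" over the continuum of times $r$ would of course have to be turned into an integral against a suitable density (each individual $\mathbb{P}\{A_r^-\}$ is zero), and this is part of what I do not know how to make rigorous.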
I know that for a "pure" Brownian motion (constant drift, no threshold) the answer is well known (see here, for instance); I recall the formula below for reference. The motivation for this comes from the optimal dividend problem in stochastic control. The process I am actually studying is a little more complex, but understanding this simplified version will be helpful. Any references are also very welcome.
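For reference, in the single-drift case ($\mu_1 = \mu_2 = \mu$, so the threshold plays no role), i.e. $X_t = x_0 + \mu t + \sigma W_t$, the distribution I have in mind is the classical first-passage law of Brownian motion with drift:
$$\mathbb{P}\{\tau \leq t\} = \Phi\!\left(\frac{-x_0 - \mu t}{\sigma \sqrt{t}}\right) + e^{-2\mu x_0/\sigma^2}\,\Phi\!\left(\frac{-x_0 + \mu t}{\sigma \sqrt{t}}\right),$$
where $\Phi$ is the standard normal CDF. What I am after is an analogue of this for the two-drift process above.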