Sort of a spiritual successor to *Accurate floating-point linear interpolation*.
Using $\oplus$, $\ominus$, and $\otimes$ to represent IEEE-754 addition, subtraction, and multiplication respectively, the previous question's method (2) for linear interpolation is
$$ lerp(t, v_0, v_1) = ((1 \ominus t) \otimes v_0) \oplus (t \otimes v_1)\text{.} $$
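For concreteness, here is a minimal C sketch of that formula (my own; it assumes `FLT_EVAL_METHOD == 0`, so each operation rounds to `float` and the expression matches the $\oplus$/$\ominus$/$\otimes$ definition above):

```c
#include <stdio.h>

/* Method (2): ((1 - t) * v0) + (t * v1), with every operation rounded to float
   (assuming FLT_EVAL_METHOD == 0, i.e. no extended-precision intermediates). */
static float lerp(float t, float v0, float v1) {
    return (1.0f - t) * v0 + t * v1;
}

int main(void) {
    /* Endpoint exactness: t = 0 and t = 1 reproduce v0 and v1 exactly. */
    printf("%g %g\n", lerp(0.0f, 3.0f, 7.0f), lerp(1.0f, 3.0f, 7.0f)); /* 3 7 */
    return 0;
}
```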
Unfortunately, while this definition provides the nice property that $lerp(0, v_0, v_1) = v_0$ and $lerp(1, v_0, v_1) = v_1$, it is neither monotonic in $t$ on the interval $[0, 1]$ nor even bounded to $[v_0, v_1]$.
So, I'm asking: can we put a reasonable bound on how far this definition deviates from the correctly rounded result? I only really care about the error in ULPs for $t \in [0, 1]$, but extending a proof to other values of $t$ would also be interesting. You can assume that $t$, $v_0$, and $v_1$ are all finite.
I've found a counterexample showing that $lerp$ is in fact not bounded to $[v_0, v_1]$ for 32-bit floating point:
$$ lerp(\texttt{0x1.000002p-25}, \texttt{0x1.09F76Cp+27}, \texttt{0x1.782D4Ap+27}) = \texttt{0x1.09F76Ap+27} \\ lerp(2.9802326 \times 10^{-8}, 139443040, 197225040) = 139443024 $$
This counterexample is one ULP off: the correctly rounded result is $v_0$ itself, while the computed result is one ULP below $v_0$, and hence also outside $[v_0, v_1]$.
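For anyone who wants to reproduce this, here is a small C program (again my own sketch, assuming `FLT_EVAL_METHOD == 0`) that evaluates the counterexample and compares it against a double-precision reference rounded once back to `float`:

```c
#include <stdio.h>

/* The lerp from the question: ((1 - t) * v0) + (t * v1), all in float. */
static float lerp(float t, float v0, float v1) {
    return (1.0f - t) * v0 + t * v1;
}

int main(void) {
    float t  = 0x1.000002p-25f;  /* ~2.9802326e-8 */
    float v0 = 0x1.09F76Cp+27f;  /* 139443040 */
    float v1 = 0x1.782D4Ap+27f;  /* 197225040 */

    float got = lerp(t, v0, v1);

    /* Reference: evaluate in double (error far below half a float ULP here),
       then round once back to float. */
    float ref = (float)((1.0 - (double)t) * (double)v0 + (double)t * (double)v1);

    printf("computed: %a = %.9g\n", got, got);  /* 0x1.09f76ap+27 = 139443024 */
    printf("correct : %a = %.9g\n", ref, ref);  /* 0x1.09f76cp+27 = 139443040 */
    return 0;
}
```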