The bounded minimization of a given primitive recursive function $f$ can be computed by the following primitive recursive expression:
$ \newcommand{\pr}[2]{\text{pr}^{#1}_{#2}} \newcommand{\gpr}{\text{Pr}} \newcommand{\sig}{\text{sgn}} \text{Mn}[f] = \gpr[g, h] \circ (\pr{1}{1}, \pr{1}{1}) $
$g = \sig \circ f \circ (\pr{1}{1}, c^1_0)$
$h = \text{Cond}[\text{t}_\leqslant \circ (\pr{3}{3}, \pr{3}{2}), \pr{3}{3}, \text{Cond}[\text{add} \circ (\text{t}_= \circ (\pr{3}{3}, \text{suc} \circ \pr{3}{2}), f \circ (\pr{3}{1}, \text{suc} \circ \pr{3}{2})), \text{suc} \circ \pr{3}{2}, \text{suc} \circ \text{suc} \circ \pr{3}{2}]]$
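Here I take $\gpr[g, h]$ to be the usual primitive recursion scheme, so the value is built up in one application of $h$ per recursion step:
$\gpr[g, h](n, 0) = g(n) = \sig(f(n, 0))$
$\gpr[g, h](n, y + 1) = h(n, y, \gpr[g, h](n, y))$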
A call to this function, $\text{Mn}[f](n, k)$, will return at most $k + 1$ and terminate for all inputs, provided $f$ is total recursive.
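To check that I am reading the construction correctly, here is how I understand it operationally: a minimal Python sketch, under the assumption that $\text{t}_\leqslant$ and $\text{t}_=$ return $0$ for "true" and that $\text{Cond}[p, a, b]$ yields $a$ when $p$ evaluates to $0$ and $b$ otherwise (the names `bounded_mn`, `h` and `acc` are my own):

```python
def bounded_mn(f, n, k):
    """Sketch of Pr[g, h](n, k): least z <= k with f(n, z) = 0, else k + 1."""

    def h(y, prev):
        # Outer Cond: if prev <= y, a witness was already found; keep it (pr^3_3).
        if prev <= y:
            return prev
        # Inner Cond: add(t_=(prev, y + 1), f(n, y + 1)) is "true" (= 0) exactly
        # when prev = y + 1 and f(n, y + 1) = 0, i.e. y + 1 is the least witness.
        if prev == y + 1 and f(n, y + 1) == 0:
            return y + 1                      # suc . pr^3_2
        return y + 2                          # suc . suc . pr^3_2: keep searching

    # g(n) = sgn(f(n, 0)): 0 if 0 is already a witness, otherwise candidate 1.
    acc = 0 if f(n, 0) == 0 else 1
    # Pr[g, h] performs exactly k recursion steps, one for each y = 0, ..., k - 1.
    for y in range(k):
        acc = h(y, acc)
    return acc
```

For example, `bounded_mn(lambda n, z: abs(n - z * z), 9, 5)` gives $3$ (the least $z \leqslant 5$ with $z^2 = 9$), while with bound $2$ it gives $3 = 2 + 1$, matching the "at most $k + 1$" behaviour.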
A $\mu$-recursive search would not necessarily terminate if there is no $z \leqslant k$ such that $f(n, z) = 0$. But I fail to see what that $\mu$-recursive expression would look like in contrast to the one above $-$ that is, what would the part of the expression that makes the function fail to terminate look like?
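To make concrete which behaviour I mean (this is only informal Python, not a claim about how the formal $\mu$ operator is written):

```python
def unbounded_mn(f, n):
    """Informal unbounded mu-search: least z with f(n, z) = 0, if any exists."""
    z = 0
    # No bound k: if f(n, z) != 0 for every z, this loop never stops.
    while f(n, z) != 0:
        z += 1
    return z
```

The bounded construction above replaces this open-ended loop with exactly $k$ primitive recursion steps, which is why it always halts; what I cannot see is which part of the corresponding $\mu$-recursive expression plays the role of the unbounded `while`.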