Because of the deferred measurement principle, any circuit where a measurement amplified errors could be translated into an equivalent circuit where unitary operations amplified those errors instead. Unitary operations preserve the distance between quantum states, so they can't amplify errors, and therefore neither can measurements.
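Here's a minimal numpy sketch of the deferred measurement principle at work (the two-qubit measure-then-classically-controlled-X circuit is just an illustrative choice, not anything from the argument above): measuring the control and then applying a classically controlled X yields exactly the same final ensemble as applying a coherent CNOT and deferring the measurement to the end.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.diag([1.0, 0.0])                  # projector onto |0> of qubit 0
P1 = np.diag([0.0, 1.0])                  # projector onto |1> of qubit 0
CNOT = np.kron(P0, I) + np.kron(P1, X)    # control = qubit 0, target = qubit 1

def measure_then_control(rho):
    """Measure qubit 0, then apply X to qubit 1 iff the result was 1."""
    branches = []
    for proj, correction in [(P0, I), (P1, X)]:
        K = np.kron(I, correction) @ np.kron(proj, I)
        branches.append(K @ rho @ K.conj().T)
    return sum(branches)  # ensemble over both measurement outcomes

def control_then_measure(rho):
    """Apply a coherent CNOT, then measure qubit 0 at the very end."""
    rho = CNOT @ rho @ CNOT.conj().T
    return sum(np.kron(proj, I) @ rho @ np.kron(proj, I) for proj in [P0, P1])

# Same final ensemble for any input state.
v = np.random.randn(4) + 1j * np.random.randn(4)
rho = np.outer(v, v.conj()) / np.vdot(v, v)
assert np.allclose(measure_then_control(rho), control_then_measure(rho))
```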
That said, we often prefer digitized (measured) errors to small coherent errors. Repeated coherent errors can grow in seriousness quadratically faster than incoherent errors, because their amplitudes (rather than their probabilities) add up via constructive interference. You can use Pauli twirling to turn any small-angle error into an equivalent error channel where Pauli errors occur with some small probability. In a stabilizer circuit, you can force this to happen by replacing every circuit layer $C$ with a random layer of Pauli gates $P$, then $C$, then a layer $P_2 = C P C^\dagger$ that undoes the Pauli gates. Then fold adjacent single-qubit gates into individual single-qubit gates (the $P_2$ of one layer merges with the $P$ of the next). This forces all noise to become incoherent very quickly. In the context of an error correcting code like the surface code, which doesn't require just-in-time corrections, the random accumulation of tracked errors serves this same purpose.
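To make the twirling step concrete, here's a small numpy sketch (the choice of an $R_z(\epsilon)$ over-rotation and the superoperator helper are my own illustrative choices): averaging the coherent error over conjugation by the four Paulis leaves a plain Pauli channel that applies $Z$ with probability $\sin^2(\epsilon/2)$.

```python
import numpy as np

# Single-qubit Paulis.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def superop(K):
    """Superoperator of rho -> K rho K^dag, acting on column-stacked rho."""
    return np.kron(K.conj(), K)

# A small coherent over-rotation error: Rz(eps).
eps = 0.1
over_rotation = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * Z
coherent = superop(over_rotation)

# Pauli twirl: average P . channel . P over the Pauli group.
twirled = sum(superop(P) @ coherent @ superop(P) for P in [I, X, Y, Z]) / 4

# The result is a Pauli channel: apply Z with probability sin^2(eps / 2).
p = np.sin(eps / 2) ** 2
pauli_channel = (1 - p) * superop(I) + p * superop(Z)
assert np.allclose(twirled, pauli_channel)
print(f"Z error probability after twirling: {p:.6f}")
```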
I guess the irony here is that this question is exactly opposite to the actual worries you'd have in experiments. In experiments, it's the coherent noise that's bad and the incoherent noise that's good. Coherent noise is scary because it can combine with itself to grow faster. And any coherent noise you have, you should probably just be cancelling out anyways, because coherent basically means "consistently predictable". Like, if you're consistently over-rotating by 1 degree, then decrease the amount you rotate by 1 degree to compensate. That consistent over-rotation (coherent noise) will then probably become inconsistent over- and under-rotation from jitter (incoherent noise). Basically all QEC analysis is done assuming the noise has been made incoherent, e.g. by error digitization from the code.
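As a rough numerical illustration of coherent noise combining with itself (the per-gate angle and the gate count here are just numbers I picked): repeating the same small over-rotation adds amplitudes, so the failure probability grows roughly like $n^2$, while the Pauli-twirled version of the same error adds probabilities and grows roughly like $n$.

```python
import numpy as np

eps = 0.01   # assumed per-gate over-rotation angle (radians)
n = 100      # assumed number of repeated gates

# Coherent: the n over-rotations combine into one rotation by n*eps, so e.g.
# starting from |+>, the chance of reading out |-> is sin^2(n*eps/2) ~ (n*eps/2)^2.
coherent_failure = np.sin(n * eps / 2) ** 2

# Incoherent (after twirling): each gate independently applies Z with probability
# sin^2(eps/2); the state is flipped iff an odd number of Z errors occurred.
p = np.sin(eps / 2) ** 2
incoherent_failure = (1 - (1 - 2 * p) ** n) / 2   # ~ n * (eps/2)^2 for small n*p

print(coherent_failure)    # ~0.23
print(incoherent_failure)  # ~0.0025
```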