The main limitation is that the chance of a random hash value having a corresponding signature quickly becomes vanishingly small.
In these code-based schemes, we hash a message to a vector in the ambient vector space of which the error-correcting code is a subspace. We hope that this vector is a light garble of a codeword. If it is, then we can use our trapdoor to recover the nearby codeword and publish this as the signature. If it is not, we rehash and try again. However, as the schemes grow, the proportion of vectors that are light garbles of codewords decreases and the number of rehashes needed grows. In the Wikipedia example, where a code capable of correcting 9 errors is used (the security of such a scheme is very poor), the chance of a vector being a light garble is approximated as $1/9!$. This means that on average we would have to perform several hundred thousand rehashes before obtaining a legitimate signature. If we use more secure parameterisations, perhaps analogous to the Classic McEliece scheme where a code capable of correcting at least 64 errors is used, the expected number of rehashes (on the order of $64!$ under the same heuristic) becomes completely infeasible.
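To make the scaling concrete, here is a rough Python sketch based on the $1/t!$ heuristic above (the function name is just illustrative, and this is an approximation rather than an exact analysis):

```python
from math import factorial

def expected_rehashes(t: int) -> int:
    # Heuristically, a random vector is a light garble of a codeword
    # (i.e. decodable to within t errors) with probability about 1/t!,
    # so the expected number of hash attempts is roughly t!.
    return factorial(t)

# Toy parameter from the Wikipedia example: t = 9
print(f"t = 9:  ~{expected_rehashes(9):,} rehashes expected")    # ~362,880

# Parameter analogous to Classic McEliece: t = 64
print(f"t = 64: ~{expected_rehashes(64):.3e} rehashes expected") # ~1e89
```

Even at $t=9$ we are already at several hundred thousand attempts; at $t=64$ the expected count is around $10^{89}$, which is why this hash-and-sign approach does not scale to secure parameters.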
Note that rejection and rehashing are also required in the ML-DSA standard, but in that case we only expect to need around 3-6 attempts.
Note also that large prime DSA is deprecated for the production of new signatures as of version 5 of FIPS 186.