It is hard to give a comprehensive answer, but the Metamath Proof Explorer (MPE) documentation has some relevant things to say about why it uses a Hilbert-style system, in terms of its strengths and lack of downsides. I will quote it at length so that this answer is more self-contained.
The appendix "Traditional Textbook Axioms of Predicate Calculus" explains:
Both our system and the traditional system are called Hilbert-style systems. Two other approaches, called natural deduction and Gentzen-style systems, are closely related to each other and build the deduction (meta)theorem into their axiom systems.
The page "Weak Deduction Theorem for Classical Logic" explains why they do not (directly) use the standard deduction theorem from natural deduction/Gentzen-style systems:
One of the goals of Metamath is to let you plainly see, with as few underlying concepts as possible, how mathematics can be derived directly from the axioms, and not indirectly according to some hidden rules buried inside a program or understood only by logicians. If we added the Standard Deduction Theorem to the language and proof verifier, that would greatly complicate both and largely defeat Metamath's goal of simplicity. In principle, we could show direct proofs by expanding out the proof steps generated by the algorithm of the Standard Deduction Theorem, but it is not feasible in practice because the number of proof steps quickly becomes huge, even astronomical. Since the algorithm is driven by proof of the deduction, we would have to go through that proof all over again—starting from axioms—in order to obtain the theorem form. In terms of proof length, there would be no savings over just proving the theorem directly instead of first proving the deduction form.
It is notable that the ease of constructing proofs in natural deduction can be recovered essentially completely in a Hilbert-style system like MPE's. Basically, just write the hypotheses and conclusion in "deduction form" where there is an extra "$P\to$" in front of everything (a trick that may have been first pointed out by Mario Carneiro). This allows you to use Hilbert-style translations of the rules of natural deduction, such as the ones listed in MPE's "natded" list of translations. For a lot more commentary on this approach, see MPE's page "Deduction Form and Natural Deduction". Most of that page might be helpful, but I want to highlight a bit from the section "Strengths of the current approach":
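To illustrate the trick with one rule: natural deduction's conjunction introduction, which infers $\psi\wedge\chi$ from $\psi$ and $\chi$, becomes in deduction form an ordinary Hilbert-style theorem (if I recall the natded list correctly, this is the theorem set.mm calls jca):

$$\frac{\vdash \varphi\to\psi \qquad \vdash \varphi\to\chi}{\vdash \varphi\to(\psi\wedge\chi)}$$

The context of open hypotheses that natural deduction tracks implicitly is carried explicitly by the antecedent $\varphi$, so no metatheorem is needed to apply the rule.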
As far as we know there is nothing in the literature like either the weak deduction theorem or Mario Carneiro's natural deduction method (Mario Carneiro's method is presented in "Natural Deductions in the Metamath Proof Language" by Mario Carneiro, 2014). In order to transform a hypothesis into an antecedent, the literature's standard "Deduction Theorem" requires metalogic outside of the notions provided by the axiom system. We instead generally prefer to use Mario Carneiro's natural deduction method, then use the weak deduction theorem in cases where that is difficult to apply, and only then use the full standard deduction theorem as a last resort.