3

What are the pros and cons of natural deduction relative to Hilbert-style systems?

From Wikipedia, I get the impression that natural deduction proofs tend to be shorter and closer to how humans do it. If so, why do some proof-checkers, like Metamath, use Hilbert-style systems?

MWB
  • 573
  • ND is much simpler to use. – Mauro ALLEGRANZA Nov 07 '20 at 08:27
  • 2
    In Hilbert-style systems it is a pain to find proofs, but very easy to check proofs, because there is only one inference rule to consider. This makes them suitable for when you want to algorithmically verify proofs or prove something about a formal theory, e.g. what theorems Peano arithmetic can and cannot prove. – Natalie Clarius Nov 07 '20 at 10:55
  • 1
    Glib answer: proof-checkers aren't humans. More illuminating answer: see lemontree's comment – Bram28 Nov 07 '20 at 14:09
  • @lemontree Are you speaking from experience with some specific proof checkers? – MWB Nov 08 '20 at 06:10
  • @MauroALLEGRANZA ditto. – MWB Nov 08 '20 at 06:11
  • @MaxB No, I've done a bit of experimenting with Coq but didn't get into the details of how it works behind the scenes. – Natalie Clarius Nov 08 '20 at 07:50

2 Answers

4

Indeed, natural deduction (and Fitch-style in particular) is simpler for humans to use. That does not imply that proof assistants use what is simpler for humans; Coq, for instance, is widely known as a "write-only system", meaning that you can write Coq proofs but (typically) cannot read them. In any case, you only need to sit down and examine a proof of a theorem of some complexity (say Zorn's lemma) from the axioms alone (say ZFC), in the various formal-system styles, to see that pure Hilbert-style proofs are much harder to read and understand than Fitch-style proofs, tree-style proofs (as in Gentzen's natural deduction system), or sequent-style proofs (as in Gentzen's LK and similar). The advantage of Fitch-style over tree-style or sequent-style is that it closely resembles programming style and is easily represented as indented text, whereas representing tree-style or sequent-style proofs in text format tends to turn them into Fitch-style anyway.
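To make the readability gap concrete, here is the standard textbook warm-up (my own illustration, not something quoted from the thread): proving the triviality $\varphi\to\varphi$. In Fitch-style natural deduction this takes two lines (assume $\varphi$, then discharge the assumption by $\to$-introduction). In a Hilbert-style system with the usual schemas A1: $\varphi\to(\psi\to\varphi)$ and A2: $(\varphi\to(\psi\to\chi))\to((\varphi\to\psi)\to(\varphi\to\chi))$, the shortest proof takes five steps:

```latex
\begin{align*}
1.\quad & \varphi\to((\varphi\to\varphi)\to\varphi)
  && \text{A1}\\
2.\quad & \bigl(\varphi\to((\varphi\to\varphi)\to\varphi)\bigr)\to
  \bigl((\varphi\to(\varphi\to\varphi))\to(\varphi\to\varphi)\bigr)
  && \text{A2}\\
3.\quad & (\varphi\to(\varphi\to\varphi))\to(\varphi\to\varphi)
  && \text{MP 1, 2}\\
4.\quad & \varphi\to(\varphi\to\varphi)
  && \text{A1}\\
5.\quad & \varphi\to\varphi
  && \text{MP 4, 3}
\end{align*}
```

Every step here is pure bookkeeping with no intuitive content, and that overhead compounds quickly in proofs of real theorems.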

As you should already know, the advantage of Hilbert-style is that there is only one inference rule (modus ponens), so it is (slightly) easier to write a proof checker for a Hilbert-style system than for a Fitch-style one. Theorems about FOL are also somewhat easier to prove when proofs are defined as Hilbert-style proofs. It is nevertheless easy to translate any Fitch-style system into a Hilbert-style system, so in practice this is not a big deal.
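To see why the single inference rule matters, here is a toy Hilbert-style checker in Python. This is my own illustration, unrelated to how Metamath or any real proof assistant is implemented; the formula encoding and the names `imp`, `match`, `check`, `AXIOMS` are all invented for this sketch. A proof is just a list of formulas, and checking it means verifying that each line is an instance of one of the three Łukasiewicz axiom schemas, a hypothesis, or follows from two earlier lines by modus ponens:

```python
# Minimal Hilbert-style proof checker for propositional logic (toy sketch).
# Formulas: strings are atoms; ('->', a, b) is implication; ('not', a) is negation.

IMP, NOT = '->', 'not'

def imp(a, b): return (IMP, a, b)
def neg(a): return (NOT, a)

# The three Lukasiewicz axiom schemas, written with metavariables 'P', 'Q', 'R'.
AXIOMS = [
    imp('P', imp('Q', 'P')),                                  # A1
    imp(imp('P', imp('Q', 'R')),
        imp(imp('P', 'Q'), imp('P', 'R'))),                   # A2
    imp(imp(neg('P'), neg('Q')), imp('Q', 'P')),              # A3
]

def match(schema, formula, subst):
    """Try to instantiate the schema's metavariables to produce formula."""
    if isinstance(schema, str):                  # metavariable
        if schema in subst:
            return subst[schema] == formula
        subst[schema] = formula
        return True
    if isinstance(formula, str) or schema[0] != formula[0]:
        return False
    return all(match(s, f, subst) for s, f in zip(schema[1:], formula[1:]))

def is_axiom(formula):
    return any(match(ax, formula, {}) for ax in AXIOMS)

def check(proof, hypotheses=()):
    """Each line must be an axiom instance, a hypothesis, or follow from
    two earlier lines by modus ponens -- the system's only inference rule."""
    for i, step in enumerate(proof):
        ok = is_axiom(step) or step in hypotheses or any(
            a[0] == IMP and a[2] == step and b == a[1]
            for a in proof[:i] for b in proof[:i])
        if not ok:
            return False
    return True

# The classic five-step Hilbert proof of p -> p:
p = 'p'
pp = imp(p, p)
proof_pp = [
    imp(p, imp(pp, p)),                                       # A1
    imp(imp(p, imp(pp, p)), imp(imp(p, pp), pp)),             # A2
    imp(imp(p, pp), pp),                                      # MP 1, 2
    imp(p, pp),                                               # A1
    pp,                                                       # MP 4, 3
]
assert check(proof_pp)
```

The entire checker fits in a few dozen lines precisely because there is only one rule to handle; a Fitch-style checker additionally has to track open subproofs, assumption scopes, and a dozen introduction/elimination rules.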

user21820
  • 60,745
  • I'm not so sure Fitch style is indeed simpler for humans than tree-style, at least when it comes to seeing the structure of an argumentation or insights about the proof system itself such as normalization; I've always found the tree presentation much clearer for that. But the linear Fitch presentation is certainly closer to how people typically write proofs informally or in a proof checker, and so perhaps easier to get acquainted with for those who already have practice in proof writing in other areas of mathematics. – Natalie Clarius Nov 07 '20 at 15:06
  • @lemontree: Well, what I wrote in my post was not intended to be contrary to your comment. I said that Fitch-style and tree-style and sequent-style are easier to understand than Hilbert-style, and the only advantage I mentioned for Fitch-style over tree-style is that it is like programming-style and can be easily represented in text format. =) – user21820 Nov 07 '20 at 16:14
  • Thanks. Just to clarify: are you talking about some specific proof checkers you worked with? – MWB Nov 07 '20 at 17:51
  • @MaxB: I have taken a cursory look at a variety of proof assistants, even though I was too lazy to dirty my hands on any of them because I did not want to have to spend time figuring out how to set it all up... But I have done enough formal proofs to know essentially what must arise in differing proof styles. – user21820 Nov 07 '20 at 18:06
2

It is hard to give a comprehensive answer, but the Metamath Proof Explorer (MPE) documentation explains why its authors chose a Hilbert-style system, in terms of its strengths and its lack of serious downsides. I will quote it at length so that this answer is more self-contained.

The appendix on Traditional Textbook Axioms of Predicate Calculus explains:

Both our system and the traditional system are called Hilbert-style systems. Two other approaches, called natural deduction and Gentzen-style systems, are closely related to each other and embed the deduction (meta)theorem into its axiom system.

The page on the "Weak Deduction Theorem for Classical Logic" explains why they do not (directly) use the standard deduction theorem from natural-deduction/Gentzen-style systems:

One of the goals of Metamath is to let you plainly see, with as few underlying concepts as possible, how mathematics can be derived directly from the axioms, and not indirectly according to some hidden rules buried inside a program or understood only by logicians. If we added the Standard Deduction Theorem to the language and proof verifier, that would greatly complicate both and largely defeat Metamath's goal of simplicity. In principle, we could show direct proofs by expanding out the proof steps generated by the algorithm of the Standard Deduction Theorem, but it is not feasible in practice because the number of proof steps quickly becomes huge, even astronomical. Since the algorithm is driven by proof of the deduction, we would have to go through that proof all over again—starting from axioms—in order to obtain the theorem form. In terms of proof length, there would be no savings over just proving the theorem directly instead of first proving the deduction form.
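The blow-up the quote describes can be made concrete. The sketch below is my own toy encoding in Python, not Metamath's representation: `imp` and `deduce` are invented names, and the A1/A2 labels refer to the usual Hilbert axiom schemas $\varphi\to(\psi\to\varphi)$ and $(\varphi\to(\psi\to\chi))\to((\varphi\to\psi)\to(\varphi\to\chi))$. It implements the textbook deduction-theorem algorithm, which rewrites each original proof line $L$ into a proof of $\text{hyp}\to L$, at the cost of up to five new lines per original line:

```python
# Sketch of the standard deduction-theorem expansion for propositional logic,
# illustrating the step blow-up the Metamath page describes.
# Formulas: strings are atoms, ('->', a, b) is implication. Each input proof
# step is a pair (formula, tag), tag in {'ax', 'hyp', 'mp'}.

def imp(a, b): return ('->', a, b)

def deduce(proof, hyp):
    """Rewrite a proof from hypothesis `hyp` into a proof of hyp -> (each line),
    discharging the hypothesis line by line."""
    out = []
    for i, (step, tag) in enumerate(proof):
        if tag == 'hyp':             # the line is hyp itself: prove hyp -> hyp
            h, hh = hyp, imp(hyp, hyp)
            out += [imp(h, imp(hh, h)),                        # A1
                    imp(imp(h, imp(hh, h)),
                        imp(imp(h, hh), hh)),                  # A2
                    imp(imp(h, hh), hh),                       # MP
                    imp(h, hh),                                # A1
                    hh]                                        # MP
        elif tag == 'ax':            # axiom: keep it, add A1, detach
            out += [step, imp(step, imp(hyp, step)), imp(hyp, step)]
        else:                        # MP from some earlier a and a -> step
            a = next(f for f, _ in proof[:i]
                     if any(g == ('->', f, step) for g, _ in proof[:i]))
            out += [imp(imp(hyp, imp(a, step)),
                        imp(imp(hyp, a), imp(hyp, step))),     # A2
                    imp(imp(hyp, a), imp(hyp, step)),          # MP
                    imp(hyp, step)]                            # MP
    return out

# Discharge the hypothesis q from a 3-step proof of p -> q:
p, q = 'p', 'q'
proof = [(q, 'hyp'),
         (imp(q, imp(p, q)), 'ax'),  # A1 instance
         (imp(p, q), 'mp')]
expanded = deduce(proof, q)
print(len(proof), '->', len(expanded))  # 3 -> 11
```

Even this trivial three-step proof expands to eleven steps, and the growth compounds when the transformation is applied repeatedly to discharge nested hypotheses, which is the "astronomical" cost the quote refers to.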

It is notable that the ease of constructing proofs in natural deduction can be recovered essentially completely in a Hilbert-style system like MPE's. Basically, just write the hypotheses and conclusion in "deduction form" where there is an extra "$P\to$" in front of everything (a trick that may have been first pointed out by Mario Carneiro). This allows you to use Hilbert-style translations of the rules of natural deduction, such as the ones listed in MPE's "natded" list of translations. For a lot more commentary on this approach, see MPE's page "Deduction Form and Natural Deduction". Most of that page might be helpful, but I want to highlight a bit from the section "Strengths of the current approach":

As far as we know there is nothing in the literature like either the weak deduction theorem or Mario Carneiro's natural deduction method (Mario Carneiro's method is presented in "Natural Deductions in the Metamath Proof Language" by Mario Carneiro, 2014). In order to transform a hypothesis into an antecedent, the literature's standard "Deduction Theorem" requires metalogic outside of the notions provided by the axiom system. We instead generally prefer to use Mario Carneiro's natural deduction method, then use the weak deduction theorem in cases where that is difficult to apply, and only then use the full standard deduction theorem as a last resort.
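To illustrate the deduction-form trick from the paragraph above (my own rendering; consult the natded page for MPE's exact statements): take $\wedge$-introduction, which in natural deduction infers $\psi\wedge\chi$ from $\psi$ and $\chi$. Prefixing every formula with the antecedent $\varphi$, standing for the conjunction of whatever hypotheses are currently in force, yields a derived Hilbert-style rule of the same shape:

```latex
\frac{\psi \qquad \chi}{\psi\wedge\chi}\;(\wedge\mathrm{I})
\qquad\rightsquigarrow\qquad
\frac{\varphi\to\psi \qquad \varphi\to\chi}{\varphi\to(\psi\wedge\chi)}
```

If I recall the natded translation table correctly, the rule on the right is MPE's theorem `jca`. The point is that no metatheoretic deduction theorem is needed: the translated rule is an ordinary provable statement inside the Hilbert system itself.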

Mark S.
  • 25,893
  • 1
    Thanks for the answer. The ND page says: The deduction theorem (including the weak deduction theorem) isn't used routinely in the Metamath Proof Explorer (set.mm) today, since the alternatives are usually better. If the deduction theorem is Metamath's way of doing ND, doesn't the quoted statement contradict people claiming ND is much easier to use? – MWB Nov 07 '20 at 22:57
  • 2
    The deduction theorem is not MPE's way of doing ND. The (strong) deduction theorem is a part of standard ND that MPE is intentionally trying to avoid. But they can get most (all?) of the ease-of-use of ND from the translations of the ND rules on the "natded" page. – Mark S. Nov 07 '20 at 23:35