
I read the following excerpt in Proofs and Types by Girard et al., which discusses the calculus of natural deduction:

Now a sentence at a leaf (of the deduction tree) can be dead, when it no longer plays an active part in the proof. Dead sentences are obtained by killing live ones. The typical example is the $\implies$-introduction rule:

$$ \dfrac{\stackrel{[A]}{\stackrel{\vdots}{B}}}{A \implies B} $$

It must be understood thus: starting from a deduction of $B$, in which we choose a certain number of occurrences of $A$ as hypotheses (the number is arbitrary: 0, 1, 250, $\ldots$), we form a new deduction of which the conclusion is $A \implies B$, but in which all these occurrences of $A$ have been discharged, i.e. killed. There may be other occurrences of $A$ which we have chosen not to discharge.

This rule illustrates very well the illusion of the tree-like notation: it is of critical importance to know when a hypothesis was discharged, and so it is essential to record this. But if we do this in the example above, this means we have to link the crossed $A$ with the line of the $\implies$I rule; but it is no longer a genuine tree we are considering.

May I ask you for an explanation and example of this situation?

user1868607

2 Answers


Every assumption that is introduced between brackets has to be discharged at some point for the proof to be complete. Otherwise, we could prove that $A$ is true for any given $A$ as follows.

[A]¹
───
 A

This proof is incorrect: we have not discharged the initial assumption. The following proof of $A\to A$ would be correct, as it discharges the assumption (and the implication introduction rule is necessary to do so!).

 [A]¹
 ───
  A
───── (discharging ¹)
A → A
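Under the Curry–Howard correspondence (more on this below), this correct proof of $A \to A$ is the identity function: discharging the assumption is exactly binding a variable. A minimal Python sketch (Python is untyped, so the type is only a comment):

```python
# Proof of A -> A as a program: discharging the assumption [A]¹
# corresponds to binding the variable `a` with a lambda.
identity = lambda a: a  # inhabits A -> A for any A

print(identity(42))  # 42
```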

Now, let's compare two derivations, of $A \to A \times A$ and of $A \to A \to A \times A$. In the first, we will discharge the two occurrences of $A$ at the same time (in the same deduction step); in the second, we will discharge them in two separate steps.

First proof

[A]¹   [A]²
───────────
   A × A
───────────── (discharging ¹ and ²)
  A → A × A

Second proof

[A]¹   [A]²
───────────
   A × A
───────────── (discharging ¹, note that ² is still alive!)
  A → A × A
──────────────── (discharging ²)
 A → A → A × A

Maybe looking at this from the perspective of the simply-typed lambda calculus is more illuminating. Instead of tracking superscripts, variable names indicate which assumptions we are choosing to discharge at each point. Note the difference between the two proofs, and how the introduction rule for implication corresponds to a lambda abstraction, while the introduction rule for conjunction corresponds to forming a pair.

First proof

   a : A   a : A        
   ─────────────     
   (a,a) : A × A        
 ────────────────────
 λa.(a,a) : A → A × A

Second proof

     a : A   b : A             
     ─────────────          
     (a,b) : A × A             
   ────────────────────     
   λb.(a,b) : A → A × A        
 ───────────────────────────
 λa.λb.(a,b) : A → A → A × A 

(I am using a lambda interpreter I wrote to generate these derivation trees. It may be useful for you if you are interested in the connection between propositional intuitionistic logic and lambda calculus.)
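To make the correspondence concrete, the same two terms can be sketched in Python, with tuples standing in for the product type; the type comments are the informal Curry–Howard reading, not anything Python checks:

```python
# First proof: both occurrences of A discharged at once -> one binder.
first = lambda a: (a, a)             # A -> A × A

# Second proof: the occurrences discharged one at a time -> nested binders.
second = lambda a: lambda b: (a, b)  # A -> A -> A × A

print(first(1))      # (1, 1)
print(second(1)(2))  # (1, 2)
```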

Mario

I recommend Mario Román's answer, particularly the connection to the lambda calculus, but I'll add a third perspective.

I dislike the natural deduction notation, but I do like the natural deduction rules of inference. Instead, I prefer a notation more like the one used in the sequent calculus, and nothing stops us from using that notation with the rules of natural deduction.

With a sequent-like notation, instead of using brackets and ellipses to indicate hypothetical reasoning, we simply keep track of the hypotheses at each step. For example, the implication introduction rule looks like: $$\dfrac{A\vdash B}{\vdash A\implies B}$$ where the propositions to the left of the turnstile ($\vdash$) are the hypotheses for the proposition to the right. More likely, we'd also indicate that there can be other hypotheses by including a (meta)variable for a list of additional propositions, traditionally notated $\Gamma$. That is, we'd write implication introduction as: $$\dfrac{\Gamma,A\vdash B}{\Gamma\vdash A\implies B}$$ In my opinion, it is much easier to understand (and clearly express) the rules of logic using this sequent-like notation, particularly once quantifiers are involved. It does produce a lot of duplication of the hypotheses that the natural deduction notation avoids, but in practice you aren't going to be making these derivations that often; instead you'll be reading them to better understand logic. (When you really do want to make these kinds of derivations, there's tooling nowadays that can easily handle the duplication.)

Mario Román's derivations using this notation look like: $$\dfrac{\dfrac{}{A\vdash A}\qquad\dfrac{}{A\vdash A}}{\dfrac{A,A\vdash A\land A}{\dfrac{A\vdash A\land A}{\vdash A\implies A\land A}}}\qquad\text{ and }\qquad\dfrac{\dfrac{}{A\vdash A}\qquad\dfrac{}{A\vdash A}}{\dfrac{A,A\vdash A\land A}{\dfrac{A\vdash A\implies A\land A}{\vdash A\implies (A\implies A\land A)}}}$$ where the extra step on the left is a use of contraction. Type theoretically, this looks like: $$\dfrac{\dfrac{}{x:A\vdash x:A}\qquad\dfrac{}{x:A\vdash x:A}}{\dfrac{x:A,x:A\vdash (x,x):A\times A}{\dfrac{x:A\vdash (x,x):A\times A}{\vdash \lambda x:A.(x,x):A\to A\times A}}}\ \text{and}\ \dfrac{\dfrac{}{x:A\vdash x:A}\qquad\dfrac{}{y:A\vdash y:A}}{\dfrac{x:A,y:A\vdash (x,y):A\times A}{\dfrac{x:A\vdash \lambda y:A.(x,y):A\to A\times A}{\vdash \lambda x:A.\lambda y:A.(x,y):A\to(A\to A\times A)}}}$$
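This sequent-style presentation translates almost directly into code: a type checker just carries $\Gamma$ as an explicit environment. Below is a minimal sketch in Python; the names (`Var`, `Lam`, `Pair`, `Arrow`, `Prod`, `infer`) are my own invention for illustration, not a standard API:

```python
from dataclasses import dataclass

# Terms of the lambda calculus fragment used above.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    name: str
    ty: object   # type annotation on the bound variable
    body: object

@dataclass
class Pair:
    left: object
    right: object

# Types: A -> B and A × B (base types are plain strings like "A").
@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

@dataclass(frozen=True)
class Prod:
    left: object
    right: object

def infer(gamma, term):
    """Infer the type of `term` in context `gamma` (a dict name -> type),
    i.e. decide gamma |- term : ?"""
    if isinstance(term, Var):
        # Axiom: Gamma, x:A |- x : A
        return gamma[term.name]
    if isinstance(term, Lam):
        # Implication introduction: extend Gamma with x:A, then
        # "discharge" x by leaving the extended context behind.
        body_ty = infer({**gamma, term.name: term.ty}, term.body)
        return Arrow(term.ty, body_ty)
    if isinstance(term, Pair):
        # Conjunction (product) introduction.
        return Prod(infer(gamma, term.left), infer(gamma, term.right))
    raise TypeError(f"unknown term: {term!r}")

# The two terms from the other answer, checked in the empty context:
first = Lam("a", "A", Pair(Var("a"), Var("a")))
second = Lam("a", "A", Lam("b", "A", Pair(Var("a"), Var("b"))))
print(infer({}, first))   # Arrow(dom='A', cod=Prod(left='A', right='A'))
print(infer({}, second))
```

Each branch of `infer` is one rule: the `Var` case is the axiom, the `Lam` case is implication introduction, and the `Pair` case is conjunction introduction. Because the context is a dictionary, using the same variable twice (as in `first`) silently plays the role of the contraction step in the derivations above.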

Derek Elkins left SE