Use Curry-Howard. Instead of explaining what it is, I'll just apply it, showing and explaining how it works as I go.
Here, and in what follows, I will use $⊃$ for the conditional operator instead of $→$.
Let $W$ be a proof of $(a ⊃ a ⊃ b) ⊃ a ⊃ b$. We will write this in the following way:
$$W: (a ⊃ a ⊃ b) ⊃ a ⊃ b,$$
and do so, similarly, for other proofs. We adopt the convention that $⊃$ brackets to the right, e.g.
$$c ⊃ d ⊃ e = c ⊃ (d ⊃ e).$$
Given proofs $g: c ⊃ d$ and $h: c$, we will write $gh: d$ as the proof obtained by modus ponens. Here, we adopt the convention that bracketing occurs to the left, e.g.
$$ghk = (gh)k.$$
Thus, given $f: a ⊃ a ⊃ b$, we have $Wf: a ⊃ b$. Given $x: a$, then we have $Wfx: b$. But $fx: a ⊃ b$ and $fxx: b$. Therefore, we equate $Wfx = fxx$.
One of the Hilbert axioms is $K: a ⊃ b ⊃ a$. Suppose $x: a$ and $y: b$. Then $Kx: b ⊃ a$ and $Kxy: a$. Similarly, we set the two proofs of $a$ equal: $Kxy = x$.
Another of the Hilbert axioms is $S: (a ⊃ b ⊃ c) ⊃ (a ⊃ b) ⊃ a ⊃ c$. Letting $f: a ⊃ b ⊃ c$, we have $Sf: (a ⊃ b) ⊃ a ⊃ c$. Letting $g: a ⊃ b$, we have $Sfg: a ⊃ c$. Finally, letting $x: a$, we have $Sfgx: c$. But we also have $fx: b ⊃ c$ and $gx: b$. Therefore, $fx(gx): c$. So, we equate the two proofs: $Sfgx = fx(gx)$.
As an example, if $I = SKK$, then $Ix = SKKx = Kx(Kx) = x$. Thus, if $x: a$, then $Ix: a$ and, working backwards, we have $I: a ⊃ a$. The Hilbert "axiom" $I$ is therefore provable as $SKK$, which consists of step 1 = $S$ (an instance of axiom $S$), step 2 = $K$ (an instance of axiom $K$), step 3 = $SK$ (modus ponens applied to steps 1 and 2), step 4 = $K$ (another instance of axiom $K$) and step 5 = $SKK$ (modus ponens applied to steps 3 and 4).
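As a sanity check, the correspondence can be typed out in a language with type inference. A minimal sketch in Haskell (the lowercase names are mine; `->` plays the role of $⊃$): each combinator's type is exactly the proposition it proves, and `s k k` is accepted at type `a -> a` with no further annotation.

```haskell
-- Axiom K proves a ⊃ b ⊃ a.
k :: a -> b -> a
k x _ = x

-- Axiom S proves (a ⊃ b ⊃ c) ⊃ (a ⊃ b) ⊃ a ⊃ c.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- I = SKK: a proof of a ⊃ a, by the 5-step derivation above.
i :: a -> a
i = s k k
```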
With this in place, we may write
$$Wxy = xyy = xy(Iy) = SxIy = Sx(KIx)y = SS(KI)xy,$$
define $W = SS(KI)$ and conclude that this $W$ is a proof of the proposition. Laid out in full:
$$\begin{array}{ll}
S:& (e ⊃ d ⊃ f) ⊃ (e ⊃ d) ⊃ e ⊃ f\\
S:& (a ⊃ c ⊃ b) ⊃ (a ⊃ c) ⊃ a ⊃ b\\
SS:& (e ⊃ d) ⊃ e ⊃ f\\
K:& d ⊃ e ⊃ d\\
I:& a ⊃ a\\
KI:& e ⊃ d\\
SS(KI):& e ⊃ f
\end{array}$$
where
$$c = a,\quad d = a ⊃ a,\quad e = a ⊃ c ⊃ b = a ⊃ a ⊃ b,\quad f = a ⊃ b$$
ensure that the modus ponens steps at $SS$, $KI$ and $SS(KI)$ match up correctly.
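In the same Haskell sketch (with `s`, `k`, `i` as before), `s s (k i)` is inferred at exactly the type of the proposition for $W$, and behaves as $Wfx = fxx$:

```haskell
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

i :: a -> a
i = s k k

-- W = SS(KI) proves (a ⊃ a ⊃ b) ⊃ a ⊃ b; w f x reduces to f x x.
w :: (a -> a -> b) -> a -> b
w = s s (k i)
```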
To prove
$$B: (b ⊃ c) ⊃ (a ⊃ b) ⊃ a ⊃ c,$$
we let $f: b ⊃ c$, $g: a ⊃ b$ and $x: a$. Then, we have $gx: b$ and $(f∘g)x = f(gx): c$, with $f ∘ g: a ⊃ c$. So, we equate $Bfg = f ∘ g$ and $Bfgx = f(gx)$.
Working backwards, we have:
$$f(gx) = Kfx(gx) = S(Kf)gx,$$
so we can also write $Bfg = f ∘ g = S(Kf)g$ and $Bf = S(Kf)$.
Working backwards, further, we have
$$S(Kf) = KSf(Kf) = S(KS)Kf.$$
Therefore, we may set $B = S(KS)K = S ∘ K$, and this serves as the desired proof. In line-by-line form, it's 7 lines: 2 $S$ axioms, 2 $K$ axioms and 3 modus ponens.
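Continuing the Haskell sketch, `s (k s) k` is inferred at the type of $B$ and behaves as composition:

```haskell
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- B = S(KS)K proves (b ⊃ c) ⊃ (a ⊃ b) ⊃ a ⊃ c; b f g behaves as f . g.
b :: (b -> c) -> (a -> b) -> a -> c
b = s (k s) k
```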
Now, this time we prove
$$C: (a ⊃ b ⊃ c) ⊃ b ⊃ a ⊃ c.$$
Start with $f: a ⊃ b ⊃ c$, $x: b$ and $y: a$. Then $fy: b ⊃ c$ and $fyx: c$, and we set $Cfxy = fyx$. Working backwards, we get
$$Cfxy = fyx = fy(Kxy) = Sf(Kx)y,$$
so we may set $Cfx = Sf(Kx)$. Compare this to $Bfx = S(Kf)x$, and also to $S(Kf)(Kg)$. In the forwards direction, we have
$$S(Kf)(Kg)x = Kfx(Kgx) = fg = K(fg)x,$$
so we could actually add in an optimization rule, here, too: $S(Kf)(Kg) = K(fg)$.
Continuing on, we have
$$Sf(Kx) = B(Sf)Kx,$$
so we could set $Cf = B(Sf)K = Sf ∘ K$. Finally, we have
$$B(Sf)K = BBSfK = BBSf(KKf) = S(BBS)(KK)f.$$
So, we could adopt the definition $C = S(BBS)(KK)$ and treat this as the desired proof.
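In the Haskell sketch, `s (b b s) (k k)` (with `b = s (k s) k` as above) is inferred at the type of $C$ and behaves as argument swapping, i.e. `flip`:

```haskell
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

b :: (b -> c) -> (a -> b) -> a -> c
b = s (k s) k

-- C = S(BBS)(KK) proves (a ⊃ b ⊃ c) ⊃ b ⊃ a ⊃ c; c f x y reduces to f y x.
c :: (a -> b -> c) -> b -> a -> c
c = s (b b s) (k k)
```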
If $φ(x): b$ is a proof that involves the hypothetical $x: a$, then we may "discharge" the hypothetical to obtain a proof $λx·φ(x): a ⊃ b$, where:
$$\begin{array}{ll}
λx·x &= I,\\
λx·u &= Ku,\\
λx·uv &= S(λx·u)(λx·v).
\end{array}$$
where $u$ is independent of $x$ in the second clause. Matching the last two clauses consistently, in the case where $u$ and $v$ are both independent of $x$, requires the above-mentioned optimization rule $S(Ku)(Kv) = K(uv)$. So, in effect, the second clause takes precedence over the third.
Further optimizations can be done. In particular, if $u: a ⊃ b$, then $S(Ku)I: a ⊃ b$ and $S(Ku)Ix = Kux(Ix) = ux$. So, we treat $u = S(Ku)I$. This corresponds to adding the clause
$$λx·ux = u,$$
if $u$ is independent of $x$.
Optimizations for $B$, $C$ and $W$ can also be added:
$$λx·uv = Bu(λx·v)$$
if $u$ is independent of $x$,
$$λx·uv = C(λx·u)v$$
if $v$ is independent of $x$,
$$λx·ux = W(λx·u).$$
Note that applying the $W$ clause to $λx·ux$, when $u$ is independent of $x$, gives $W(Ku)$, while the optimization gives $λx·ux = u$; consistency then requires $W(Ku) = u$. So, in effect: $W ∘ K = I$.
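These clauses can be run directly. Below is a minimal sketch in Haskell (the `Term` type and the names `free`, `abstract`, `reduce` are my own encoding; only the three base clauses plus the $λx·ux = u$ optimization are implemented, the $B$/$C$/$W$ variants would slot in similarly, and `reduce` is a naive normalizer with no termination guarantee on arbitrary terms):

```haskell
-- Untyped combinator terms with named variables.
data Term = V String | S | K | I | Term :@ Term
  deriving (Eq, Show)
infixl 9 :@

-- Does the term mention the variable x?
free :: String -> Term -> Bool
free x (V y)    = x == y
free x (u :@ v) = free x u || free x v
free _ _        = False

-- λx·t, clause by clause: λx·x = I; λx·ux = u (the optimization);
-- λx·u = Ku when u is independent of x; λx·uv = S(λx·u)(λx·v).
abstract :: String -> Term -> Term
abstract x (V y) | x == y         = I
abstract x (u :@ V y)
  | x == y && not (free x u)      = u
abstract x t
  | not (free x t)                = K :@ t
abstract x (u :@ v)               = S :@ abstract x u :@ abstract x v
abstract _ t                      = t   -- unreachable: V x is caught above

-- Naive normalization by the S, K, I reduction rules.
reduce :: Term -> Term
reduce (I :@ a)           = reduce a
reduce (K :@ a :@ _)      = reduce a
reduce (S :@ f :@ g :@ a) = reduce (f :@ a :@ (g :@ a))
reduce (u :@ v) =
  let u' = reduce u
      v' = reduce v
  in if u' == u && v' == v then u' :@ v' else reduce (u' :@ v')
reduce t = t
```

For example, `abstract "x" (V "f" :@ V "x" :@ V "x")` produces $S f I$, which, applied to an argument, reduces back to $f a a$, as the $W$ rule demands.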
Conjunctions
Now, let's take a look at a few examples involving conjunctions.
Start with the axioms
$$A: a ⊃ b ⊃ a∧b,\quad π_0: a∧b ⊃ a,\quad π_1: a∧b ⊃ b.$$
First, if $x: a$ and $y: b$, then we define $(x,y) = Axy: a∧b$. Then, noting that $π_0(x,y): a$ and $π_1(x,y): b$, we postulate the equations
$$π_0(x,y) = x,\quad π_1(x,y) = y.$$
Second, if $z: a ∧ b$, then $π_0 z: a$ and $π_1 z: b$, therefore $(π_0 z, π_1 z): a ∧ b$. So, we also postulate that
$$(π_0z, π_1 z) = z\quad (z: a ∧ b).$$
We can then generalize $λ$ to include ordered pairs by setting
$$λ(x,y)·φ(x,y,(x,y)) = λz·φ(π_0z,π_1z,z),$$
where the optimization $(π_0z,π_1z) = z$ is built into the rule, by separating out all occurrences of $(x,y)$ in $φ$ into the third argument and replacing them by $z$.
Note, in particular, that
$$λ(x,y)·x = π_0,\quad λ(x,y)·y = π_1.$$
To prove $a ∧ b ⊃ b ∧ a$, let $x: a$, $y: b$, then $(y,x): b ∧ a$, and $(x,y): a ∧ b$. Therefore,
$$λ(x,y)·(y,x) = λz·(π_1z,π_0z) = λz·A(π_1z)(π_0z) = S(BAπ_1)π_0.$$
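The swap proof transcribes into the Haskell sketch, with `(,)`, `fst`, `snd` playing the roles of $A$, $π_0$, $π_1$ (here `b` and `s` are given by their behavior rather than their $SK$ definitions, and the names are mine):

```haskell
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

b :: (b -> c) -> (a -> b) -> a -> c
b f g = f . g

-- A under Curry-Howard: ∧ becomes the product type.
pairA :: a -> b -> (a, b)
pairA = (,)

-- S(BAπ₁)π₀ : its inferred type is the proposition a∧b ⊃ b∧a.
swapP :: (a, b) -> (b, a)
swapP = s (b pairA snd) fst
```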
To prove
$$(c ⊃ a) ⊃ (c ⊃ b) ⊃ c ⊃ a∧b,$$
we set $f: c ⊃ a$, $g: c ⊃ b$, $x: c$. Then we have $fx: a$, $gx: b$ and $(fx,gx): a ∧ b$. Thus, we can define
$$⟨f,g⟩ = λx·(fx,gx) = λx·A(fx)(gx) = S(BAf)g: c ⊃ a ∧ b.$$
Continuing on, we have
$$λf·λg·⟨f,g⟩ = λf·λg·S(BAf)g = BS(BA): (c ⊃ a) ⊃ (c ⊃ b) ⊃ c ⊃ a∧b.$$
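In the Haskell sketch, $BS(BA)$ transcribes to `b s (b pairA)` and builds $⟨f,g⟩$, pairing the results of two functions on a shared argument:

```haskell
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

b :: (b -> c) -> (a -> b) -> a -> c
b f g = f . g

pairA :: a -> b -> (a, b)
pairA = (,)

-- BS(BA) proves (c ⊃ a) ⊃ (c ⊃ b) ⊃ c ⊃ a∧b; fork f g x = (f x, g x).
fork :: (c -> a) -> (c -> b) -> c -> (a, b)
fork = b s (b pairA)
```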
Other proofs of note may be similarly laid out:
$$
⋀ = λf·λx·λy·f(x,y) = C(BBB)A: (c∧a ⊃ b) ⊃ c ⊃ a ⊃ b,\\
⋁ = λg·λ(x,y)·gxy = C(BS(CBπ_0))π_1: (c ⊃ a ⊃ b) ⊃ c∧a ⊃ b,
$$
with the corresponding rules $⋀fxy = f(x,y)$ and $⋁g(x,y) = gxy$.
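Under Curry-Howard these are just `curry` and `uncurry`; the combinator expressions transcribe to the Haskell sketch as follows (with `b`, `c`, `s` given by their behavior, and `pairA`, `p0`, `p1` for $A$, $π_0$, $π_1$):

```haskell
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

b :: (y -> z) -> (x -> y) -> x -> z
b f g = f . g

c :: (x -> y -> z) -> y -> x -> z
c f x y = f y x

pairA :: a -> b -> (a, b)
pairA = (,)

p0 :: (a, b) -> a
p0 = fst

p1 :: (a, b) -> b
p1 = snd

-- ⋀ = C(BBB)A, i.e. curry; ⋁ = C(BS(CBπ₀))π₁, i.e. uncurry.
curryA :: ((c, a) -> b) -> c -> a -> b
curryA = c (b b b) pairA

uncurryO :: (c -> a -> b) -> (c, a) -> b
uncurryO = c (b s (c b p0)) p1
```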
In particular
$$⋁(BS(BA)): (c ⊃ a)∧(c ⊃ b) ⊃ c ⊃ a∧b.$$
For the converse, we apply $B$:
$$Bπ_0: (c ⊃ a∧b) ⊃ c ⊃ a,\quad Bπ_1: (c ⊃ a∧b) ⊃ c ⊃ b,$$
therefore
$$⟨Bπ_0,Bπ_1⟩: (c ⊃ a∧b) ⊃ (c ⊃ a)∧(c ⊃ b),$$
which establishes the equivalence of $(c ⊃ a)∧(c ⊃ b)$ and $c ⊃ a∧b$.
Disjunctions
At this point, we bring in the axioms
$$O: (a ⊃ c) ⊃ (b ⊃ c) ⊃ a∨b ⊃ c,\quad σ_0: a ⊃ a∨b,\quad σ_1: b ⊃ a∨b,$$
define $[f,g] = Ofg: a∨b ⊃ c$, for $f: a ⊃ c$ and $g: b ⊃ c$, and postulate the identities
$$
[f,g] ∘ σ_0 = f,\quad [f,g] ∘ σ_1 = g,\\
[h ∘ σ_0, h ∘ σ_1] = h\quad (h: a∨b ⊃ c).
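Under Curry-Howard, $∨$ becomes the sum type, and these axioms are Haskell's `either`, `Left` and `Right` (a sketch; the names `caseO`, `inj0`, `inj1` are mine):

```haskell
-- O proves (a ⊃ c) ⊃ (b ⊃ c) ⊃ a∨b ⊃ c: case analysis on a sum.
caseO :: (a -> c) -> (b -> c) -> Either a b -> c
caseO = either

-- σ₀ and σ₁ prove a ⊃ a∨b and b ⊃ a∨b: the two injections.
inj0 :: a -> Either a b
inj0 = Left

inj1 :: b -> Either a b
inj1 = Right
```

The postulated identities say that `caseO f g` is determined by its behavior on the two injections.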
$$
An example proof would be of the proposition
$$(a∨b)∧c ⊃ a∨(b∧c).$$
We start with $x: a$, $y: b$, $z: c$. Then $σ_0x: a∨(b∧c)$, $(y,z): b∧c$, $σ_1(y,z): a∨(b∧c)$. Thus,
$$
λx·σ_0x = σ_0: a ⊃ a∨(b∧c),\quad λy·σ_1(y,z) = λy·σ_1(Ayz) = Bσ_1(CAz): b ⊃ a∨(b∧c),\\
[λx·σ_0x,λy·σ_1(y,z)] = [σ_0,Bσ_1(CAz)]: a∨b ⊃ a∨(b∧c).
$$
Let $w: a∨b$. Then $(w,z): (a∨b)∧c$ and
$$
[σ_0,Bσ_1(CAz)]w: a∨(b∧c),\\
λ(w,z)·[σ_0,Bσ_1(CAz)]w: (a∨b)∧c ⊃ a∨(b∧c).
$$
This works out to the following
$$\begin{align}
λ(w,z)·[σ_0,Bσ_1(CAz)]w
&= λ(w,z)·Oσ_0(Bσ_1(CAz))w\\
&= λv·Oσ_0(Bσ_1(CA(π_1v)))(π_0v)\\
&= S(B(Oσ_0)(B(Bσ_1)(B(CA)π_1)))π_0\\
&= S(Oσ_0 ∘ Bσ_1 ∘ CA ∘ π_1)π_0.
\end{align}$$
Laid out line-by-line, with the $(\_) ∘ (\_)$ lemma used, the proof has the following form:
$$\begin{array}{ll}
S:& (f ⊃ d ⊃ g) ⊃ (f ⊃ d) ⊃ f ⊃ g\\
O:& (a ⊃ g) ⊃ (b ⊃ g) ⊃ a∨b ⊃ g\\
σ₀:& a ⊃ a∨e\\
O σ₀:& (b ⊃ g) ⊃ a∨b ⊃ g\\
B:& (e ⊃ g) ⊃ (b ⊃ e) ⊃ b ⊃ g\\
σ₁:& e ⊃ a∨e\\
B σ₁:& (b ⊃ e) ⊃ b ⊃ g\\
O σ₀ ∘ B σ₁:& (b ⊃ e) ⊃ a∨b ⊃ g\\
C:& (b ⊃ c ⊃ e) ⊃ c ⊃ b ⊃ e\\
A:& b ⊃ c ⊃ b∧c\\
C A:& c ⊃ b ⊃ e\\
O σ₀ ∘ B σ₁ ∘ C A:& c ⊃ a∨b ⊃ g\\
π₁:& d∧c ⊃ c\\
O σ₀ ∘ B σ₁ ∘ C A ∘ π₁:& d∧c ⊃ a∨b ⊃ g\\
S (O σ₀ ∘ B σ₁ ∘ C A ∘ π₁):& (f ⊃ d) ⊃ f ⊃ g\\
π₀:& d∧c ⊃ d\\
S (O σ₀ ∘ B σ₁ ∘ C A ∘ π₁) π₀:& f ⊃ g = (a∨b)∧c ⊃ a∨(b∧c)
\end{array}$$
where
$$d = a∨b,\quad e = b∧c,\quad f = d∧c = (a∨b)∧c,\quad g = a∨e = a∨(b∧c)$$
ensure the matching for the modus ponens steps and the $(\_) ∘ (\_)$ steps.
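Transcribing the final term $S(B(Oσ_0)(B(Bσ_1)(B(CA)π_1)))π_0$ into the Haskell sketch (with `either`, `Left`, `Right`, `(,)`, `fst`, `snd` for $O$, $σ_0$, $σ_1$, $A$, $π_0$, $π_1$), the inferred type is exactly the proposition proved:

```haskell
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

b :: (y -> z) -> (x -> y) -> x -> z
b f g = f . g

c :: (x -> y -> z) -> y -> x -> z
c f x y = f y x

-- S(B(Oσ₀)(B(Bσ₁)(B(CA)π₁)))π₀ proves (a∨b)∧c ⊃ a∨(b∧c).
distrib :: (Either a b, c) -> Either a (b, c)
distrib = s (b (either Left) (b (b Right) (b (c (,)) snd))) fst
```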
More examples, involving the proof of distributivity for conjunction, disjunction and negation, may be found in Distributivity (And Negation). Negation can be introduced with the axiom
$$Z: (¬a ⊃ ¬b) ⊃ b ⊃ a,$$
but I won't talk about it in depth here. Another example involving negation is in Proof Of A Negation Formula. Providing an internal language, similar to the one laid out above, for the $Z$ axiom or other properties of negation would require going beyond the Typed λ-Calculus, Combinatory Logic and the original formulations of Curry-Howard into something larger involving "continuations", such as the λμ-Calculus. That is one of many alternative formulations posed in the literature, and I don't see any sign of the literature settling on an overall consensus.