Working System
This chapter works in the simply-typed lambda calculus (STLC), which corresponds to intuitionistic propositional logic.
4. Propositional Logic via Curry-Howard#
Goal. Reconstruct core propositional logic using the simply-typed lambda calculus. Proofs become programs, and propositions become types.
You will learn
How implication, conjunction, and disjunction correspond to function, product, and sum types
How introduction and elimination rules become term constructors and evaluators
How to read a typed lambda term as a proof object (intuitionistic logic)
How to type-check proof terms automatically in Python
Roadmap#
Define the proposition/type language (A -> B, A and B, A or B, top, bottom).
Introduce term constructors (lambda, application, pairs, injections, case).
Implement and use a type checker as an automatic proof validator.
Build proof terms for core theorems (K, projections, pairing, commutativity).
Connect proof terms to proof trees and normalization.
Definition (Core Propositions)
We model a minimal propositional logic with:
Implication: A → B
Conjunction: A ∧ B
Disjunction: A ∨ B
Truth: ⊤
Falsehood: ⊥
In the simply-typed lambda calculus:
A proof of A → B is a function from proofs of A to proofs of B.
A proof of A ∧ B is a pair of proofs (a, b).
A proof of A ∨ B is either inl a or inr b.
⊤ has a single trivial proof unit.
⊥ has no proofs; from ⊥ anything follows (ex falso).
This corresponds to intuitionistic propositional logic. Classical principles such as excluded middle are not derivable here without extra axioms.
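Before the formal development, the correspondence can be previewed in ordinary Python: plain functions for implication, tuples for conjunction, and tagged pairs for disjunction. This is only an informal sketch of the reading above, not the chapter's `stlc_core` library:

```python
# Informal proofs-as-programs sketch in plain (untyped) Python.
# Implication A -> B: a function from evidence for A to evidence for B.
# Conjunction A ∧ B: a pair (a, b).
# Disjunction A ∨ B: a tagged value ("inl", a) or ("inr", b).

def and_comm_py(p):
    """A ∧ B -> B ∧ A: swap the components of the pair."""
    a, b = p
    return (b, a)

def or_comm_py(p):
    """A ∨ B -> B ∨ A: case analysis on the tag."""
    tag, val = p
    return ("inr", val) if tag == "inl" else ("inl", val)

print(and_comm_py((1, "x")))   # ('x', 1)
print(or_comm_py(("inl", 1)))  # ('inr', 1)
```

Python does no type checking here, of course; the typed terms below make the same constructions while letting the checker verify them as proofs.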
from pathlib import Path
import sys

book_root = Path.cwd()
if not (book_root / 'stlc_core.py').exists():
    book_root = book_root.parent
if str(book_root) not in sys.path:
    sys.path.insert(0, str(book_root))
from stlc_core import (
    TyVar,
    Arrow,
    And,
    Or,
    Unit,
    Void,
    Type,
    show_type,
    Var,
    Lam,
    App,
    Pair,
    Fst,
    Snd,
    Inl,
    Inr,
    Case,
    UnitVal,
    Abort,
    Term,
    type_of,
)
Remark (Normalization = Proof Simplification)
Typing is the main focus of this chapter: we check that terms are valid proofs. Beta-reduction then simplifies those proof terms to normal form (proof normalization).
Explicit reduction engines are developed in Chapters 2 and 8.
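As a preview, beta-reduction is just function application: a function that is built and immediately applied is a detour that simplifies away. A minimal illustration in plain Python, not the chapter's term representation:

```python
# The proof term (λx:A. λy:B. x) u v contains a detour: lambdas are
# built and immediately applied. In Python the same shape just runs.
u, v = "proof-of-A", "proof-of-B"
k = lambda x: lambda y: x  # corresponds to λx:A. λy:B. x
result = k(u)(v)           # two beta steps
print(result)              # proof-of-A
```

The worked normalization example later in this chapter performs the same two steps symbolically on the typed term.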
Example (Identity)
λx:A. x proves A → A.
A = TyVar("A")
B = TyVar("B")
identity = Lam("x", A, Var("x"))
print(show_type(type_of(identity, {})))
(A -> A)
Example (And-Introduction)
λx:A. λy:B. (x, y) proves A → B → (A ∧ B).
and_intro = Lam("x", A, Lam("y", B, Pair(Var("x"), Var("y"))))
print(show_type(type_of(and_intro, {})))
(A -> (B -> (A ∧ B)))
Lemma (Commutativity of ∧)
A ∧ B → B ∧ A is provable via λp. (snd p, fst p).
and_comm = Lam("p", And(A, B), Pair(Snd(Var("p")), Fst(Var("p"))))
print(show_type(type_of(and_comm, {})))
((A ∧ B) -> (B ∧ A))
Lemma (Commutativity of ∨)
A ∨ B → B ∨ A is provable by case analysis.
or_comm = Lam(
    "p",
    Or(A, B),
    Case(
        Var("p"),
        "a",
        Inr(Var("a"), B),
        "b",
        Inl(Var("b"), A),
    ),
)
print(show_type(type_of(or_comm, {})))
((A ∨ B) -> (B ∨ A))
Visual: Proof Trees (Bussproofs Style)#
The derivations below mirror the terms proved in this chapter.
Example (Ex Falso)
From ⊥ we can derive any proposition A.
ex_falso = Lam("v", Void(), Abort(Var("v"), A))
print(show_type(type_of(ex_falso, {})))
(⊥ -> A)
Worked Normalization Example (Implication Fragment)#
We now show an explicit proof-normalization step for implication-only terms. Let

\(t = (\lambda x{:}A.\ \lambda y{:}B.\ x)\ u\ v\)

in context \(\Gamma = \{u:A,\ v:B\}\). The type of \(t\) is \(A\). Beta-reduction gives:

\((\lambda x{:}A.\ \lambda y{:}B.\ x)\ u\ v \;\to\; (\lambda y{:}B.\ u)\ v \;\to\; u\)

So normalization removes detours and yields the canonical proof term u : A.
# Small beta-reducer for the implication fragment (Var/Lam/App)

def show_imp(t: Term) -> str:
    if isinstance(t, Var):
        return t.name
    if isinstance(t, Lam):
        return f"(lambda {t.var}:{show_type(t.var_type)}. {show_imp(t.body)})"
    if isinstance(t, App):
        return f"({show_imp(t.fn)} {show_imp(t.arg)})"
    raise TypeError("show_imp expects implication-fragment terms")

def free_vars_imp(t: Term):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars_imp(t.body) - {t.var}
    if isinstance(t, App):
        return free_vars_imp(t.fn) | free_vars_imp(t.arg)
    raise TypeError("free_vars_imp expects implication-fragment terms")

def fresh_name(used, hint="x"):
    i = 0
    while True:
        cand = f"{hint}{i}"
        if cand not in used:
            return cand
        i += 1

def rename_bound_imp(body: Term, old: str, new: str) -> Term:
    if isinstance(body, Var):
        return Var(new) if body.name == old else body
    if isinstance(body, App):
        return App(rename_bound_imp(body.fn, old, new), rename_bound_imp(body.arg, old, new))
    if isinstance(body, Lam):
        if body.var == old:
            return body
        return Lam(body.var, body.var_type, rename_bound_imp(body.body, old, new))
    raise TypeError("rename_bound_imp expects implication-fragment terms")

def subst_imp(t: Term, var: str, replacement: Term) -> Term:
    if isinstance(t, Var):
        return replacement if t.name == var else t
    if isinstance(t, App):
        return App(subst_imp(t.fn, var, replacement), subst_imp(t.arg, var, replacement))
    if isinstance(t, Lam):
        if t.var == var:
            return t
        if var not in free_vars_imp(t.body):
            return t
        if t.var in free_vars_imp(replacement):
            # Capture-avoiding substitution: rename the bound variable first.
            used = free_vars_imp(t.body) | free_vars_imp(replacement) | {t.var, var}
            new_var = fresh_name(used, t.var)
            renamed = Lam(new_var, t.var_type, rename_bound_imp(t.body, t.var, new_var))
            return Lam(renamed.var, renamed.var_type, subst_imp(renamed.body, var, replacement))
        return Lam(t.var, t.var_type, subst_imp(t.body, var, replacement))
    raise TypeError("subst_imp expects implication-fragment terms")

def beta_step_imp(t: Term):
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return subst_imp(t.fn.body, t.fn.var, t.arg), True
    if isinstance(t, App):
        fn_new, changed = beta_step_imp(t.fn)
        if changed:
            return App(fn_new, t.arg), True
        arg_new, changed = beta_step_imp(t.arg)
        if changed:
            return App(t.fn, arg_new), True
        return t, False
    if isinstance(t, Lam):
        body_new, changed = beta_step_imp(t.body)
        if changed:
            return Lam(t.var, t.var_type, body_new), True
        return t, False
    return t, False

def trace_normalize_imp(t: Term, max_steps: int = 20):
    out = [t]
    cur = t
    for _ in range(max_steps):
        cur, changed = beta_step_imp(cur)
        if not changed:
            break
        out.append(cur)
    return out
u = Var("u")
v = Var("v")
term = App(App(Lam("x", A, Lam("y", B, Var("x"))), u), v)
ctx = {"u": A, "v": B}

print("Type before normalization:", show_type(type_of(term, ctx)))
trace = trace_normalize_imp(term)
for i, t in enumerate(trace):
    print(f"t{i} = {show_imp(t)}")
print("Type after normalization:", show_type(type_of(trace[-1], ctx)))
Type before normalization: A
t0 = (((lambda x:A. (lambda y:B. x)) u) v)
t1 = ((lambda y:B. u) v)
t2 = u
Type after normalization: A
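A term is in normal form exactly when it contains no remaining redex, i.e. no lambda applied directly to an argument. A stand-alone check of this property, using a throwaway tuple encoding ("var", x) / ("lam", x, body) / ("app", f, a) that is only for this illustration, not `stlc_core`'s classes:

```python
def has_redex(t):
    """True if the term contains an application whose function is a lambda."""
    tag = t[0]
    if tag == "var":
        return False
    if tag == "lam":
        return has_redex(t[2])
    # tag == "app": redex at the root, or inside either subterm
    _, f, a = t
    return f[0] == "lam" or has_redex(f) or has_redex(a)

# t0 = (λx. λy. x) u v   (the worked example, types omitted here)
t0 = ("app", ("app", ("lam", "x", ("lam", "y", ("var", "x"))), ("var", "u")), ("var", "v"))
t2 = ("var", "u")
print(has_redex(t0), has_redex(t2))  # True False
```

The trace above stops precisely when this predicate becomes false: t2 = u is the normal-form proof.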
Automatic Proof-Term Checking#
Given a candidate proof term t and a goal type G, we can automatically check whether type_of(t) = G.
This is useful for fast validation while developing proofs in notebooks.
def check_goal(name: str, term: Term, goal: Type, ctx=None, strict: bool = True) -> bool:
    if ctx is None:
        ctx = {}
    inferred = type_of(term, ctx)
    ok = inferred == goal
    status = "OK" if ok else "FAIL"
    print(f"[{status}] {name}: inferred {show_type(inferred)} ; goal {show_type(goal)}")
    if strict and not ok:
        raise AssertionError(f"Goal check failed for {name}")
    return ok
check_goal("identity", identity, Arrow(A, A))
check_goal("and_intro", and_intro, Arrow(A, Arrow(B, And(A, B))))
check_goal("and_comm", and_comm, Arrow(And(A, B), And(B, A)))
check_goal("or_comm", or_comm, Arrow(Or(A, B), Or(B, A)))
check_goal("ex_falso", ex_falso, Arrow(Void(), A))
[OK] identity: inferred (A -> A) ; goal (A -> A)
[OK] and_intro: inferred (A -> (B -> (A ∧ B))) ; goal (A -> (B -> (A ∧ B)))
[OK] and_comm: inferred ((A ∧ B) -> (B ∧ A)) ; goal ((A ∧ B) -> (B ∧ A))
[OK] or_comm: inferred ((A ∨ B) -> (B ∨ A)) ; goal ((A ∨ B) -> (B ∨ A))
[OK] ex_falso: inferred (⊥ -> A) ; goal (⊥ -> A)
True
Exercise
1. Construct a term proving A -> (B -> A and B).
2. Build a term proving A and B -> A.
3. Show that A -> (B -> A) is derivable (the K combinator).
4. Construct a term proving commutativity of disjunction: A or B -> B or A.

See solution: Exercise 4 (Propositional Logic)
Summary#
In STLC (intuitionistic), types are propositions and terms are proofs.
Implication corresponds to functions; conjunction to pairs; disjunction to tagged injections with case analysis.
Type checking is proof checking: if type_of(t) = A, then t is evidence for A.
Operationally: use check_goal(...) to validate candidate proof terms while you build them.
Normalization corresponds to proof simplification (a preview of later chapters).
Next: encode data (booleans and numbers) in the untyped lambda calculus via Church numerals (Chapter 5).