Working System
This chapter uses reusable code from book/advanced/sequent_imp.py.
We work in the implication-only fragment (atoms + \(\to\)). The implementation provides:
a depth-bounded proof search procedure in a sequent calculus,
extraction of a natural-deduction proof term (a lambda term),
an HTML proof-tree renderer styled like bussproofs.
Proof search is not complete for all inputs (it is depth-bounded), but it is enough to explore the core ideas interactively.
14. Sequent Calculus Perspective and Equivalence to Natural Deduction#
Goal. Learn sequent calculus rules for implication and see how sequent proofs correspond to natural deduction and lambda terms.
You will learn
What a sequent \(\Gamma \vdash A\) means
The three core rules for \(\to\): axiom, right-introduction, left-introduction
How proof search in sequent calculus can synthesize a proof term
How to render proofs as readable proof trees in the HTML book
Roadmap#
State the sequent rules for the implication fragment.
Run proof search on small goals and render proof trees.
Extract natural-deduction terms (lambda terms) from sequent proofs.
Use failures (e.g. Peirce’s law) to see the intuitionistic boundary.
Sequent Rules (Implication Fragment)#
A sequent has the form

\[\Gamma \vdash A\]

where \(\Gamma\) is a context (a list of assumption formulas) and \(A\) is the goal formula.
For implication-only logic, sequent calculus has a tiny core.
Rules
Axiom:

\[\frac{}{\Gamma, A \vdash A}\ \mathrm{Ax}\]

Right introduction (\(\to\)-intro):

\[\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\ R{\to}\]

Left introduction (\(\to\)-elim in sequent form):

\[\frac{\Gamma, A \to B \vdash A \qquad \Gamma, B \vdash C}{\Gamma, A \to B \vdash C}\ L{\to}\]
Intuition: \(R\to\) is “assume \(A\) and prove \(B\)”; \(L\to\) is the structured way to use an implication in the context.
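To make the three rules concrete, here is a minimal standalone sketch of the search they induce. All names here are illustrative and independent of the book's `sequent_imp` module; the formula classes are redefined so the sketch is self-contained.

```python
from dataclasses import dataclass

# Illustrative formula classes (not the book's API).
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:
    left: object   # antecedent A
    right: object  # consequent B

def provable(ctx, goal, depth):
    """Depth-bounded search over Ax, R->, L->; ctx is a tuple of formulas."""
    if depth <= 0:
        return False
    # Axiom: the goal is already an assumption.
    if goal in ctx:
        return True
    # R->: to prove A -> B, assume A and prove B.
    if isinstance(goal, Imp):
        return provable(ctx + (goal.left,), goal.right, depth - 1)
    # L->: pick an implication A -> B from the context; prove A
    # (still allowed to use A -> B), then continue with B in its place.
    for i, f in enumerate(ctx):
        if isinstance(f, Imp):
            rest = ctx[:i] + ctx[i + 1:]
            if provable(ctx, f.left, depth - 1) and \
               provable(rest + (f.right,), goal, depth - 1):
                return True
    return False

A, B = Atom("A"), Atom("B")
print(provable((), Imp(A, A), 5))                   # identity: True
print(provable((), Imp(Imp(Imp(A, B), A), A), 12))  # Peirce: False
```

The depth bound makes the search terminate even though the \(L\to\) rule keeps the implication available in its left premise.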
ND Correspondence (Proof Terms)#
In the Curry-Howard correspondence, implication corresponds to function types.
Axiom corresponds to using a variable.
\(R\to\) corresponds to lambda abstraction.
\(L\to\) corresponds to application (plus a substitution step).
So if sequent search finds a proof of \(\vdash A\), we can extract a lambda term (natural deduction proof term) and check it.
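The checking direction of this correspondence can be sketched as a small bidirectional type checker: variables and applications synthesize their types, lambdas are checked against arrow types. This is a self-contained illustration with made-up class names, not the book's `check_nd`.

```python
from dataclasses import dataclass

# Formulas (types) and proof terms; illustrative names, not the book's API.
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:
    left: object
    right: object

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def infer(term, ctx):
    """Synthesize a type for variables (Axiom) and applications (elimination)."""
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, App):
        f = infer(term.fun, ctx)
        if isinstance(f, Imp) and check(term.arg, f.left, ctx):
            return f.right
    return None

def check(term, ty, ctx):
    """Check lambdas against arrows (R->); otherwise fall back to inference."""
    if isinstance(term, Lam) and isinstance(ty, Imp):
        return check(term.body, ty.right, {**ctx, term.var: ty.left})
    return infer(term, ctx) == ty

A, B = Atom("A"), Atom("B")
# K = lambda x. lambda y. x proves A -> (B -> A).
k = Lam("x", Lam("y", Var("x")))
print(check(k, Imp(A, Imp(B, A)), {}))  # True
```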
```python
from pathlib import Path
import sys

book_root = Path.cwd()
if not (book_root / "advanced").exists():
    book_root = book_root.parent
if str(book_root) not in sys.path:
    sys.path.insert(0, str(book_root))

from advanced import sequent_imp as sq
from IPython.display import HTML, display
```
A First Proof Tree: Identity \(A \to A\)#
We ask sequent search to prove \(\vdash A \to A\).
We then:
render the sequent proof as a proof tree,
extract a natural-deduction term,
check the extracted term against the goal.
```python
A = sq.Atom("A")
goal = sq.Imp(A, A)

pf, term = sq.prove_and_extract(goal, depth=10)
print("goal:", sq.show_f(goal))
print("found proof:", pf is not None)
print("extracted term:", sq.show_t(term))
print("ND check:", sq.check_nd(term, goal, {}))
display(HTML(sq.render_proof_tree(pf)))
```
goal: (A -> A)
found proof: True
extracted term: (lambda h0. h0)
ND check: True
Two Canonical Shapes: K and S#
Many proof terms in implication-only logic correspond to classic combinators.
K: \(A \to (B \to A)\)
S: \((A \to (B \to C)) \to ((A \to B) \to (A \to C))\)
These are important because:
they show how purely logical rules synthesize reusable programs,
they form a basis for combinatory logic (no variables).
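As a quick sanity check of the "reusable programs" claim, the K and S proof terms can be written directly as curried Python lambdas, and the classic combinatory-logic identity S K K = I actually runs:

```python
# K and S as ordinary Python functions (curried, mirroring the proof terms).
K = lambda a: lambda b: a
S = lambda f: lambda g: lambda x: f(x)(g(x))

# Classic combinatory-logic fact: S K K behaves like the identity.
I = S(K)(K)
print(I(42))  # 42
```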
```python
A = sq.Atom("A")
B = sq.Atom("B")
C = sq.Atom("C")

k_goal = sq.Imp(A, sq.Imp(B, A))
s_goal = sq.Imp(
    sq.Imp(A, sq.Imp(B, C)),
    sq.Imp(sq.Imp(A, B), sq.Imp(A, C)),
)

for name, g, depth in [("K", k_goal, 14), ("S", s_goal, 26)]:
    pf, term = sq.prove_and_extract(g, depth=depth)
    print("\n" + name)
    print("  goal:", sq.show_f(g))
    print("  term:", sq.show_t(term))
    print("  check:", sq.check_nd(term, g, {}))
    display(HTML(sq.render_proof_tree(pf)))
```
K
goal: (A -> (B -> A))
term: (lambda h0. (lambda h1. h0))
check: True
S
goal: ((A -> (B -> C)) -> ((A -> B) -> (A -> C)))
term: (lambda h0. (lambda h1. (lambda h2. ((h0 h2) (h1 h2)))))
check: True
Proof Search and Intuitionistic Boundaries#
Because this fragment corresponds to intuitionistic logic, some classical principles are not derivable.
A standard example is Peirce’s law:

\[((A \to B) \to A) \to A\]

Our proof search should fail to find a proof.
```python
A = sq.Atom("A")
B = sq.Atom("B")
peirce = sq.Imp(sq.Imp(sq.Imp(A, B), A), A)

print("goal:", sq.show_f(peirce))
for d in [8, 10, 12, 16, 20]:
    pf = sq.prove((), peirce, depth=d)
    print("  depth", d, "-> provable:", pf is not None)
```
goal: (((A -> B) -> A) -> A)
depth 8 -> provable: False
depth 10 -> provable: False
depth 12 -> provable: False
depth 16 -> provable: False
depth 20 -> provable: False
Reading Equivalence Computationally#
One good way to read “sequent calculus = natural deduction” is:
sequent proofs provide a goal-directed way to search for proofs,
extracted terms are exactly the natural-deduction proof terms (lambda programs),
type-checking / goal-checking the term is the computational confirmation.
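That reading can be sketched end-to-end in a few lines: a depth-bounded search that returns a proof-term string directly, with the Ax, \(R\to\), and \(L\to\) cases building a variable, a lambda, and an application, respectively. This is an illustrative standalone sketch, not the book's `prove_and_extract`; the naive string substitution in the \(L\to\) case is fragile for large proofs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:
    left: object
    right: object

def prove_term(ctx, goal, depth, fresh=0):
    """Return a proof-term string for ctx |- goal, or None if search fails.

    ctx is a tuple of (hypothesis name, formula) pairs; illustrative sketch.
    """
    if depth <= 0:
        return None
    for name, f in ctx:        # Axiom -> use the hypothesis variable
        if f == goal:
            return name
    if isinstance(goal, Imp):  # R-> -> lambda abstraction
        h = f"h{fresh}"
        body = prove_term(ctx + ((h, goal.left),), goal.right, depth - 1, fresh + 1)
        return None if body is None else f"(lambda {h}. {body})"
    for i, (name, f) in enumerate(ctx):   # L-> -> application + substitution
        if isinstance(f, Imp):
            arg = prove_term(ctx, f.left, depth - 1, fresh)
            if arg is None:
                continue
            h = f"h{fresh}"
            rest = ctx[:i] + ctx[i + 1:] + ((h, f.right),)
            body = prove_term(rest, goal, depth - 1, fresh + 1)
            if body is not None:
                # Replace the fresh hypothesis for B with the application.
                return body.replace(h, f"({name} {arg})")
    return None

A, B = Atom("A"), Atom("B")
print(prove_term((), Imp(A, Imp(B, A)), 8))  # (lambda h0. (lambda h1. h0))
```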
Limitations
The proof search here is depth-bounded and tuned for pedagogy. For real proof search, you would add focusing/strategies and richer connectives.
Reuse Note
All core definitions are reusable from advanced/sequent_imp.py:
formulas: Atom, Imp
proof search: prove(ctx, goal, depth=...)
extraction: extract_nd(proof)
ND checking: check_nd(term, goal, ctx)
proof trees: render_proof_tree(proof)
Try passing show_context=True to render_proof_tree if you want to show the full sequent at each node.
Exercise
Prove and render \(A \to (A \to A)\) and extract the term.
Prove \((B \to C) \to (A \to B) \to (A \to C)\) (composition) and inspect the extracted term.
Show that Peirce’s law fails for increasing depth bounds.
Build a non-empty context ctx = (("h", A->B), ("a", A)) and prove B from it; confirm the extracted term is h a.
See solutions: Exercise 14 (Sequent Calculus)
Summary#
Sequent calculus is a structured proof system for \(\Gamma \vdash A\) judgments.
In the implication fragment, three rules (Ax, R→, L→) already generate rich proof terms.
Proof search synthesizes sequent proofs; extraction yields natural deduction / lambda terms.
Failure cases (like Peirce) cleanly illustrate the intuitionistic vs classical boundary.
Next: Chapter 15 adds control operators to recover classical reasoning computationally.