Working System

This chapter uses reusable code from book/advanced/stlc_nbe.py. The notebook is organized into small chunks: syntax, typing, semantic evaluation, and NbE normalization.

12. Normalization by Evaluation (NbE)#

Goal. Explain and implement NbE with enough mathematical context to connect it to proof normalization.

For simply typed lambda calculus (STLC), NbE computes beta-eta normal forms by:

  1. evaluating a term semantically,

  2. reifying semantic values back to syntax.

You will learn

  • The semantic idea behind NbE: compute normal forms via evaluation, not by rewriting syntax

  • The roles of neutrals and normal forms in STLC

  • The mutually recursive maps reflect and reify

  • How NbE produces beta-eta normal forms (including eta-long structure)

  • How to use the reusable advanced/stlc_nbe.py prototype

Roadmap#

  1. State the semantic setup (types, neutrals, normal forms).

  2. Implement STLC syntax and a type checker (provided by advanced/stlc_nbe.py).

  3. Implement semantic evaluation and the reflect/reify bridge.

  4. Run NbE on small and composite examples.

  5. Inspect eta behavior and connect to proof normalization.

Mathematical Background#

Let \(\lbrack\!\lbrack A \rbrack\!\rbrack\) denote the semantic interpretation of type \(A\).

\[ \lbrack\!\lbrack \alpha \rbrack\!\rbrack = \text{Neutrals at } \alpha, \qquad \lbrack\!\lbrack A \to B \rbrack\!\rbrack = \lbrack\!\lbrack A \rbrack\!\rbrack \to \lbrack\!\lbrack B \rbrack\!\rbrack. \]
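Concretely, a Python implementation can interpret arrow types by the host language's function space and base types by stuck (neutral) terms. The following is a self-contained sketch of that idea; the class names (`NeVar`, `NeApp`) are illustrative and not the API of advanced/stlc_nbe.py.

```python
from dataclasses import dataclass

# Neutrals: stuck computations headed by a variable.
@dataclass(frozen=True)
class NeVar:
    name: str       # a blocked variable, e.g. "x"

@dataclass(frozen=True)
class NeApp:
    fn: object      # a neutral
    arg: object     # a semantic value it is stuck on

# [[base type]] = neutrals at that type; [[A -> B]] = Python functions.
def twice(f):       # an inhabitant of [[(A -> A) -> (A -> A)]]
    return lambda x: f(f(x))

def ident(x):       # an inhabitant of [[A -> A]]
    return x

stuck = NeVar("x")              # a neutral at base type A
print(twice(ident)(stuck))      # -> NeVar(name='x'): the neutral flows through
```

Note that application at arrow types is just Python application; neutrals only appear where computation is genuinely blocked, which is what makes reification possible later.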

NbE uses two mutually recursive maps for each type \(A\):

\[ \mathrm{reflect}_A : \mathrm{Ne}_A \to \lbrack\!\lbrack A \rbrack\!\rbrack, \qquad \mathrm{reify}_A : \lbrack\!\lbrack A \rbrack\!\rbrack \to \mathrm{NF}_A. \]

Given a well-typed term \(\Gamma \vdash t : A\), define

\[ \mathrm{nbe}(t) := \mathrm{reify}_A(\lbrack\!\lbrack t \rbrack\!\rbrack), \]

where \(\lbrack\!\lbrack t \rbrack\!\rbrack\) is semantic evaluation.
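To make these maps concrete, here is a self-contained miniature NbE for STLC, independent of advanced/stlc_nbe.py. All names and representation choices (de Bruijn indices for syntax, neutrals as depth-indexed term builders) are illustrative, not the module's API.

```python
from dataclasses import dataclass

# ----- types -----
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arr:
    src: object
    dst: object

# ----- terms (de Bruijn indices) -----
@dataclass(frozen=True)
class Var:
    idx: int

@dataclass(frozen=True)
class Lam:
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def ev(t, env):
    """Evaluate into the semantics: lambdas become Python closures."""
    if isinstance(t, Var):
        return env[t.idx]
    if isinstance(t, Lam):
        return lambda v: ev(t.body, [v] + env)
    return ev(t.fn, env)(ev(t.arg, env))

def reflect(ty, ne):
    """reflect_A : neutrals -> [[A]].  A neutral is a function depth -> term."""
    if isinstance(ty, Arr):
        return lambda v: reflect(ty.dst,
                                 lambda d: App(ne(d), reify(ty.src, v, d)))
    return ne

def reify(ty, v, depth=0):
    """reify_A : [[A]] -> normal forms, eta-expanding at arrow types."""
    if isinstance(ty, Arr):
        fresh = reflect(ty.src, lambda d: Var(d - depth - 1))  # fresh var at level `depth`
        return Lam(ty.src, reify(ty.dst, v(fresh), depth + 1))
    return v(depth)

def nbe(t, ty):
    return reify(ty, ev(t, []))

# (const id) id at X = A -> A collapses to id:
A = Base("A")
X = Arr(A, A)
id_tm = Lam(A, Var(0))
const = Lam(X, Lam(X, Var(1)))
print(nbe(App(App(const, id_tm), id_tm), X))
# -> Lam(ty=Base(name='A'), body=Var(idx=0))
```

Representing a neutral as a function from binder depth to syntax lets reify convert de Bruijn levels into indices only at the end, so no shifting of open terms is ever needed.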

Why This Is Better Than Naive Rewriting#

Naive beta-reduction repeatedly rewrites syntax and can duplicate large subterms along the way. NbE pushes the work into the semantic function space and reconstructs syntax only once, at the end.

Conceptually:

\[ \text{Syntax} \xrightarrow{\text{eval}} \text{Semantics} \xrightarrow{\text{reify}} \text{Normal Form Syntax}. \]

This pattern is standard in proof assistants and dependently typed kernels.

from pathlib import Path
import sys

book_root = Path.cwd()
if not (book_root / "advanced").exists():
    book_root = book_root.parent
if str(book_root) not in sys.path:
    sys.path.insert(0, str(book_root))

from advanced import stlc_nbe as nbe

Syntax and Type Constructors#

We first build reusable terms and inspect their printed form.

A = nbe.Base("A")
B = nbe.Base("B")

id_tm = nbe.TLam(A, nbe.TVar(0))
const_tm = nbe.TLam(A, nbe.TLam(B, nbe.TVar(1)))

print("id   =", nbe.show_tm(id_tm))
print("const=", nbe.show_tm(const_tm))
id   = (lambda x0:A. x0)
const= (lambda x0:A. (lambda x1:B. x0))

Typing Judgments#

In STLC, typing rules include:

\[ \frac{\Gamma, x:A \vdash t:B}{\Gamma \vdash \lambda x:A.t : A\to B} \qquad \frac{\Gamma \vdash f:A\to B \quad \Gamma \vdash u:A}{\Gamma \vdash f\;u:B}. \]
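These two rules, plus the variable rule, translate directly into a recursive function. Below is a minimal self-contained sketch using de Bruijn indices; the module's infer may differ in its details.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arr:
    src: object
    dst: object

@dataclass(frozen=True)
class Var:
    idx: int

@dataclass(frozen=True)
class Lam:
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def infer(t, ctx):
    """ctx lists the types of bound variables, innermost binder first."""
    if isinstance(t, Var):                      # variable rule: look up in Gamma
        return ctx[t.idx]
    if isinstance(t, Lam):                      # abstraction rule
        return Arr(t.ty, infer(t.body, [t.ty] + ctx))
    fn_ty = infer(t.fn, ctx)                    # application rule
    arg_ty = infer(t.arg, ctx)
    if not (isinstance(fn_ty, Arr) and fn_ty.src == arg_ty):
        raise TypeError("ill-typed application")
    return fn_ty.dst

A, B = Base("A"), Base("B")
print(infer(Lam(A, Lam(B, Var(1))), []))        # the type (A -> (B -> A))
```

The application case is where ill-typed terms are rejected: the function part must have an arrow type whose source matches the argument's type exactly.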

We check these algorithmically via infer.

print("type(id)   =", nbe.show_ty(nbe.infer(id_tm, [])))
print("type(const)=", nbe.show_ty(nbe.infer(const_tm, [])))

assert nbe.infer(id_tm, []) == nbe.Arr(A, A)
assert nbe.infer(const_tm, []) == nbe.Arr(A, nbe.Arr(B, A))
type(id)   = (A -> A)
type(const)= (A -> (B -> A))

NbE on a Composite Term#

We normalize ((const id) id), which should collapse to id. Because STLC is monomorphic, const must be rebuilt at the instance type \(X = A \to A\) rather than reused at its earlier type.

X = nbe.Arr(A, A)
const_on_X = nbe.TLam(X, nbe.TLam(X, nbe.TVar(1)))
term = nbe.TApp(nbe.TApp(const_on_X, id_tm), id_tm)

print("term       =", nbe.show_tm(term))
print("term type  =", nbe.show_ty(nbe.infer(term, [])))

nf_term = nbe.nbe(term, X)
print("normal form=", nbe.show_tm(nf_term))
term       = (((lambda x0:(A -> A). (lambda x1:(A -> A). x0)) (lambda x0:A. x0)) (lambda x0:A. x0))
term type  = (A -> A)
normal form= (lambda x0:A. x0)

Eta-Long Shape#

For the function type \(A \to A\), normal forms are lambdas. NbE reifies semantic functions back into explicit lambda form, so the output is eta-long where needed.

eta_candidate = nbe.TLam(A, nbe.TApp(id_tm, nbe.TVar(0)))
eta_nf = nbe.nbe(eta_candidate, nbe.Arr(A, A))

print("input :", nbe.show_tm(eta_candidate))
print("nbe   :", nbe.show_tm(eta_nf))
input : (lambda x0:A. ((lambda x1:A. x1) x0))
nbe   : (lambda x0:A. x0)
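The eta-long behaviour is easiest to see by running reflect and reify directly on a free variable. The following is a self-contained sketch (everything it needs is redefined here; the representation is illustrative, not the module's).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arr:
    src: object
    dst: object

@dataclass(frozen=True)
class Var:
    idx: int

@dataclass(frozen=True)
class Lam:
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def reflect(ty, ne):
    """Turn a neutral (a function depth -> term) into a semantic value."""
    if isinstance(ty, Arr):
        return lambda v: reflect(ty.dst,
                                 lambda d: App(ne(d), reify(ty.src, v, d)))
    return ne

def reify(ty, v, depth):
    """Read a semantic value back as syntax, eta-expanding at arrows."""
    if isinstance(ty, Arr):
        fresh = reflect(ty.src, lambda d: Var(d - depth - 1))
        return Lam(ty.src, reify(ty.dst, v(fresh), depth + 1))
    return v(depth)

A = Base("A")
# A free variable f : A -> A at de Bruijn level 0, round-tripped:
f_sem = reflect(Arr(A, A), lambda d: Var(d - 1))
print(reify(Arr(A, A), f_sem, 1))
# a lambda applying Var(1) (i.e. f) to the fresh Var(0): the eta-long form (lambda x. f x)
```

Reifying the bare variable f at an arrow type does not return f itself; it returns the eta-expansion \(\lambda x.\, f\,x\), which is exactly the eta-long discipline described above.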

Reuse Note

All core definitions are reusable from advanced/stlc_nbe.py:

  • infer, eval_tm, reify, reflect, nbe

  • term/type constructors and printers.

You can import this module in other notebooks or scripts.

Exercise

  1. Define a compose term in STLC and normalize compose id id. Confirm the result is id.

  2. Build the eta-expansion of id (for example, \x:A. (id x)) and confirm that NbE gives it the same normal form as id itself.

  3. Use reflect and reify directly: reflect a neutral variable at a function type and reify it back to syntax.

  4. Construct an ill-typed application and observe the type checker error. Explain why the application is ill-typed. See solutions: Exercise 12 (Normalization by Evaluation)

Summary#

  • NbE computes beta-eta normal forms by evaluating into a semantic domain and then reifying back to syntax.

  • Neutrals represent stuck computations; normal forms are built from lambdas and neutrals.

  • reflect and reify are the bridge between syntax and semantics.

  • In STLC, NbE is a principled alternative to explicit beta-reduction on syntax.

  • Next: later chapters build on these ideas (dependent types, sequent calculi, control, and polymorphism).