Working System
This chapter uses reusable code from book/advanced/stlc_nbe.py.
The notebook is organized into small chunks: syntax, typing, semantic evaluation, and NbE normalization.
12. Normalization by Evaluation (NbE)#
Goal. Explain and implement NbE with enough mathematical context to connect it to proof normalization.
For simply typed lambda calculus (STLC), NbE computes beta-eta normal forms by:

1. evaluating a term semantically,
2. reifying semantic values back to syntax.
You will learn
- The semantic idea behind NbE: compute normal forms via evaluation, not by rewriting syntax
- The roles of neutrals and normal forms in STLC
- The mutually recursive maps `reflect` and `reify`
- How NbE produces beta-eta normal forms (including eta-long structure)
- How to use the reusable `advanced/stlc_nbe.py` prototype
Roadmap#
1. State the semantic setup (types, neutrals, normal forms).
2. Implement STLC syntax and a type checker (provided by `advanced/stlc_nbe.py`).
3. Implement semantic evaluation and the `reflect`/`reify` bridge.
4. Run NbE on small and composite examples.
5. Inspect eta behavior and connect to proof normalization.
Mathematical Background#
NbE is easiest to understand if you separate three things:
1. Syntax: terms you write (lambda terms).
2. Semantics: values produced by evaluation (functions, or stuck computations).
3. Normal forms: a canonical syntactic representative after normalization.
We write \(\den{A}\) (also spelled \(\lbrack\!\lbrack A \rbrack\!\rbrack\)) for the semantic interpretation of a type \(A\).
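Concretely, one standard choice for STLC (assumed here for illustration; the prototype may differ in representation details) interprets base types by stuck computations and arrow types by the host language's function space:

\[
\den{\alpha} \;=\; \{\text{neutral terms of type } \alpha\},
\qquad
\den{A \to B} \;=\; \den{A} \to \den{B}.
\]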
The key syntactic classes are neutrals and normal forms. Intuitively:
- A neutral is a computation that is stuck because it is waiting for a variable.
- A normal form is either a lambda (at function type) or a neutral (at base type).
For a base type \(\alpha\), neutrals look like \(x\, v_1 \cdots v_k\) (a variable applied to some normal-form arguments), and normal forms at \(\alpha\) are exactly these neutrals.
NbE uses two mutually recursive maps for each type \(A\):

- \(\mathrm{reflect}_A : \mathrm{Ne} \to \den{A}\), embedding a neutral term into the semantics;
- \(\mathrm{reify}_A : \den{A} \to \mathrm{Nf}\), reading a semantic value back as a normal form.
Given a well-typed term \(\Gamma \vdash t : A\), define

\[
\mathrm{nbe}(t) \;=\; \mathrm{reify}_A\bigl(\den{t}\bigr).
\]
In other words: evaluate to a semantic value, then reify back to a normal form.
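To make the evaluate-then-reify pipeline concrete, here is a self-contained miniature NbE in plain Python. It is an illustrative sketch, independent of `advanced/stlc_nbe.py`: all classes and functions are defined locally (their names only mirror the prototype's style), and semantic neutrals are represented as level-indexed syntax builders, which is one standard design choice among several.

```python
from dataclasses import dataclass

# ---- Types ----
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arr:
    src: object
    tgt: object

# ---- Terms (de Bruijn indices) ----
@dataclass(frozen=True)
class TVar:
    idx: int

@dataclass(frozen=True)
class TLam:
    ty: object
    body: object

@dataclass(frozen=True)
class TApp:
    fn: object
    arg: object

# ---- Normal/neutral syntax (de Bruijn levels for fresh variables) ----
@dataclass(frozen=True)
class NVar:
    lvl: int

@dataclass(frozen=True)
class NApp:
    fn: object
    arg: object

@dataclass(frozen=True)
class NLam:
    body: object

def eval_tm(t, env):
    """Evaluate into the semantics: lambdas become Python closures."""
    if isinstance(t, TVar):
        return env[t.idx]
    if isinstance(t, TLam):
        return lambda v: eval_tm(t.body, [v] + env)
    return eval_tm(t.fn, env)(eval_tm(t.arg, env))

def reflect(ty, ne):
    """Embed a delayed neutral (a map from level to syntax) at type ty."""
    if isinstance(ty, Arr):
        return lambda v: reflect(
            ty.tgt, lambda lvl: NApp(ne(lvl), reify(ty.src, v, lvl)))
    return ne  # at base type, the value *is* the delayed neutral

def reify(ty, val, lvl):
    """Read a semantic value back as an eta-long normal form."""
    if isinstance(ty, Arr):
        fresh = reflect(ty.src, lambda _lvl: NVar(lvl))  # fresh bound variable
        return NLam(reify(ty.tgt, val(fresh), lvl + 1))
    return val(lvl)  # base type: force the delayed neutral

def nbe(t, ty):
    return reify(ty, eval_tm(t, []), 0)

# ((const id) id) normalizes to id in one pass, with no rewriting trace.
A = Base('A')
X = Arr(A, A)
id_tm = TLam(A, TVar(0))
const_X = TLam(X, TLam(X, TVar(1)))
print(nbe(TApp(TApp(const_X, id_tm), id_tm), X))  # NLam(body=NVar(lvl=0))
```

The point to notice: no intermediate term is ever built; application happens in Python, and syntax is reconstructed exactly once by `reify`.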
Why This Is Better Than Naive Rewriting#
A normalizer based on syntactic rewriting performs many small beta steps and repeatedly rebuilds terms. That is conceptually simple but operationally noisy: it produces long traces and (in richer systems) can duplicate subterms.
NbE follows a different pattern: evaluate into a semantic domain first, then reify the result back to syntax.
You can think of it as using the host language (here Python closures) as the semantic function space, and only rebuilding syntax once at the end.
For STLC, this produces a beta-eta long normal form (eta-long shape at function types).
from pathlib import Path
import sys

# Locate the book root so that `advanced/stlc_nbe.py` is importable.
book_root = Path.cwd()
if not (book_root / 'advanced').exists():
    book_root = book_root.parent
if str(book_root) not in sys.path:
    sys.path.insert(0, str(book_root))

from advanced import stlc_nbe as nbe
Syntax and Type Constructors#
The prototype in advanced/stlc_nbe.py uses a tiny STLC:
- base types `Base("A")`, `Base("B")`, …
- arrow types `Arr(A, B)`
Terms are typed lambdas with de Bruijn indices:
- `TVar(0)` refers to the nearest binder
- `TVar(1)` refers to the next-outer binder
This avoids alpha-renaming machinery, but it is worth reading a few concrete examples.
A = nbe.Base('A')
B = nbe.Base('B')
id_tm = nbe.TLam(A, nbe.TVar(0))
const_tm = nbe.TLam(A, nbe.TLam(B, nbe.TVar(1)))
print('id =', nbe.show_tm(id_tm))
print('const=', nbe.show_tm(const_tm))
# A small de Bruijn example: (lambda x:A. (lambda y:B. x))
example = nbe.TLam(A, nbe.TLam(B, nbe.TVar(1)))
print('example =', nbe.show_tm(example))
id = (lambda x0:A. x0)
const= (lambda x0:A. (lambda x1:B. x0))
example = (lambda x0:A. (lambda x1:B. x0))
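To make the de Bruijn examples easier to read, here is a hypothetical sketch of how a printer in the style of `show_tm` could invent the names `x0`, `x1`, … from binder depth. Everything here is defined locally for illustration; the actual printer in `advanced/stlc_nbe.py` may be implemented differently.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arr:
    src: object
    tgt: object

@dataclass(frozen=True)
class TVar:
    idx: int

@dataclass(frozen=True)
class TLam:
    ty: object
    body: object

@dataclass(frozen=True)
class TApp:
    fn: object
    arg: object

def show_ty(ty):
    if isinstance(ty, Base):
        return ty.name
    return f'({show_ty(ty.src)} -> {show_ty(ty.tgt)})'

def show_tm(t, ctx=()):
    """ctx[0] names the nearest binder; fresh names count binders seen so far."""
    if isinstance(t, TVar):
        return ctx[t.idx]
    if isinstance(t, TLam):
        name = f'x{len(ctx)}'
        return f'(lambda {name}:{show_ty(t.ty)}. {show_tm(t.body, (name,) + ctx)})'
    return f'({show_tm(t.fn, ctx)} {show_tm(t.arg, ctx)})'

A, B = Base('A'), Base('B')
const_tm = TLam(A, TLam(B, TVar(1)))
print(show_tm(const_tm))  # (lambda x0:A. (lambda x1:B. x0))
```

Because names are derived from depth, the same index (`TVar(1)`) prints differently depending on how many binders enclose it, which is exactly the behavior shown in the outputs above.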
Typing Judgments#
In STLC, the typing rules are:

\[
\frac{\Gamma(x) = A}{\Gamma \vdash x : A}
\qquad
\frac{\Gamma,\, x{:}A \vdash t : B}{\Gamma \vdash \lambda x{:}A.\, t : A \to B}
\qquad
\frac{\Gamma \vdash t : A \to B \quad \Gamma \vdash u : A}{\Gamma \vdash t\, u : B}
\]
The function infer(term, ctx_types) checks these algorithmically (and fails with an exception on mismatch).
print('type(id) =', nbe.show_ty(nbe.infer(id_tm, [])))
print('type(const)=', nbe.show_ty(nbe.infer(const_tm, [])))
assert nbe.infer(id_tm, []) == nbe.Arr(A, A)
assert nbe.infer(const_tm, []) == nbe.Arr(A, nbe.Arr(B, A))
type(id) = (A -> A)
type(const)= (A -> (B -> A))
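The three typing rules translate almost line-for-line into a recursive checker. The following is a self-contained sketch with locally defined datatypes; the real `infer` in `advanced/stlc_nbe.py` may differ in details such as error reporting.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base: name: str

@dataclass(frozen=True)
class Arr: src: object; tgt: object

@dataclass(frozen=True)
class TVar: idx: int

@dataclass(frozen=True)
class TLam: ty: object; body: object

@dataclass(frozen=True)
class TApp: fn: object; arg: object

def infer(t, ctx):
    """Algorithmic typing; ctx[0] is the type of the nearest binder."""
    if isinstance(t, TVar):
        return ctx[t.idx]                               # variable rule: lookup
    if isinstance(t, TLam):
        return Arr(t.ty, infer(t.body, [t.ty] + ctx))   # abstraction rule
    fn_ty = infer(t.fn, ctx)                            # application rule
    arg_ty = infer(t.arg, ctx)
    if not (isinstance(fn_ty, Arr) and fn_ty.src == arg_ty):
        raise TypeError(f'cannot apply {fn_ty} to {arg_ty}')
    return fn_ty.tgt

A, B = Base('A'), Base('B')
const_tm = TLam(A, TLam(B, TVar(1)))
print(infer(const_tm, []))
```

Note how the application case is the only one that can fail: it is where the premise \(\Gamma \vdash u : A\) is actually checked against the function's domain.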
NbE on a Composite Term (and Comparison to Beta Steps)#
We’ll normalize a composite term and compare two routes:
Naive beta rewriting on syntax (produces a trace).
NbE (evaluate then reify).
Both should end at the same normal form.
# Build the term ((const id) id) in a way that keeps types explicit.
X = nbe.Arr(A, A)
const_on_X = nbe.TLam(X, nbe.TLam(X, nbe.TVar(1))) # const specialized to X
term = nbe.TApp(nbe.TApp(const_on_X, id_tm), id_tm)
print('term =', nbe.show_tm(term))
print('term type =', nbe.show_ty(nbe.infer(term, [])))
# Route 1: naive beta rewriting trace on syntax
trace = nbe.beta_trace(term, max_steps=10)
for i, t in enumerate(trace):
    print(f'beta t{i} = {nbe.show_tm(t)}')
# Route 2: NbE
nf_term = nbe.nbe(term, X)
print('nbe nf =', nbe.show_tm(nf_term))
term = (((lambda x0:(A -> A). (lambda x1:(A -> A). x0)) (lambda x0:A. x0)) (lambda x0:A. x0))
term type = (A -> A)
beta t0 = (((lambda x0:(A -> A). (lambda x1:(A -> A). x0)) (lambda x0:A. x0)) (lambda x0:A. x0))
beta t1 = ((lambda x0:(A -> A). (lambda x1:A. x1)) (lambda x0:A. x0))
beta t2 = (lambda x0:A. x0)
nbe nf = (lambda x0:A. x0)
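For contrast, here is what the naive route involves under the hood: a self-contained sketch of single-step beta reduction with de Bruijn shifting and substitution. This is illustrative only; the prototype's `beta_trace` may be implemented differently.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar: idx: int

@dataclass(frozen=True)
class TLam: ty: object; body: object

@dataclass(frozen=True)
class TApp: fn: object; arg: object

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if isinstance(t, TVar):
        return TVar(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, TLam):
        return TLam(t.ty, shift(t.body, d, cutoff + 1))
    return TApp(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t (capture-avoiding via shift)."""
    if isinstance(t, TVar):
        return s if t.idx == j else t
    if isinstance(t, TLam):
        return TLam(t.ty, subst(t.body, j + 1, shift(s, 1)))
    return TApp(subst(t.fn, j, s), subst(t.arg, j, s))

def step(t):
    """One leftmost-outermost beta step, or None if t has no beta redex."""
    if isinstance(t, TApp) and isinstance(t.fn, TLam):
        return shift(subst(t.fn.body, 0, shift(t.arg, 1)), -1)
    if isinstance(t, TApp):
        r = step(t.fn)
        if r is not None:
            return TApp(r, t.arg)
        r = step(t.arg)
        return TApp(t.fn, r) if r is not None else None
    if isinstance(t, TLam):
        r = step(t.body)
        return TLam(t.ty, r) if r is not None else None
    return None

def beta_trace(t, max_steps=50):
    """Rewrite step by step, keeping the whole trace of intermediate terms."""
    trace = [t]
    for _ in range(max_steps):
        nxt = step(trace[-1])
        if nxt is None:
            break
        trace.append(nxt)
    return trace

idt = TLam('A', TVar(0))                       # types kept as plain strings here
const_X = TLam('A->A', TLam('A->A', TVar(1)))
tr = beta_trace(TApp(TApp(const_X, idt), idt))
print(len(tr), tr[-1] == idt)  # 3 True
```

Every step rebuilds the entire term and reshuffles indices, which is exactly the bookkeeping that NbE sidesteps by doing application in the host language.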
Open Terms, Neutrals, and Eta-Long Shape#
A key reason NbE talks about neutrals is that evaluation may encounter a variable.
For example, in a context \(\Gamma = (f : A\to A)\), the term \(f\) is already beta-normal,
but NbE will still produce a canonical eta-long representative at function type, namely \(\lambda x{:}A.\, f\,x\).
The helper nbe_open(term, ty, ctx_types) performs NbE relative to a typing context.
A = nbe.Base('A')
X = nbe.Arr(A, A)
# In context [X], the variable TVar(0) has type X.
ctx_tys = [X]
f = nbe.TVar(0)
eta_long = nbe.nbe_open(f, X, ctx_tys)
print('input (open) =', nbe.show_tm(f, ctx=['f']))
print('nbe_open =', nbe.show_tm(eta_long, ctx=['f']))
# You can also see reflect/reify directly at a function type.
neu = nbe.NVar(0)
val = nbe.reflect(X, neu, lvl=1)
back = nbe.reify(X, val, lvl=1)
print('reflect+reify=', nbe.show_tm(back, ctx=['f']))
input (open) = f
nbe_open = (lambda x1:A. (f x1))
reflect+reify= (lambda x1:A. (f x1))
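The eta-expansion of an open variable can be reproduced in isolation with a miniature, locally defined `reflect`/`reify` pair. This is an illustrative sketch: the representation of semantic neutrals as level-indexed syntax builders is an assumption of this example, not necessarily the prototype's actual design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base: name: str

@dataclass(frozen=True)
class Arr: src: object; tgt: object

@dataclass(frozen=True)
class NVar: lvl: int

@dataclass(frozen=True)
class NApp: fn: object; arg: object

@dataclass(frozen=True)
class NLam: body: object

def reflect(ty, ne):
    """Embed a delayed neutral (level -> syntax) into the semantics."""
    if isinstance(ty, Arr):
        return lambda v: reflect(
            ty.tgt, lambda lvl: NApp(ne(lvl), reify(ty.src, v, lvl)))
    return ne

def reify(ty, val, lvl):
    """Read back a value; at arrow type this inserts the eta-expanding lambda."""
    if isinstance(ty, Arr):
        fresh = reflect(ty.src, lambda _lvl: NVar(lvl))
        return NLam(reify(ty.tgt, val(fresh), lvl + 1))
    return val(lvl)

A = Base('A')
X = Arr(A, A)
# f is the free variable at level 0; reflect then reify eta-expands it.
f_val = reflect(X, lambda _lvl: NVar(0))
print(reify(X, f_val, 1))  # NLam(body=NApp(fn=NVar(lvl=0), arg=NVar(lvl=1)))
```

The result is the syntax tree of \(\lambda x.\, f\, x\): the lambda comes from `reify` at the arrow type, and the application `f x` comes from `reflect` recording the argument.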
Reuse Note
All core definitions are reusable from advanced/stlc_nbe.py:
- typing: `infer`
- semantics: `eval_tm`, `reflect`, `reify`
- NbE: `nbe` (closed terms) and `nbe_open` (open terms)
- optional comparison: `beta_trace` (naive beta rewriting on syntax)
The notebook focuses on how to read these pieces mathematically and how to experiment with them.
Exercise
1. Define a `compose` term in STLC and normalize `compose id id`. Confirm the result is extensionally `id`.
2. Use `beta_trace` on a composite term (for example `((const id) id)`) and compare its last element to `nbe(...)`.
3. Use `nbe_open` to normalize an open variable `f` at function type `A -> A` in context `[A -> A]`. Observe the eta-long lambda it produces.
4. Construct an ill-typed application and observe the type checker error. Explain which typing rule fails.

See solutions: Exercise 12 (Normalization by Evaluation)
Summary#
- NbE computes canonical normal forms by evaluating into a semantic domain and then reifying back to syntax.
- Neutrals represent stuck computations; they are essential for normalizing open terms.
- `reflect` and `reify` form the bridge between syntax and semantics (and enforce eta-long shape at arrows).
- A naive beta reducer produces long traces; NbE reaches the same normal form by a different route.
Next: Chapter 13 generalizes the idea of types-as-propositions further via dependent types (Π/Σ).