Hands-on applets for Classical Foundations of Artificial Neural Networks
Part I — Origins (Chapters 1–3)
Toggle inputs, change thresholds, and watch logic gates compute in real time. See how AND, OR, NOT, NAND, and NOR emerge from a single neuron model.
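The single-neuron model behind this applet can be sketched as a threshold unit; the weight and threshold values below are one choice of many that realize each gate.

```python
import numpy as np

def neuron(x, w, theta):
    """McCulloch-Pitts unit: fire iff the weighted input sum reaches threshold theta."""
    return int(np.dot(w, x) >= theta)

# Illustrative weight/threshold choices; many others work.
gates = {
    "AND":  (np.array([1, 1]),   2),
    "OR":   (np.array([1, 1]),   1),
    "NAND": (np.array([-1, -1]), -1),
    "NOR":  (np.array([-1, -1]), 0),
}

for name, (w, theta) in gates.items():
    print(name, [neuron(np.array(p), w, theta) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# NOT needs only one input: w = [-1], theta = 0 maps 0 -> 1 and 1 -> 0.
```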
Chapter 1

Watch signals propagate through a multi-layer network that solves XOR. Animated pulse flow shows how the hidden layer transforms the problem.
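One standard hidden-layer construction (an OR unit and a NAND unit feeding an AND) solves XOR; the applet's own weights may differ, but the idea is the same.

```python
import numpy as np

def step(z):
    """Heaviside threshold activation."""
    return (np.asarray(z) >= 0).astype(int)

def xor_net(x1, x2):
    x = np.array([x1, x2])
    h_or   = step(x @ np.array([1, 1]) - 1)      # fires unless both inputs are 0
    h_nand = step(x @ np.array([-1, -1]) + 1)    # fires unless both inputs are 1
    return int(step(h_or + h_nand - 2))          # output unit: AND of the two

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 0]
```

The hidden layer remaps the four inputs so that the XOR-positive points become linearly separable, which a single output unit can then handle.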
Chapter 2

Drag-and-drop neurons, wire them together, and simulate step-by-step. Build logic gates, latches, counters, and finite automata from scratch.
Chapters 1–3

Part II — The Perceptron (Chapters 4–7)
Drag sliders to adjust weights and bias. See the decision boundary rotate and shift in real time. Understand the geometric meaning of the perceptron.
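The geometry the sliders expose fits in two lines: w is the normal vector of the boundary w·x + b = 0, and |b|/‖w‖ is its distance from the origin. A tiny sketch with example values:

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5     # example weights and bias
dist = abs(b) / np.linalg.norm(w)     # distance from origin to the line w.x + b = 0

x = np.array([1.0, 1.0])
side = np.sign(w @ x + b)             # +1 means x lies on the side w points toward
print(dist, side)
```

Dragging a weight slider rotates the boundary (it changes the normal w); dragging the bias slider translates it without rotating.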
Chapter 4

Draw data points, then train the perceptron step-by-step or in animated mode. Watch the decision boundary converge — or fail on XOR.
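The training loop behind this applet is the classic perceptron update rule; here is a minimal sketch on synthetic separable data (the data, seed, and margin filter are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
keep = np.abs(X.sum(axis=1)) > 0.3     # keep a visible gap between the classes
X = X[keep]
y = np.where(X.sum(axis=1) > 0, 1, -1)

w, b = np.zeros(2), 0.0
for epoch in range(200):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:     # misclassified (or exactly on the boundary)
            w += yi * xi               # rotate/shift the boundary toward xi
            b += yi
            mistakes += 1
    if mistakes == 0:                  # a clean pass: training has converged
        break

assert all(yi * (w @ xi + b) > 0 for xi, yi in zip(X, y))
```

On XOR-labeled points no such clean pass ever occurs, which is exactly the failure mode the applet lets you watch.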
Chapter 5

Adjust data spread R and margin γ to see how the convergence bound k ≤ R²/γ² changes. Visualize the squeeze proof in action.
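The bound can be checked numerically: run the (bias-free) perceptron on data with a known margin and compare its mistake count against R²/γ². The separator u and the margin filter below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 2))
u = np.array([1.0, 1.0]) / np.sqrt(2)      # a unit-norm separator, assumed known
y = np.where(X @ u > 0, 1, -1)
keep = np.abs(X @ u) > 0.2                 # enforce a real margin by dropping close points
X, y = X[keep], y[keep]

R = np.linalg.norm(X, axis=1).max()        # data spread: largest point norm
gamma = (y * (X @ u)).min()                # margin of u on the data
bound = (R / gamma) ** 2                   # Novikoff: total mistakes k <= R^2 / gamma^2

w, k = np.zeros(2), 0
done = False
while not done:
    done = True
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:
            w += yi * xi
            k += 1
            done = False

print(f"mistakes k = {k}, bound R^2/gamma^2 = {bound:.1f}")
```

Shrinking γ or growing R inflates the bound, which is the trade-off the applet's sliders make visible.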
Chapter 6

Explore all 16 two-input Boolean functions. Then drag 3 points to test shattering and understand VC dimension interactively.
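A brute-force check of which of the 16 truth tables a single threshold unit can realize (the weight grid is a coarse illustrative search, fine enough for 0/1 inputs):

```python
import itertools
import numpy as np

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
grid = np.arange(-2, 2.01, 0.5)            # coarse search over weights and bias

def separable(truth):
    """Is this truth table realized by some unit with output w1*x1 + w2*x2 + b > 0?"""
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(int(w1*x1 + w2*x2 + b > 0) == t for (x1, x2), t in zip(inputs, truth)):
            return True
    return False

tables = list(itertools.product([0, 1], repeat=4))   # all 16 two-input Boolean functions
count = sum(separable(t) for t in tables)
print(count)   # 14: every function except XOR and XNOR
```

The two failures, XOR and XNOR, are the same obstruction the shattering demo exposes: no line separates their positive and negative corners.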
Chapter 7

Part V — Backpropagation (Chapters 15–19)
Click any weight in a feedforward network to see its gradient derived step-by-step via BP1–BP3. 12 topologies from simple perceptrons to deep funnels.
Chapter 16

Full numerical walkthrough of ∂L/∂w in a [3,2,2,1] network. Every forward & backward step shown with color-coded BP1–BP4 derivations.
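The same walkthrough can be reproduced in a few lines: run a sigmoid [3,2,2,1] network forward, apply the BP equations backward, and verify one weight's gradient against a finite difference. Weights, input, and target below are random placeholders, and the loss is assumed quadratic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [3, 2, 2, 1]                                     # layer widths from the walkthrough
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]
x, target = rng.normal(size=3), np.array([0.5])

def forward():
    acts = [x]
    for W, b in zip(Ws, bs):
        acts.append(sigmoid(W @ acts[-1] + b))
    return acts, 0.5 * np.sum((acts[-1] - target) ** 2)  # quadratic loss (assumed)

acts, loss = forward()

# Backward pass: output delta (BP1), propagate it (BP2), read off dL/dW (BP4).
# Bias gradients (BP3) are just the deltas themselves.
delta = (acts[-1] - target) * acts[-1] * (1 - acts[-1])
grads = [None] * len(Ws)
for l in range(len(Ws) - 1, -1, -1):
    grads[l] = np.outer(delta, acts[l])
    if l > 0:
        delta = (Ws[l].T @ delta) * acts[l] * (1 - acts[l])

# Central finite difference on one weight as a sanity check.
eps = 1e-6
Ws[0][0, 0] += eps; loss_hi = forward()[1]
Ws[0][0, 0] -= 2 * eps; loss_lo = forward()[1]
Ws[0][0, 0] += eps
numeric = (loss_hi - loss_lo) / (2 * eps)
print(grads[0][0, 0], numeric)   # the two values should agree closely
```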
Chapters 16–18 · PDF version

Interactive Papers — Monico (2024): An Elementary UAT Proof
Adjust points x₀, x₁ and tolerance ϵ to see how σ(s+tx) separates them. Watch the 2×2 linear system solution update in real time.
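Assuming the construction the applet animates (hit σ(s + t·xᵢ) = ϵ and 1 − ϵ exactly by inverting the sigmoid), the 2×2 system solves in closed form; the points and tolerance below are example values.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def logit(p):   return np.log(p / (1.0 - p))     # inverse of the sigmoid

x0, x1, eps = 0.3, 0.9, 0.01                     # example points and tolerance

# Want sigma(s + t*x0) = eps and sigma(s + t*x1) = 1 - eps.
# Applying logit to both sides gives a 2x2 linear system in (s, t):
A = np.array([[1.0, x0],
              [1.0, x1]])
rhs = np.array([logit(eps), logit(1 - eps)])
s, t = np.linalg.solve(A, rhs)

print(sigmoid(s + t * x0), sigmoid(s + t * x1))  # ~eps and ~(1 - eps)
```

The system is invertible whenever x₀ ≠ x₁, which is why the applet can update the solution continuously as the points are dragged.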
Lemma 3.1 · Course notes

Drag a point x₀ around a compact set B and watch the N₂ separator build itself: open covers, finite subcovers, and summed sigmoids.
Lemma 3.2 · Course notes

Choose disjoint compact sets A and B, then visualize the N₃ separator as a heatmap. Toggle between raw h and squashed H to see double-squashing in action.
Lemma 3.3 · Course notes

Step through the proof of Theorem 3.4: pick a target, build an approximation, identify U⁺ and U⁻, apply Lemma 3.3, and watch the contradiction emerge.
Theorem 3.4 · Course notes

Watch a network build itself neuron-by-neuron to approximate any function. Weights shown with color; hover to see activations flow through the architecture.
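The neuron-by-neuron build can be imitated with a staircase of steep sigmoids, each new neuron adding one step of height f(xᵢ₊₁) − f(xᵢ); the target function, knot count, and steepness below are illustrative choices, not the applet's own.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

f = np.sin                                # target function (an example choice)
xs = np.linspace(0, np.pi, 400)
knots = np.linspace(0, np.pi, 30)         # one hidden neuron per knot
k = 200.0                                 # steepness: large k makes each sigmoid a near-step

# Add neurons one at a time: each contributes a steep sigmoid "step"
# whose height is the change in f across its knot.
approx = np.full_like(xs, f(knots[0]))
for c, h in zip(knots[1:], np.diff(f(knots))):
    approx += h * sigmoid(k * (xs - c))

print(np.max(np.abs(approx - f(xs))))     # sup error shrinks as the knot count grows
```

Doubling the number of knots roughly halves the step heights, which is the visual effect of the network growing one neuron at a time.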
Theorem 3.4 · Course notes