Hands-on applets for Classical Foundations of Artificial Neural Networks
← Back to course notes · Presentation slides →

Part I — Origins (Chapters 1–3)
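All the gate applets in this part drive the same primitive: a unit that fires when its weighted input sum reaches a threshold. A minimal sketch in Python (the weights and thresholds below are illustrative choices, not the applets' presets):

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# One and the same unit; only weights and threshold change per gate.
AND  = lambda a, b: neuron([a, b], [1, 1], 2)
OR   = lambda a, b: neuron([a, b], [1, 1], 1)
NOT  = lambda a:    neuron([a],    [-1],   0)
NAND = lambda a, b: neuron([a, b], [-1, -1], -1)
NOR  = lambda a, b: neuron([a, b], [-1, -1], 0)
```

XOR is conspicuously absent from this list: no single choice of weights and threshold produces it, which is exactly what the multi-layer XOR applet goes on to demonstrate.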
Toggle inputs, change thresholds, and watch logic gates compute in real time. See how AND, OR, NOT, NAND, and NOR emerge from a single neuron model. (Chapter 1)

Watch signals propagate through a multi-layer network that solves XOR. Animated pulse flow shows how the hidden layer transforms the problem. (Chapter 2)

Drag-and-drop neurons, wire them together, and simulate step-by-step. Build logic gates, latches, counters, and finite automata from scratch. (Chapters 1–3)

Part II — The Perceptron (Chapters 4–7)
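The applets in this part all manipulate the same two objects, a weight vector w and a bias b, with decision rule sign(w·x + b). A minimal sketch of the update rule the trainer animates (data, labels, and epoch count are illustrative):

```python
def predict(w, b, x):
    """Classify x by the sign of the affine score w·x + b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def train(points, labels, epochs=20):
    """Classic perceptron rule: on a mistake, nudge (w, b) toward the example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            if predict(w, b, x) != y:                 # mistake -> update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Linearly separable toy data (AND-like), so the rule converges.
pts  = [(0, 0), (0, 1), (1, 0), (1, 1)]
lbls = [-1, -1, -1, 1]
w, b = train(pts, lbls)
```

On XOR-labeled points the same loop cycles forever, which is the failure mode the Chapter 5 applet lets you watch.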
Drag sliders to adjust weights and bias. See the decision boundary rotate and shift in real time. Understand the geometric meaning of the perceptron. (Chapter 4)

Draw data points, then train the perceptron step-by-step or in animated mode. Watch the decision boundary converge — or fail on XOR. (Chapter 5)

Adjust data spread R and margin γ to see how the convergence bound k ≤ R²/γ² changes. Visualize the squeeze proof in action. (Chapter 6)

Explore all 16 two-input Boolean functions. Then drag 3 points to test shattering and understand VC dimension interactively. (Chapter 7)

Part V — Backpropagation (Chapters 15–19)
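The BP1–BP4 equations these applets animate reduce, for a tiny network, to a few lines of chain rule. A minimal sketch with one sigmoid hidden unit and squared-error loss (a scalar 1-1-1 chain for readability, not the applet's [3,2,2,1] example):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)          # hidden activation
    y = sigmoid(w2 * h)          # output activation
    return h, y

def grads(x, t, w1, w2):
    """Backprop for L = (y - t)^2 / 2: error signals flow output -> input."""
    h, y = forward(x, w1, w2)
    delta_out = (y - t) * y * (1 - y)         # error at the output unit
    dw2 = delta_out * h                       # weight gradient = delta * input
    delta_hid = delta_out * w2 * h * (1 - h)  # error propagated to hidden unit
    dw1 = delta_hid * x
    return dw1, dw2
```

Each `delta` is the error signal at one layer; the full BP1–BP4 organize the same chain rule over whole vectors of neurons.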
Click any weight in a feedforward network to see its gradient derived step-by-step via BP1–BP3. 12 topologies from simple perceptrons to deep funnels. (Chapter 16)

Full numerical walkthrough of ∂L/∂w in a [3,2,2,1] network. Every forward & backward step shown with color-coded BP1–BP4 derivations. (Chapters 16–18 · PDF version)

Part VII — Convolutional Neural Networks (Chapters 21–25)

Part VIII — Modern Optimization (Chapters 26–28)

Part X — Recurrent Neural Networks & LSTM (Chapters 32–36)
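The unfolding shown in the Chapter 32 applet is nothing more than one recurrence applied over and over: h_t = tanh(w_x·x_t + w_h·h_{t−1} + b). A minimal scalar sketch (the weights are illustrative):

```python
import math

def rnn_steps(xs, wx=0.8, wh=0.5, b=0.0):
    """Scalar vanilla RNN: the same cell reused at every time step."""
    h, history = 0.0, []
    for x in xs:
        h = math.tanh(wx * x + wh * h + b)  # new state mixes input and old state
        history.append(h)
    return history

states = rnn_steps([1.0, 0.0, -1.0, 0.0])
```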
Step through a sequence and watch the hidden state evolve. See the folded RNN cell unfold into a chain through time. (Chapter 32)

Watch forget, input, and output gates open and close as the LSTM processes a sequence. See the cell state accumulate information. (Chapter 34)

Part X — Train Your Own (TF.js in the Browser)
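Under the hood of most applets in this part sits the Chapter 34 LSTM step: three sigmoid gates deciding what the cell state keeps, admits, and exposes. A minimal scalar sketch (the parameter layout `p` and all values are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One scalar LSTM step; p maps gate name -> (w_x, w_h, bias)."""
    gate = lambda k: sigmoid(p[k][0] * x + p[k][1] * h + p[k][2])
    f = gate("f")                                  # forget: keep old cell state?
    i = gate("i")                                  # input: admit new candidate?
    o = gate("o")                                  # output: expose cell state?
    g = math.tanh(p["g"][0] * x + p["g"][1] * h + p["g"][2])  # candidate value
    c = f * c + i * g                              # cell state accumulates
    h = o * math.tanh(c)                           # hidden state: gated readout
    return h, c
```

With the forget gate saturated open and the input gate shut, the cell state passes through unchanged, which is the "accumulate information" behavior the gate animation shows.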
Paste any text (Polish poetry, Python, ABC music notation), train a char-RNN in your browser, watch hidden-state activations as it generates new text. (Chapter 35)

Tap a melody on the on-screen piano, train an LSTM, hear what it composes. Compare LSTM vs vanilla RNN on the same sequence. (Chapter 34)

Draw a shape, train an LSTM on stroke deltas, watch the network sketch new variations on its own. Inspired by Google's Sketch-RNN. (Chapter 36)

Train an LSTM on ASCII art (smileys, cats, houses) and watch it generate new pictures row-by-row in a terminal-green output. (Chapter 35)

Program 16-step beats on a step sequencer, train an LSTM to learn your groove, generate new patterns. Real Web Audio drums. (Chapter 35)

Part XI — Attention & Transformers (Chapters 37–40)
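Every visualization in this part renders the same computation: weights softmax(q·k/√d) used to average value vectors. A minimal single-query sketch, including the √d scaling whose removal the Chapter 38 applet lets you feel (the vectors are illustrative):

```python
import math

def softmax(scores):
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(q, keys, values):
    """One query attends over all keys; output averages the value vectors."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    w = softmax(scores)
    out = [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]
    return out, w
```

Without the /√d, dot products of d-dimensional vectors grow in variance with d, so the softmax saturates toward a one-hot; that collapse is exactly what the dimension slider makes tactile.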
Type any string and watch the attention map align outputs to inputs. Reverse, copy, shift — see the bottleneck dissolve in real time. (Chapter 37)

Drag the dimension slider and watch the unscaled softmax collapse into a one-hot. The variance derivation made tactile. (Chapter 38)

Click any token in a sentence to see what self-attention weights it assigns to every other token. Compare 4 heads — recency, similarity, position, distance. (Chapter 39)

Browse a 2-layer × 4-head Transformer's attention matrices: cross, encoder-self, decoder-self. See what each head learned. (Chapter 40)

Self-Assessment

Interactive Papers — Monico (2024): An Elementary UAT Proof
Adjust points x₀, x₁ and tolerance ϵ to see how σ(s + tx) separates them. Watch the 2×2 linear system solution update in real time. (Lemma 3.1 · Course notes)

Drag a point x₀ around a compact set B and watch the N₂ separator build itself: open covers, finite subcovers, and summed sigmoids. (Lemma 3.2 · Course notes)

Choose disjoint compact sets A and B, then visualize the N₃ separator as a heatmap. Toggle between raw h and squashed H to see double-squashing in action. (Lemma 3.3 · Course notes)

Step through the proof of Theorem 3.4: pick a target, build an approximation, identify U⁺ and U⁻, apply Lemma 3.3, and watch the contradiction emerge. (Theorem 3.4 · Course notes)

Watch a network build itself neuron-by-neuron to approximate any function. Weights shown with color; hover to see activations flow through the architecture. (Theorem 3.4 · Course notes)
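The 2×2 system in the Lemma 3.1 applet can be reproduced directly: require σ(s + t·x₀) = ϵ and σ(s + t·x₁) = 1 − ϵ, apply the inverse sigmoid, and the constraints become linear in s and t. A minimal sketch (points and tolerance are illustrative, and this mirrors the applet's display rather than quoting Monico's notation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))      # inverse of the sigmoid

def separator(x0, x1, eps):
    """Solve s + t*x0 = logit(eps), s + t*x1 = logit(1 - eps) for (s, t)."""
    a, b = logit(eps), logit(1.0 - eps)
    t = (b - a) / (x1 - x0)
    s = a - t * x0
    return s, t

s, t = separator(x0=-1.0, x1=2.0, eps=0.1)
```

By construction σ(s + t·x₀) ≤ ϵ and σ(s + t·x₁) ≥ 1 − ϵ, which is the separation the applet shows as you drag the points.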