Chapter 20
| Era | Milestone | Key Figures | What Was Solved |
|---|---|---|---|
| 1943 | Formal neuron | McCulloch & Pitts | Mathematical model of neurons |
| 1949 | Hebbian learning | Donald Hebb | First learning principle |
| 1958 | Perceptron | Frank Rosenblatt | First learning machine |
| 1969 | Limitations proved | Minsky & Papert | Rigorous impossibility results |
| 1986 | Backpropagation | Rumelhart, Hinton & Williams | Multi-layer learning |
| 1989 | Universal Approximation | Hornik, Stinchcombe & White | Universal representation |
Three questions drove this history:

- What can a network compute?
- How can it learn that computation from data?
- Are there fundamental limits?
| Property | M-P (1943) | Perceptron (1958) | MLP + Backprop (1986) |
|---|---|---|---|
| Learning | None | Perceptron rule | Backpropagation |
| Can learn? | No | Linearly separable fns | Any continuous fn (in principle) |
| XOR? | Yes (manual) | No | Yes |
| Theory | Boolean completeness | Convergence thm | UAT |
| Key limitation | No learning | Linear separability | Vanishing gradient |
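The table's central contrast can be demonstrated in a few lines of numpy. This is a minimal sketch, not any author's original code: `perceptron` implements Rosenblatt's update rule, and `mlp_xor` is a one-hidden-layer network trained by plain backpropagation (here on a cross-entropy loss; the hidden size, learning rate, and seed are illustrative choices). The perceptron fits AND, which is linearly separable, but can never classify more than three of the four XOR points; the MLP typically fits XOR exactly.

```python
import numpy as np

# Inputs and targets for the four Boolean cases.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0.0, 0.0, 0.0, 1.0])
y_xor = np.array([0.0, 1.0, 1.0, 0.0])

def perceptron(X, y, epochs=50, lr=1.0):
    """Rosenblatt's rule: nudge w by lr * (target - prediction) * x."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - float(w @ xi + b > 0)
            w, b = w + lr * err * xi, b + lr * err
    return (X @ w + b > 0).astype(float)

# AND is linearly separable, so the perceptron rule converges on it.
# XOR is not: no single linear threshold gets all four points right.
acc_and = (perceptron(X, y_and) == y_and).mean()
acc_xor = (perceptron(X, y_xor) == y_xor).mean()

def mlp_xor(X, y, hidden=8, epochs=5000, lr=1.0, seed=0):
    """One sigmoid hidden layer, trained by backprop on cross-entropy."""
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                # forward pass
        out = sig(h @ W2 + b2)
        d_out = out - y[:, None]            # dL/dz2 for cross-entropy loss
        d_h = (d_out @ W2.T) * h * (1 - h)  # chain rule back through layer 1
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    h = sig(X @ W1 + b1)
    return (sig(h @ W2 + b2).ravel() > 0.5).astype(float)

acc_mlp = (mlp_xor(X, y_xor) == y_xor).mean()
print(acc_and, acc_xor, acc_mlp)
```

Note that the XOR failure is not a matter of training time: non-separability bounds the perceptron's accuracy at 3/4 regardless of epochs, which is exactly the Minsky–Papert style of limitation the 1969 row refers to.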
The complete arc, from formal neuron to universal approximator, took 46 years (1943 to 1989). It required mathematicians, psychologists, physicists, and computer scientists, and it survived the AI winter.
Which classical problems (generalization, efficiency, biological plausibility) matter most for the future of AI?