The perceptron learning algorithm updates weights only when it makes a mistake: w ← w + η(y − ŷ)x. For a false negative (a positive point predicted as negative), the boundary rotates toward the point; for a false positive, it rotates away. If the data is linearly separable, this process is guaranteed to converge to a separating line. Try XOR to see what happens when it isn't!
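The loop the demo animates can be sketched as follows. This is a minimal illustration, not the demo's actual source; the function and variable names (`predict`, `train`, `eta`) are made up for clarity:

```python
# Minimal perceptron sketch for 2-D points (illustrative names, not the demo's code).
def predict(w, b, x):
    # Step activation: positive side of the line w·x + b = 0 maps to class 1.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(points, labels, eta=0.1, max_epochs=100):
    w, b = [0.0, 0.0], 0.0
    for epoch in range(max_epochs):
        errors = 0
        for x, y in zip(points, labels):
            y_hat = predict(w, b, x)
            if y_hat != y:
                # Mistake: apply w <- w + eta*(y - y_hat)*x.
                delta = eta * (y - y_hat)
                w[0] += delta * x[0]
                w[1] += delta * x[1]
                b += delta  # bias behaves like a weight on a constant input of 1
                errors += 1
        if errors == 0:
            # A full epoch with no mistakes: a separating line was found.
            return w, b, epoch
    # Never converged (e.g. XOR) -- the loop just hits the epoch cap.
    return w, b, max_epochs
```

On a linearly separable preset such as AND, `train` stops early with zero errors; on XOR it runs until `max_epochs` and never settles.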

Errors per Epoch — live chart of the misclassification count per training epoch.

Controls — click the canvas to place points, or pick a preset. Readouts show the current weights w₁ and w₂, the bias b, the epoch counter, and the error count (all start at 0). Status: "Ready — draw points or pick a preset".
Case             y   ŷ   Update
Correct (pos)    1   1   None
Correct (neg)    0   0   None
False negative   1   0   w + ηx
False positive   0   1   w − ηx
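Each row of the table is one application of the same rule, since η(y − ŷ) is 0 on a correct prediction, +η on a false negative, and −η on a false positive. A small sketch (η, w, and x are arbitrary illustrative values, and `update` is a made-up helper, not part of the demo):

```python
# One update step per table row; delta = eta*(y - y_hat) covers all four cases.
def update(w, b, x, y, y_hat, eta=0.5):
    delta = eta * (y - y_hat)  # 0 if correct, +eta for FN, -eta for FP
    return [w[0] + delta * x[0], w[1] + delta * x[1]], b + delta

w, b = [1.0, -1.0], 0.0
x = (2.0, 3.0)
print(update(w, b, x, 1, 1))  # correct (pos): no change -> ([1.0, -1.0], 0.0)
print(update(w, b, x, 1, 0))  # false negative, w + ηx  -> ([2.0, 0.5], 0.5)
print(update(w, b, x, 0, 1))  # false positive, w − ηx  -> ([0.0, -2.5], -0.5)
```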