RNN Text Lab

Paste text, train a small RNN, watch it learn to write.

Pick a preset or paste your own text, then train a character-level LSTM directly in your browser. The hidden-state heatmap shows which neurons activate as the network generates new characters — a glimpse into what the network actually learned.

Input Text

Hyperparameters

Generation

Training Loss

Hidden State Activations

Rows = neurons, columns = generated characters. Blue = inhibited, red = excited.

Top-3 Most Active Neurons

Generated Text

What's happening under the hood?

A character-level RNN reads your text one character at a time, building up a hidden state that summarizes "everything I've seen so far." During training, it learns to predict the next character from this hidden state. To generate, we feed it a seed, sample from its predictions, append the result, and repeat.
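The feed-sample-append loop above can be sketched in a few lines. This is a minimal illustration, not the applet's code: a simple bigram count model stands in for the trained LSTM (the real network's hidden state summarizes the whole history, not just the last character), but the generation loop itself is the same.

```python
import random
from collections import Counter, defaultdict

# Hypothetical stand-in for the trained LSTM: a bigram count model that,
# like the RNN's output layer, yields a next-character distribution.
def fit_bigrams(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample_next(counts, ch, rng):
    dist = counts.get(ch)
    if not dist:  # unseen context: fall back to a random known character
        ch = rng.choice(list(counts))
        dist = counts[ch]
    chars, weights = zip(*dist.items())
    return rng.choices(chars, weights=weights)[0]

def generate(text, seed, n, rng=None):
    rng = rng or random.Random(0)
    counts = fit_bigrams(text)
    out = seed
    for _ in range(n):  # feed the last character, sample, append, repeat
        out += sample_next(counts, out[-1], rng)
    return out

print(generate("hello world, hello there", "he", 20))
```

Swapping the bigram table for an LSTM changes only how the next-character distribution is computed; the sampling loop is unchanged.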

Temperature controls randomness: T=0.1 makes sampling nearly deterministic (and often repetitive), T=1.0 samples from the learned distribution as-is, T=2.0 flattens it toward chaos and surprise.
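Concretely, temperature divides the network's logits before the softmax. A minimal sketch (function and variable names are illustrative, not taken from the applet):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    # Scale logits by 1/T before softmax: low T sharpens the distribution
    # toward the argmax, high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]
# At T=0.1 the top logit dominates, so nearly every sample is index 0.
picks = [sample_with_temperature(logits, 0.1) for _ in range(100)]
print(picks.count(0))
```

At T=1.0 the same logits give a much softer distribution, and at higher temperatures the choices approach uniform randomness.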

Inspired by Andrej Karpathy's "The Unreasonable Effectiveness of Recurrent Neural Networks" (2015).
