Pick a preset or paste your own text, then train a character-level LSTM directly in your browser. The hidden-state heatmap shows which neurons activate as the network generates new characters — a glimpse into what the network actually learned.
Input Text
Hyperparameters
Generation
Training Loss
Hidden State Activations
Rows = neurons, columns = generated characters. Blue = inhibited, red = excited.
Top-3 Most Active Neurons
Train, then generate to see which characters each neuron fires for.
Generated Text
What's happening under the hood?
A character-level RNN reads your text one character at a time, building up a hidden state that summarizes "everything I've seen so far." During training, it learns to predict the next character from this hidden state. To generate, we feed it a seed, sample from its predictions, append the result, and repeat.
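A minimal sketch of that generation loop, assuming a hypothetical trained model exposing a forward(charIndex, state) step that returns next-character probabilities and the updated hidden state (the helper names here are illustrative, not the demo's actual code):

```ts
// Sketch of the sampling loop: warm up on the seed, then repeatedly
// sample a character, append it, and feed it back into the network.
interface StepModel {
  forward(ix: number, state: number[]): { probs: number[]; state: number[] };
}

function generate(
  model: StepModel,
  charToIx: Map<string, number>,
  ixToChar: Map<number, string>,
  seed: string,
  length: number
): string {
  let state: number[] = []; // hidden state starts zeroed/empty
  let probs: number[] = [];
  let out = seed;

  // Feed the seed characters to build up the hidden state.
  for (const ch of seed) {
    ({ probs, state } = model.forward(charToIx.get(ch)!, state));
  }

  // Sample, append, repeat.
  for (let i = 0; i < length; i++) {
    const ix = sampleFromDistribution(probs);
    out += ixToChar.get(ix)!;
    ({ probs, state } = model.forward(ix, state));
  }
  return out;
}

// Draw an index from a discrete probability distribution.
function sampleFromDistribution(probs: number[]): number {
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}
```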
Temperature controls randomness: T=0.1 makes the output nearly deterministic and repetitive, T=1.0 samples from the learned distribution as-is, and T=2.0 adds chaos and surprise.
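One common way to apply temperature, shown here as a sketch rather than the demo's exact code, is to divide the logits by T before the softmax: low T sharpens the distribution toward the most likely character, high T flattens it toward uniform.

```ts
// Convert raw logits to probabilities, scaled by temperature.
// Low T -> peaked (near-greedy) distribution; high T -> flatter, more random.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const maxLogit = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxLogit));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}
```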
Inspired by Andrej Karpathy's "The Unreasonable Effectiveness of Recurrent Neural Networks" (2015).