Program a melody on the on-screen piano (or pick a preset), then train an LSTM on the note sequence. The trained network will generate continuations that you can play back. Compare with a vanilla RNN to see why gates matter for music.
1. Piano Input — Compose Your Melody
Recorded sequence (0 notes)
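Before training, the recorded melody has to become a sequence of integer tokens the network can consume. A minimal sketch, assuming a simple name-to-index mapping over one octave (the mapping is illustrative, not the demo's internal encoding):

```python
# Map piano keys to integer indices so the network sees a token sequence.
# This mapping is illustrative; the demo's internal encoding may differ.
NOTE_TO_INDEX = {name: i for i, name in enumerate(
    ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"])}

def encode_melody(notes):
    """Turn a list of note names into the integer sequence fed to the LSTM."""
    return [NOTE_TO_INDEX[n] for n in notes]

melody = ["C", "E", "G", "E", "C"]  # a short recorded phrase
print(encode_melody(melody))        # -> [0, 4, 7, 4, 0]
```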
2. Train the Network
Loss Curves
3. LSTM Gates (during playback)
Train the LSTM, then play the generated melody to see the gates pulse.
4. Generated Melody
Generated sequence
Piano roll
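Playback is autoregressive: each generated note is appended to the history and fed back as the next input. A sketch of that loop, where `predict_next` is a hypothetical stand-in for the trained LSTM's next-note prediction (here it just cycles a C-major arpeggio for illustration):

```python
# `predict_next` is a hypothetical placeholder for the trained network;
# it cycles a C-major arpeggio so the loop has something deterministic to do.
def predict_next(history):
    arpeggio = [0, 4, 7]  # C, E, G as note indices
    return arpeggio[len(history) % len(arpeggio)]

def generate(seed, length):
    """Autoregressive generation: feed each new note back as input."""
    sequence = list(seed)
    for _ in range(length):
        sequence.append(predict_next(sequence))
    return sequence

print(generate([0], 5))  # seed on C, then 5 generated notes -> [0, 4, 7, 0, 4, 7]
```

In the real demo the prediction comes from sampling the LSTM's output distribution, so repeated runs can produce different continuations.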
Why does the LSTM beat a vanilla RNN here?
Music has long-range structure: the chord progression that resolves at the end depends on the key established many notes earlier. A vanilla RNN's hidden state is overwritten with each new note, so it forgets early context. The LSTM's cell state persists through gating: the forget gate selectively erases stale info, the input gate writes new info, and the output gate decides what to expose. Watch the gates during playback — you'll often see the forget gate clear at phrase boundaries.
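The gating described above can be sketched as one step of a toy single-unit LSTM. All weights and values here are hand-picked for illustration, not taken from the demo's trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM; w maps each gate to (input, recurrent, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate value
    c = f * c_prev + i * g  # cell state: forget gate erases, input gate writes
    h = o * math.tanh(c)    # output gate decides what to expose
    return h, c, (f, i, o)

# Illustrative weights: a forget-gate bias of 4 keeps the gate near 1,
# so the old cell state (the "key established earlier") mostly survives.
w = {"f": (0.0, 0.0, 4.0), "i": (1.0, 0.0, 0.0),
     "o": (0.0, 0.0, 0.0), "g": (1.0, 0.0, 0.0)}
h, c, gates = lstm_step(x=0.5, h_prev=0.0, c_prev=1.0, w=w)
print(round(gates[0], 3))  # forget gate ~0.982: early context is retained
```

A vanilla RNN has no such cell state: its update `h = tanh(Wx + Uh_prev + b)` rewrites the entire hidden vector every step, which is exactly the forgetting the prose above describes.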