LSTM Music Studio

Tap a tune, train an LSTM, hear what it composes.

Program a melody on the on-screen piano (or pick a preset), then train an LSTM on the note sequence. The trained network will generate continuations that you can play back. Compare with a vanilla RNN to see why gates matter for music.
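Training here amounts to next-note prediction: slide a window over the recorded notes and ask the network to predict the note that follows each context. A minimal, framework-free sketch of that data preparation (the melody and window size below are illustrative, not what the applet uses internally):

```python
# Turn a recorded melody into (context, next-note) training pairs.
# Notes are MIDI numbers; the 8-note melody and window of 4 are illustrative.
melody = [60, 62, 64, 65, 67, 65, 64, 62]  # C D E F G F E D

def next_note_pairs(notes, window=4):
    """Each training example: `window` notes of context -> the note that follows."""
    return [(notes[i:i + window], notes[i + window])
            for i in range(len(notes) - window)]

pairs = next_note_pairs(melody)
for context, target in pairs:
    print(context, "->", target)
```

The trained network then generates a continuation by feeding its own predictions back in as the next context.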

1. Piano Input — Compose Your Melody

Recorded sequence (0 notes)

2. Train the Network

Loss Curves

■ LSTM ■ Vanilla RNN
Ready to train.

3. LSTM Gates (during playback)

forget — what to discard
input — what to add
output — what to expose

Train the LSTM, then play the generated melody to see the gates pulse.

4. Generated Melody

Generated sequence

Piano roll

Why does LSTM beat vanilla RNN here?

Music has long-range structure: the chord progression that resolves at the end depends on the key established many notes earlier. A vanilla RNN's hidden state is overwritten with each new note, so it forgets early context. The LSTM's cell state persists through gating: the forget gate selectively erases stale info, the input gate writes new info, and the output gate decides what to expose. Watch the gates during playback — you'll often see the forget gate clear at phrase boundaries.
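The gating described above can be written out directly. Below is a single-unit LSTM step in plain Python; the weights are arbitrary illustrative numbers, not trained values, and the point is only the shape of the update: the cell state `c` persists across steps unless the forget gate erases it.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step for a single unit.
    W maps each gate name to (input weight, recurrent weight, bias)."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])    # forget: keep or erase cell
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])    # input: how much to write
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])  # candidate value to write
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])    # output: what to expose
    c_new = f * c + i * g          # cell state persists unless the forget gate clears it
    h_new = o * math.tanh(c_new)   # hidden state is a gated view of the cell
    return h_new, c_new

# Illustrative (untrained) weights. With a forget gate near 1, early context
# survives many steps -- exactly what a vanilla RNN's overwritten state loses.
W = {k: (0.5, 0.5, 0.0) for k in "figo"}
h, c = 0.0, 1.0
for note in [0.1, -0.2, 0.3]:
    h, c = lstm_step(note, h, c, W)
```

The gate meters in section 3 visualize `f`, `i`, and `o` at each playback step.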