Effective Online SNN Training with One-Step Backpropagation
Abstract
Backpropagation through time (BPTT) remains the gold standard for training recurrent spiking neural networks, but its need to store long temporal computation graphs makes it memory-intensive and incompatible with strict online updates. This has motivated a range of alternative online learning rules, such as e-prop, other trace-based methods, and forward-only approximations, which reduce sequence-length-dependent overhead but typically require custom implementations and often sacrifice task performance. In this work, we revisit the simplest possible alternative: truncated BPTT with truncation length k = 1 (tBPTT1). Although this setting is usually regarded as an overly limited-horizon baseline with poor temporal credit assignment, we show that it is a widely underestimated learning strategy. In a standard surrogate gradient learning setup, tBPTT1 achieves performance competitive with or better than more sophisticated online learning rules. Our experiments identify two key ingredients for this result: a substantially smaller learning rate than commonly used and an optimizer with slow temporal averaging through its momentum statistics. These findings suggest that, for many practical spiking network settings, elaborate online credit-assignment rules may not be necessary: plain one-step backprop, paired with appropriate optimization, is an overlooked training strategy that provides effective, memory-efficient, and implementation-friendly learning.
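To make the setting concrete, here is a minimal PyTorch sketch of the kind of setup the abstract describes: a leaky integrate-and-fire (LIF) layer trained with a surrogate gradient, where the membrane state is detached after every time step so each backward pass spans exactly one step (tBPTT with k = 1) and the optimizer's momentum statistics provide the slow temporal averaging. The layer sizes, surrogate shape, threshold, decay, and Adam learning rate are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a triangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate derivative: nonzero only near the threshold.
        return grad_out * torch.clamp(1.0 - v.abs(), min=0.0)

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer with soft reset."""
    def __init__(self, n_in, n_out, decay=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.decay = decay

    def forward(self, x, v):
        v = self.decay * v + self.fc(x)   # leaky membrane integration
        s = SpikeFn.apply(v - 1.0)        # spike when v crosses threshold 1.0
        return s, v - s                   # soft reset: subtract threshold

def tbptt1_sequence(lif, readout, xs, target, opt, loss_fn):
    """Online updates over one sequence with truncation length k = 1."""
    v = torch.zeros(xs.shape[1], lif.fc.out_features)
    for t in range(xs.shape[0]):
        s, v = lif(xs[t], v)
        loss = loss_fn(readout(s), target)
        opt.zero_grad()
        loss.backward()                   # graph spans the current step only
        opt.step()
        v = v.detach()                    # truncate the temporal graph here

# Illustrative toy run: random input spikes, 50 steps, batch of 8.
lif, readout = LIFLayer(20, 64), nn.Linear(64, 10)
opt = torch.optim.Adam(list(lif.parameters()) + list(readout.parameters()),
                       lr=1e-4)          # deliberately small learning rate
xs = (torch.rand(50, 8, 20) < 0.2).float()
target = torch.randint(0, 10, (8,))
tbptt1_sequence(lif, readout, xs, target, opt, nn.functional.cross_entropy)
```

Because the state is detached each step, memory cost is constant in sequence length, and the only sketch-specific departure from standard BPTT training loops is the single `detach()` call.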
How to Cite:
Higuchi, S., Corradi, F., Bohté, S. & Otte, S., (2026) “Effective Online SNN Training with One-Step Backpropagation”, Proceedings of the Austrian Symposium on AI, Robotics, and Vision 3(1), 253-258.