Spiking Neural Network Systems

Probabilistic LIF Neurons Improve Learning in Recurrent Spiking Neural Networks

Authors
  • Sebastian Higuchi (Universität zu Lübeck)
  • Niels A. Kloosterman (Universität zu Lübeck)
  • Stefan Hallermann (Universität Leipzig)
  • Sebastian Otte (University of Lübeck)

Abstract

Training recurrent spiking neural networks (SNNs) with leaky integrate-and-fire (LIF) neurons is often slow, particularly during the early phase, when networks must first establish sufficient spike activity to form patterns.
Strategies such as low firing thresholds or high-magnitude weight initialization can increase early spiking, but typically introduce instabilities and impair learning.
Here we introduce a modification of classical LIF and parameterized LIF (PLIF) neurons in which spikes are generated probabilistically, together with a proper surrogate gradient formulation. The membrane potential parameterizes the instantaneous spike probability, and spikes are sampled as Bernoulli variables at each time step, while the underlying LIF membrane dynamics remain unchanged.
This stochastic activation stabilizes early spike activity and substantially accelerates learning.
In two benchmark tasks, these probabilistic LIF networks surprisingly achieve substantially higher classification accuracy than their deterministic LIF baselines.
These findings suggest that probabilistic spike generation may provide a promising new perspective for building compact and effective spiking architectures.
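The mechanism described in the abstract can be sketched in a few lines: standard leaky integration of the membrane potential, with the hard threshold replaced by a Bernoulli draw whose probability depends on the membrane potential. This is an illustrative reconstruction only; the sigmoid parameterization, the steepness `beta`, the time constant `tau`, and the reset-to-zero rule are assumptions, since the abstract does not specify the exact spike-probability function used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def probabilistic_lif_step(v, input_current, tau=10.0, v_th=1.0, beta=5.0):
    """One time step of a probabilistic LIF neuron (illustrative sketch).

    The membrane follows ordinary leaky integration; instead of a hard
    threshold, the distance of v to v_th sets an instantaneous spike
    probability via a sigmoid (assumed form), and the spike is sampled
    as a Bernoulli variable.
    """
    decay = np.exp(-1.0 / tau)              # membrane leak per step
    v = decay * v + input_current           # leaky integration (unchanged LIF dynamics)
    p_spike = sigmoid(beta * (v - v_th))    # instantaneous spike probability
    spike = rng.random(v.shape) < p_spike   # Bernoulli sampling
    v = np.where(spike, 0.0, v)             # reset on spike (assumed reset-to-zero)
    return v, spike.astype(float), p_spike
```

During training, the non-differentiable Bernoulli sample would be backpropagated through a surrogate, e.g. the gradient of `p_spike` itself; the abstract only states that a proper surrogate gradient formulation exists, not its exact form.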

How to Cite:

Higuchi, S., Kloosterman, N., Hallermann, S., & Otte, S. (2026) “Probabilistic LIF Neurons Improve Learning in Recurrent Spiking Neural Networks”, Proceedings of the Austrian Symposium on AI, Robotics, and Vision 3(1), 259-263.


Published on
2026-04-10

Peer Reviewed