Speakers - 2026

Nanomaterials Conference
Moqadasi Hamideh
University of Tehran, Iran
Title: Temporal single spike coding and learning for effective transfer learning

Abstract

Recent advancements in machine learning, particularly neural networks, have outperformed traditional statistical methods. Spiking neural networks (SNNs), the third generation of neural networks, mimic the dynamics of biological neurons, offering greater biological realism and efficiency, especially for processing spatiotemporal visual data. In SNNs, neurons communicate through action potentials, or spikes; however, the discrete nature of spikes complicates the training of deep networks because their activation functions are non-differentiable. The two common coding strategies are rate coding and temporal coding; rate-based methods, however, require many spikes over extended time steps.

In this work, we propose a direct supervised learning method based on backpropagation that leverages sparse single-spike coding. Temporal information is encoded in the timing of spikes: the most strongly activated neurons fire first, while less strongly activated neurons fire later or not at all. For the first time, a single fixed spike is designated as the target: only the output neuron corresponding to the correct class fires, while all other outputs remain silent. The result is a fully single-spike error-backpropagation learning rule, in which the inputs, hidden layers, outputs, and even the target are coded as single spikes. This reduces the number of required spikes and lowers learning costs, including computation time and power consumption.

In the first scenario, we applied our learning rule to shallow and deep multi-layer perceptron networks on a classification task. We then used it in a transfer learning system, training the SNN as the classifier block, and demonstrated its effectiveness on extracted features even with limited labelled data. The approach is evaluated on four datasets: ETH-80, MNIST, Fashion-MNIST, and Caltech101-Face/Bike, on which the test data yielded state-of-the-art classification accuracies of 98.91%, 98.45%, 91.89%, and 97.75%, respectively. We also present a mathematical formulation for training convolutional neural networks (CNNs) with our new approach. All models are simulated using the PyTorch library. This coding scheme advances SNN training methodology, reducing spike counts and improving performance across a range of tasks. Related papers are currently under review at reputable journals.
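The coding idea described above can be illustrated with a minimal sketch of time-to-first-spike (TTFS) encoding, where stronger activations fire earlier and weak ones stay silent. The function name, `t_max`, and `threshold` below are illustrative assumptions, not the authors' implementation.

```python
import torch

def ttfs_encode(x: torch.Tensor, t_max: int = 16, threshold: float = 0.05) -> torch.Tensor:
    """Map activations in [0, 1] to at most one spike per input.

    Returns a (t_max, *x.shape) binary tensor of spike trains.
    """
    x = x.clamp(0.0, 1.0)
    flat_x = x.flatten()
    # Linear latency code: activation 1.0 -> time 0, activation near 0 -> last step.
    flat_t = ((1.0 - flat_x) * (t_max - 1)).round().long()
    spikes = torch.zeros(t_max, flat_x.numel())
    fired = flat_x >= threshold                    # sub-threshold inputs never spike
    idx = torch.arange(flat_x.numel())[fired]
    spikes[flat_t[fired], idx] = 1.0               # one spike per firing input
    return spikes.view(t_max, *x.shape)
```

Under this scheme, the single-spike target described in the abstract would be one spike at a fixed time for the correct output neuron only, with all other output neurons remaining silent.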

What the audience will take away from the presentation:

  • A deep understanding of how temporal single-spike coding can significantly improve the efficiency and biological realism of spiking neural networks (SNNs).
  • Insights into a novel supervised learning rule that performs error backpropagation using single spikes, reducing both computational cost and power consumption.
  • Practical guidance on applying single-spike learning to both shallow and deep architectures, achieving state-of-the-art classification accuracy across multiple datasets.
  • Knowledge of how transfer learning can be effectively integrated with SNNs, allowing high performance even with limited labeled data.
  • Inspiration for researchers and engineers to develop low-power, neuromorphic AI hardware and extend these ideas to CNN-based SNN frameworks for real-world intelligent systems.
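The transfer-learning setup in the points above can be sketched as a frozen, pretrained feature extractor feeding a small trainable classifier head, which stands in for the SNN classifier block. All module names, sizes, and data below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained feature extractor; its weights are frozen so
# only the classifier block is trained on the extracted features.
feature_extractor = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
for p in feature_extractor.parameters():
    p.requires_grad = False                 # freeze: features stay fixed

classifier = nn.Linear(128, 10)             # only this block is trained
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

x = torch.rand(4, 1, 28, 28)                # dummy mini-batch of images
y = torch.tensor([0, 1, 2, 3])              # dummy class labels
with torch.no_grad():
    feats = feature_extractor(x)            # extract features once, no gradients
logits = classifier(feats)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                             # gradients reach the classifier only
optimizer.step()
```

Freezing the extractor is what makes the approach viable with limited labelled data: only the small classifier block's parameters are updated.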