Training multi-layer spiking neural networks using NormAD based spatio-temporal error backpropagation

Document Type

Article

Publication Date

March 7, 2020

Abstract

Spiking neural networks (SNNs) have garnered significant interest for supervised and unsupervised learning applications. This paper addresses the problem of training multi-layer feedforward SNNs. The non-linear integrate-and-fire dynamics of spiking neurons make it difficult to train SNNs to generate desired spike trains in response to a given input. To tackle this, the problem of training a multi-layer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than on spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. This rule is then reformulated to efficiently train multi-layer SNNs and is shown to effectively perform spatio-temporal error backpropagation. The learning rule is validated by training 2-layer SNNs to solve a spike-based formulation of the XOR problem, as well as by training 3-layer SNNs on generic spike-based training problems. The new algorithm is thus a key step towards building deep spiking neural networks capable of efficient event-triggered learning.
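To illustrate the flavor of the approach described in the abstract, the sketch below shows a NormAD-style weight update for a single leaky integrate-and-fire (LIF) output neuron: the update direction is the LIF-filtered input, normalized and integrated against a membrane-potential error signal. This is a minimal illustration under assumed conventions, not the paper's implementation; the function names, parameter values (tau_m, r, dt), and the reset-free Euler integration are placeholders, and the exact multi-layer backpropagation form is given in the paper itself.

```python
import numpy as np

def lif_response(current, dt=1e-4, tau_m=10e-3, R=1.0):
    """Membrane potential of a reset-free LIF neuron driven by `current`,
    computed by forward-Euler integration. Parameter values are
    illustrative placeholders, not taken from the paper."""
    v = np.zeros(len(current), dtype=float)
    for t in range(1, len(current)):
        v[t] = v[t - 1] + dt * (-v[t - 1] + R * current[t - 1]) / tau_m
    return v

def normad_style_update(weights, presyn_currents, err, r=1e-3, dt=1e-4, eps=1e-12):
    """One NormAD-flavoured weight update for a single output neuron.

    presyn_currents : (n_syn, T) array of unweighted synaptic input currents.
    err             : (T,) error signal e(t); in the paper this is derived
                      from the membrane-potential deviation rather than
                      spike arrival instants.
    At each instant the update direction is the LIF-filtered input vector,
    normalized across synapses, then integrated against e(t)."""
    d = np.stack([lif_response(c, dt) for c in presyn_currents])   # (n_syn, T)
    norms = np.linalg.norm(d, axis=0, keepdims=True) + eps         # per-instant norm
    d_hat = d / norms
    dw = r * dt * (d_hat * err).sum(axis=1)                        # integrate e(t) * d_hat(t)
    return weights + dw
```

The per-instant normalization is what makes the descent "normalized approximate": it decouples the step size from the magnitude of the filtered input, which is helpful for the non-convex objective the abstract describes.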

Identifier

85075465299 (Scopus)

Publication Title

Neurocomputing

External Full Text Location

https://doi.org/10.1016/j.neucom.2019.10.104

e-ISSN

1872-8286

ISSN

0925-2312

First Page

67

Last Page

77

Volume

380

Grant

1710009

Fund Ref

National Science Foundation
