Date of Award

Spring 2019

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Electrical Engineering - (Ph.D.)

Department

Electrical and Computer Engineering

First Advisor

Simeone, Osvaldo

Second Advisor

Rajendran, Bipin

Third Advisor

Haimovich, Alexander

Fourth Advisor

Abdi, Ali

Fifth Advisor

Wei, Zhi

Abstract

Spiking Neural Networks (SNNs), or third-generation neural networks, are networks of computation units, called neurons, in which each neuron with internal analogue dynamics receives as input and produces as output spiking, that is, sparse binary, signals. In contrast, second-generation neural networks, termed Artificial Neural Networks (ANNs), rely on simple static non-linear neurons that are known to be energy-intensive, hindering their implementation on energy-limited processors such as mobile devices. The sparse, event-based nature of information encoding and transmission in SNNs makes them well suited to highly energy-efficient neuromorphic computing architectures. Most existing training algorithms for SNNs are based on deterministic spiking neurons, which limits their flexibility and expressive power. Moreover, SNNs are typically trained via back-propagation, which, unlike for ANNs, is challenging due to the non-differentiable nature of spike dynamics. In light of these two key issues, this dissertation is devoted to developing probabilistic frameworks for SNNs tailored to the solution of supervised and unsupervised cognitive tasks. The proposed SNNs build on the Generalized Linear Model (GLM) neuron, a probabilistic neural model previously considered in the computational neuroscience literature, which is rich, flexible, and computationally tractable. A novel training method is proposed for classification with a first-to-spike decoding rule, whereby the SNN can make an early classification decision as soon as a spike is detected at an output neuron. This contrasts with conventional classification rules for SNNs, which operate offline based on the number of spikes at each output neuron. As a result, the proposed method improves the accuracy-inference complexity trade-off with respect to conventional decoding.
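The first-to-spike idea can be illustrated with a minimal sketch, not the dissertation's actual model: each class is read out by one probabilistic GLM-style output neuron whose spike probability at each time step is a sigmoid of a weighted sum of the current input spikes (a memoryless filter; the weights, biases, and input pattern below are hypothetical), and the decision is taken as soon as any output neuron fires.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def first_to_spike_classify(x, weights, biases, rng):
    """Run probabilistic output neurons over time; return the class whose
    neuron fires first (ties broken by lowest index).

    x       : (T, N) binary input spike trains
    weights : (C, N) per-class synaptic weights (memoryless filter here)
    biases  : (C,)   per-class biases
    Returns (label, t), or (None, T) if no neuron ever fires.
    """
    T = x.shape[0]
    for t in range(T):
        # GLM membrane potential and Bernoulli spike probability per class
        u = weights @ x[t] + biases
        spikes = rng.random(u.shape) < sigmoid(u)
        if spikes.any():
            return int(np.argmax(spikes)), t  # early decision at time t
    return None, T

# Toy usage: class 0 is excited by input spikes, class 1 is inhibited.
T, N = 20, 8
x = (rng.random((T, N)) < 0.3).astype(float)
weights = np.vstack([np.ones(N), -np.ones(N)])
biases = np.array([-2.0, -2.0])
label, t = first_to_spike_classify(x, weights, biases, rng)
```

The point of the sketch is the early-exit loop: inference cost grows with the decision time `t`, which is what gives the accuracy-inference complexity trade-off mentioned above.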
For the first time in the field, the sensitivity of SNNs trained via Maximum Likelihood (ML) is studied under white-box adversarial attacks. Rate and time encoding, as well as rate and first-to-spike decoding, are considered. Furthermore, a robust training mechanism is proposed and demonstrated to enhance the resilience of SNNs to adversarial examples. Finally, unsupervised training of probabilistic SNNs is studied. Under a generative model framework, multi-layer SNNs are designed for both the encoding and generative parts. To train the resulting Variational Autoencoders (VAEs), the standard ML approach is adopted. To tackle the intractable inference step, variational learning approaches are considered, including doubly stochastic gradient learning, a Maximum A Posteriori (MAP)-based scheme, and a Rao-Blackwellization (RB)-based scheme; the latter is referred to as the Hybrid Stochastic-MAP Variational Learning (HSM-VL) scheme. Numerical results show performance improvements of the HSM-VL method over the other two training schemes.
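A white-box attack on a probabilistic spiking model can be sketched as follows. This is an illustrative greedy heuristic, not the dissertation's attack: for a single sigmoid (GLM-style) output neuron with known weights, the adversary flips the input spikes whose flips most decrease the neuron's firing probability, up to a spike budget (the weights, bias, and budget below are hypothetical).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def greedy_spike_flip_attack(x, w, b, budget):
    """Greedy white-box attack on one sigmoid output neuron.

    x      : (N,) binary input spike vector
    w, b   : known weights (N,) and bias of the target neuron
    budget : maximum number of spike flips
    Returns the perturbed binary input.
    """
    x = x.copy()
    for _ in range(budget):
        u = w @ x + b
        # d/dx log p(spike) = (1 - sigmoid(u)) * w; flipping bit i
        # changes x_i by (1 - 2*x_i), so this is each flip's effect.
        delta = (1.0 - sigmoid(u)) * w * (1.0 - 2.0 * x)
        i = int(np.argmin(delta))  # flip that hurts firing probability most
        if delta[i] >= 0:
            break                  # no remaining flip helps the attacker
        x[i] = 1.0 - x[i]
    return x

# Toy usage: two flips drive the firing probability down.
w = np.array([2.0, -2.0, 1.0])
b = 0.0
x0 = np.array([1.0, 0.0, 0.0])
x_adv = greedy_spike_flip_attack(x0, w, b, budget=2)
p_clean = sigmoid(w @ x0 + b)
p_adv = sigmoid(w @ x_adv + b)
```

Robust training, in this picture, amounts to fitting the model on such perturbed inputs so that small spike-budget perturbations no longer change the decision.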
