Novel Pruning of Dendritic Neuron Models for Improved System Implementation and Performance
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
Pruning is widely used for neural network model compression. It removes redundant links from a weight tensor, yielding smaller and more efficient neural networks for system implementation. A compressed neural network enables faster execution and reduces the computational cost of network training. In this paper, a novel pruning method is proposed for a dendritic neuron model (DNM). It calculates the significance of each dendrite in a DNM, expresses that significance numerically, and removes any dendrite whose significance falls below a pre-set threshold. Experimental results verify that the proposed method achieves superior performance over the existing pruning method in terms of both accuracy and computational efficiency.
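The abstract describes threshold-based dendrite pruning: score each dendrite, then drop those scoring below a pre-set threshold. Below is a minimal Python sketch of that idea. The significance measure used here (mean absolute synaptic weight per dendrite) and the function name `prune_dendrites` are illustrative assumptions, not the measure or implementation defined in the paper.

```python
import numpy as np

def prune_dendrites(weights, threshold):
    """Hypothetical dendrite pruning for a dendritic neuron model (DNM).

    weights   : array of shape (n_dendrites, n_synapses), one row of
                synaptic weights per dendrite.
    threshold : dendrites whose significance is below this value are removed.

    NOTE: the significance score below is a placeholder for illustration;
    the paper defines its own numerical significance measure.
    """
    significance = np.abs(weights).mean(axis=1)   # one score per dendrite
    keep = significance >= threshold              # boolean mask of survivors
    return weights[keep], keep

# Usage: a toy DNM layer with 5 dendrites of 4 synapses each.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4))
pruned_W, kept = prune_dendrites(W, threshold=0.5)
print(f"kept {kept.sum()} of {len(kept)} dendrites")
```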
Identifier
85124305160 (Scopus)
ISBN
9781665442077
Publication Title
Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
External Full Text Location
https://doi.org/10.1109/SMC52423.2021.9659103
ISSN
1062-922X
First Page
1559
Last Page
1564
Recommended Citation
Wen, Xiaohao; Zhou, Mengchu; Luo, Xudong; Huang, Lukui; and Wang, Ziyue, "Novel Pruning of Dendritic Neuron Models for Improved System Implementation and Performance" (2021). Faculty Publications. 4673.
https://digitalcommons.njit.edu/fac_pubs/4673