Computational memory-based inference and training of deep neural networks
Document Type
Conference Proceeding
Publication Date
6-1-2019
Abstract
In-memory computing is an emerging computing paradigm where certain computational tasks are performed in place in a computational memory unit by exploiting the physical attributes of the memory devices. Here, we present an overview of the application of in-memory computing in deep learning, a branch of machine learning that has significantly contributed to the recent explosive growth in artificial intelligence. The methodology for both inference and training of deep neural networks is presented along with experimental results using phase-change memory (PCM) devices.
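The abstract notes that computation is performed in place by exploiting the physical attributes of the memory devices; the canonical in-memory computing primitive is an analog matrix-vector multiplication on a crossbar of device conductances (Ohm's law for the multiplications, Kirchhoff's current law for the accumulation). The following is a minimal, illustrative Python sketch of that primitive, simulated numerically; the conductance range, voltage scale, and programming-noise model are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: simulate an analog matrix-vector multiply on a
# crossbar of memory-device conductances. All device parameters below are
# assumed values, not the authors' PCM characterization.

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 8))            # target weight matrix
g_max = 25e-6                              # assumed maximum device conductance (S)
v_read = 0.2                               # assumed read-voltage scale (V)

# Map weights to target conductances, then add assumed programming noise
# (devices are written imprecisely).
G_target = (W / np.abs(W).max()) * g_max
G_prog = G_target + rng.normal(0.0, 0.02 * g_max, G_target.shape)

x = rng.standard_normal(8)                 # input vector
v = x * v_read                             # inputs applied as read voltages

# Ohm's law + Kirchhoff's current law: summed currents realize the dot products.
i_out = G_prog @ v                         # output currents per row (A)

# Scale currents back to the numerical domain and compare with the exact result.
y_analog = i_out / (g_max * v_read) * np.abs(W).max()
y_exact = W @ x
print("analog:", np.round(y_analog, 3))
print("exact :", np.round(y_exact, 3))
```

The small mismatch between the two outputs illustrates why device non-idealities matter for inference accuracy and why training schemes must tolerate them.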
Identifier
85073894390 (Scopus)
ISBN
9784863487185
Publication Title
IEEE Symposium on VLSI Circuits Digest of Technical Papers
External Full Text Location
https://doi.org/10.23919/VLSIC.2019.8778178
First Page
T168
Last Page
T169
Volume
2019-June
Grant
682675
Fund Ref
European Research Council
Recommended Citation
Sebastian, A.; Boybat, I.; Dazzi, M.; Giannopoulos, I.; Jonnalagadda, V.; Joshi, V.; Karunaratne, G.; Kersting, B.; Khaddam-Aljameh, R.; Nandakumar, S. R.; Petropoulos, A.; Piveteau, C.; Antonakopoulos, T.; Rajendran, B.; Le Gallo, M.; and Eleftheriou, E., "Computational memory-based inference and training of deep neural networks" (2019). Faculty Publications. 7578.
https://digitalcommons.njit.edu/fac_pubs/7578
