DECO: Dynamic Energy-aware Compression and Optimization for In-Memory Neural Networks

Document Type

Conference Proceeding

Publication Date

1-1-2024

Abstract

This paper introduces DECO, a framework that combines model compression with processing-in-memory (PIM) to improve the efficiency of neural networks on IoT devices. By integrating these technologies, DECO significantly reduces energy consumption and latency through optimized data movement and computation, demonstrating notable performance gains on the CIFAR-10/100 datasets. The DECO learning framework improved the accuracy of compressed network modules derived from MobileNetV1 and VGG16 by 1.66% and 0.41%, respectively, on the more challenging CIFAR-100 dataset. In our experiments, DECO also outperforms a GPU implementation by up to two orders of magnitude in speed.

Identifier

85205027821 (Scopus)

ISBN

9798350387179

Publication Title

Midwest Symposium on Circuits and Systems

External Full Text Location

https://doi.org/10.1109/MWSCAS60917.2024.10658771

ISSN

1548-3746

First Page

1441

Last Page

1445

Grant

2216772

Fund Ref

National Science Foundation
