"IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph" by Mehrdad Morsali, Mahmoud Nazzal et al.
 

IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge

Document Type

Conference Proceeding

Publication Date

June 5, 2023

Abstract

In this paper, we propose IMA-GNN, an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings, and provide a guideline for the community targeting flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction and adopt four large graph datasets to quantitatively compare and analyze the centralized and decentralized settings. Our cross-layer simulation results demonstrate that, on average, IMA-GNN in the centralized setting can obtain ∼790x communication speed-up compared to the decentralized GNN setting. However, the decentralized setting performs computation ∼1400x faster while reducing the power consumption per device. This further underlines the need for a hybrid, semi-decentralized GNN approach.
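The comparison described in the abstract comes down to modeling per-device computation and communication latencies in each setting. Below is a minimal illustrative sketch of how such a comparison could be set up; the model structure, function names, and every numeric value are assumptions made for illustration only and are not IMA-GNN's actual latency model or results.

# Hypothetical sketch of the centralized-vs-decentralized latency trade-off.
# All parameters and formulas below are illustrative assumptions, not values
# or equations taken from the IMA-GNN paper.

def centralized(n_devices, payload_bytes, uplink_bw, work_flops, server_flops):
    """All devices upload features over a fast link; one server runs the whole GNN."""
    comm = payload_bytes / uplink_bw                # fast device-to-server link
    comp = n_devices * work_flops / server_flops    # server handles every device's share
    return comm, comp

def decentralized(n_hops, payload_bytes, d2d_bw, work_flops, device_flops):
    """Each device computes locally and relays embeddings over slow multi-hop D2D links."""
    comm = n_hops * payload_bytes / d2d_bw          # slow device-to-device relaying
    comp = work_flops / device_flops                # per-device work runs in parallel
    return comm, comp

# Placeholder numbers (bytes, bytes/s, FLOPs, FLOP/s) chosen only to show the direction
# of the trade-off: centralized wins on communication, decentralized wins on computation.
c_comm, c_comp = centralized(n_devices=1000, payload_bytes=4096, uplink_bw=1e9,
                             work_flops=2e10, server_flops=5e12)
d_comm, d_comp = decentralized(n_hops=5, payload_bytes=4096, d2d_bw=1e5,
                               work_flops=2e10, device_flops=5e10)
print(f"centralized:   comm {c_comm:.2e} s, comp {c_comp:.2e} s")
print(f"decentralized: comm {d_comm:.2e} s, comp {d_comp:.2e} s")

Under these assumed numbers, the centralized setting finishes communication orders of magnitude faster while the decentralized setting finishes computation faster, which is the qualitative trade-off the paper quantifies and uses to motivate a hybrid semi-decentralized approach.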

Identifier

85163161678 (Scopus)

ISBN

9798400701252

Publication Title

Proceedings of the ACM Great Lakes Symposium on VLSI (GLSVLSI)

External Full Text Location

https://doi.org/10.1145/3583781.3590248

First Page

3

Last Page

8

Grant

1852375

Fund Ref

National Science Foundation
