MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation

Document Type

Conference Proceeding

Publication Date

8-4-2023

Abstract

Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data. However, their predictions are often not interpretable. Post-hoc instance-level explanation methods have been proposed to understand GNN predictions. These methods seek to discover substructures that explain the prediction behavior of a trained GNN. In this paper, we shed light on the existence of the distribution shifting issue in existing methods, which affects explanation quality, particularly in applications on real-life datasets with tight decision boundaries. To address this issue, we introduce a generalized Graph Information Bottleneck (GIB) form that includes a label-independent graph variable, which is equivalent to the vanilla GIB. Driven by the generalized GIB, we propose a graph mixup method, MixupExplainer, with a theoretical guarantee to resolve the distribution shifting issue. We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of our proposed mixup approach over existing approaches. We also provide a detailed analysis of how our proposed approach alleviates the distribution shifting issue.

Identifier

85171382081 (Scopus)

ISBN

9798400701030

Publication Title

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

External Full Text Location

https://doi.org/10.1145/3580305.3599435

ISSN

2154-817X

First Page

3286

Last Page

3296

Grant

2153311

Fund Ref

National Science Foundation

