Author ORCID Identifier

0009-0007-8031-661X

Document Type

Dissertation

Date of Award

5-31-2025

Degree Name

Doctor of Philosophy in Information Systems - (Ph.D.)

Department

Informatics

First Advisor

Michael J. Lee

Second Advisor

Hua Wei

Third Advisor

Yi-Fang Brook Wu

Fourth Advisor

Hai Nhat Phan

Fifth Advisor

Mark Cartwright

Sixth Advisor

Dongsheng Luo

Abstract

In the evolving landscape of artificial intelligence (AI), Graph Neural Networks (GNNs) have garnered growing prominence for their adeptness in processing graph-structured data. Despite this, the interpretability of their predictions often remains elusive. The demand for transparency and explainability in complex prediction models has reached unprecedented levels. To address this, post-hoc instance-level explanation techniques have emerged, aiming to unveil the rationale behind GNN predictions. These techniques endeavor to unearth substructures that elucidate the predictive behavior of trained GNNs.
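
As a concrete illustration of this class of techniques (not the method developed in this dissertation), the sketch below follows the widely known edge-mask optimization idea popularized by GNNExplainer: a soft mask over the edges is trained so that the masked subgraph alone preserves the trained model's prediction, while a sparsity penalty favors a small explanatory substructure. The model signature, shapes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of post-hoc, instance-level GNN explanation via edge-mask
# optimization (in the spirit of GNNExplainer); the model interface and
# hyperparameters are assumptions, not the dissertation's implementation.
import torch
import torch.nn.functional as F

def explain_instance(model, x, edge_index, target, steps=200, lam=0.05):
    """Learn per-edge importance scores for one prediction.

    model:      trained GNN mapping (x, edge_index, edge_weight) -> logits [1, C]
    x:          node features, shape [N, F]
    edge_index: edge list, shape [2, E]
    target:     class id the model predicts, long tensor of shape [1]
    """
    # Random initialization so repeated runs yield (slightly) different masks.
    mask_logits = torch.nn.Parameter(0.1 * torch.randn(edge_index.size(1)))
    optimizer = torch.optim.Adam([mask_logits], lr=0.01)
    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        edge_weight = torch.sigmoid(mask_logits)       # soft mask in (0, 1)
        logits = model(x, edge_index, edge_weight)
        # Preserve the original prediction while shrinking the mask (sparsity).
        loss = F.cross_entropy(logits, target) + lam * edge_weight.sum()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(mask_logits).detach()         # edge importance scores
```

In practice, `target` would be the class the trained model itself assigns (e.g., `model(x, edge_index, torch.ones(edge_index.size(1))).argmax(dim=1)`), so that the surviving high-score edges form a substructure explaining the model's behavior rather than the ground-truth label.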

This dissertation embarks on an exploration of Explainable AI (XAI) technologies within the realm of GNNs. Confronting the challenges posed by the distribution shift problem and the gap between classification and regression tasks, as well as the need to assess the confidence of explanations, this research endeavors to make GNNs more transparent and interpretable tools.

The motivation stems from a dual mandate: the necessity for interpretability in AI systems and the potential of GNNs to decipher the complex relationships inherent in graph data across many real-life applications. Delving into the intricacies of GNN decision-making, the research aims not only to address these challenges but also to spearhead advancements in Explainable AI for Graph Neural Networks (XAIG). This journey encompasses strategies to mitigate the impact of the distribution shift problem in existing works, to extend explanation techniques from classification tasks to regression tasks, and to quantify the confidence or uncertainty of explanations. The goal is to establish XAIG technology as a mature, dependable, and widely employed tool in current GNN applications, thus enhancing the overall utilization and effectiveness of GNNs.
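
The third thread above, quantifying the confidence of an explanation, can be illustrated generically (again, not as the dissertation's method) by repeating a stochastic explainer under different random seeds and reporting per-edge mean and dispersion: edges that receive consistently high importance across restarts are explained with high confidence.

```python
# Hedged sketch: a repetition-based uncertainty estimate for edge importance.
# `explain_instance` is the illustrative explainer sketched earlier; this
# random-restart scheme is an assumption, not the dissertation's approach.
import torch

def explanation_uncertainty(model, x, edge_index, target, runs=10):
    """Per-edge mean and standard deviation of importance over restarts."""
    scores = []
    for seed in range(runs):
        torch.manual_seed(seed)                        # vary the random init
        scores.append(explain_instance(model, x, edge_index, target))
    scores = torch.stack(scores)                       # shape [runs, E]
    return scores.mean(dim=0), scores.std(dim=0)       # confidence summary
```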

By establishing a foundation for transparent and explainable GNNs, this dissertation bridges the gap between advanced AI methodologies and human comprehension. The culmination of this research promises enhanced reliability in GNN predictions while contributing to the broader discourse on AI ethics and accountability. In a future where AI decisions are credible, accessible, and aligned with societal values, XAIG emerges as a cornerstone, harmonizing AI's computational prowess with human reasoning.
