Document Type
Thesis
Date of Award
8-30-2022
Degree Name
Master of Science in Computer Science - (M.S.)
Department
Computer Science
First Advisor
David A. Bader
Second Advisor
Ioannis Koutis
Third Advisor
Zhihui Du
Abstract
Graph data structures pose a unique challenge for both analysis and algorithm development. These structures are irregular: memory accesses are not known a priori and tend to lack locality.
Despite these challenges, graph data structures are a natural way to represent relationships between entities and to expose unique features of those relationships. The network formed by these relationships can contain distinctive local structures that describe the behavior among their members. Graphs can be analyzed in a number of ways, including at a high level through community detection and at the node level through centrality. Both are difficult to define quantitatively because a “correct” answer is not readily apparent. The centrality of a node can be subjective: what does it mean to be central in an amorphous data structure? Further, even when centrality or community detection can be defined, there are typically trade-offs in detection and analysis. A fine-grained method may yield a more precise result, but its run time may scale exponentially or worse. For small datasets this may not be a concern, but for large graph datasets such as social media networks, where millions of people have millions of connections, it can make analysis prohibitive.
With these considerations in mind, we implement several versions of a recently designed centrality measure called Triangle Centrality, a metric that considers both the connectivity of a node with other nodes and the connectivity of the node’s neighbors. This connectivity is measured through the triangles formed by nodes. There are two ways to implement Triangle Centrality: a graph-based approach and an approach that uses linear algebra and matrix operations. Our implementation uses graph-based data structures, and to optimize it we implement several versions of triangle counting, based on prior research, in the high-performance computing framework Arkouda. We implement an edge list intersection method, a minimized search kernel method, a path merge method, and a small set intersection method. For comparison, we include a naive method and a linear algebra implementation that uses the SuiteSparse GraphBLAS library.
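As an illustrative sketch only (not the thesis's Chapel/Arkouda kernels), the set-intersection idea behind per-node triangle counting can be expressed in plain Python over an adjacency-set representation; the adjacency dictionary and function name below are hypothetical.

def per_node_triangle_counts(adj):
    # adj: dict mapping each node to the set of its neighbors (undirected graph).
    # Each triangle through v corresponds to a pair of adjacent neighbors of v;
    # intersecting neighbor sets counts every such pair twice, hence the // 2.
    return {v: sum(len(nbrs & adj[u]) for u in nbrs) // 2
            for v, nbrs in adj.items()}

# Toy example: edges 0-1, 0-2, 0-3, 1-2, 2-3 contain two triangles.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(per_node_triangle_counts(adj))   # {0: 2, 1: 1, 2: 2, 3: 1}

Triangle Centrality then combines a node's own triangle count with the counts of its neighbors, normalized over the triangles in the whole graph.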
Our implementation utilizes Arkouda, an open-source distributed platform for data scientists and developers. Arkouda offloads complex parallel algorithms and dataset storage onto a back-end Chapel server and lets users access them from an intuitive Pythonic interface. Our results demonstrate the scalability of the platform and are analyzed against different graph properties to show how those properties affect the implementation.
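As a hedged sketch of the client-side workflow only (the graph kernels themselves run server-side in Chapel), a typical Arkouda session looks roughly like the following; the host, port, and edge arrays are illustrative placeholders.

import arkouda as ak

# Connect the Python client to a running arkouda_server instance
# (host and port shown here are illustrative).
ak.connect('localhost', 5555)

# Arrays created through the client are stored on the Chapel back end;
# only metadata and small results travel back to Python.
src = ak.randint(0, 1000, 10**6)   # placeholder edge source vertices
dst = ak.randint(0, 1000, 10**6)   # placeholder edge destination vertices

# Bulk operations such as sorting and gathering execute in parallel on the
# server, which is the style in which the triangle counting kernels are driven.
perm = ak.argsort(src)
print(src[perm][:10], dst[perm][:10])

ak.disconnect()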
Recommended Citation
Patchett, Joseph Thomas, "Efficient and scalable triangle centrality algorithms in the arkouda framework" (2022). Theses. 2024.
https://digitalcommons.njit.edu/theses/2024