GEFL: Extended Filtration Learning for Graph Classification


December 9–12, 2022 | Simon Zhang, Soham Mukherjee, Tamal K. Dey
This paper introduces Extended Filtration Learning (GEFL) for graph classification, integrating extended persistence from topological data analysis into a supervised learning framework. Extended persistence captures global topological information, including connected components and cycles, through persistence barcodes and explicit cycle representatives. The model combines this information into graph representations via a readout function based on extended persistence. The algorithm employs a link-cut tree data structure and parallelism to achieve a 60x speedup over existing extended persistence computations, making extended persistence practical for machine learning.
In expressivity, the model surpasses the Weisfeiler-Lehman (WL) graph isomorphism test and 0-dimensional barcodes, since it can represent arbitrarily long cycles. Experiments on real-world and synthetic datasets show that GEFL achieves higher accuracy than existing graph neural networks, particularly in distinguishing graphs with complex topological structure. The method is end-to-end differentiable, allowing efficient training and inference. The paper also analyzes the computational complexity of extended persistence, demonstrating its efficiency and effectiveness for graph classification tasks.
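To make the contrast with 0-dimensional barcodes concrete, the sketch below computes an ordinary 0-dimensional persistence barcode of a graph under a vertex filtration using union-find with the elder rule. This is the baseline representation the paper improves upon; it is not the paper's algorithm, which computes *extended* persistence (including cycle information from a descending pass) and accelerates the pairing with link-cut trees. The function name and input format here are illustrative assumptions.

```python
# Illustrative sketch: ordinary 0-dimensional persistence of a graph
# under a vertex (lower-star) filtration, via union-find. Extended
# persistence, as used in GEFL, additionally pairs cycles through a
# descending pass; this baseline captures only connected components.

def zero_dim_barcode(filt, edges):
    """filt: dict mapping vertex -> filtration value.
    edges: list of (u, v) pairs; an edge appears at max(filt[u], filt[v]).
    Returns finite bars (birth, death) plus one essential bar per
    connected component, written (birth, None)."""
    parent = {v: v for v in filt}

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []
    # Process edges in order of appearance in the filtration.
    for u, v in sorted(edges, key=lambda e: max(filt[e[0]], filt[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle; invisible in dimension 0
        # Elder rule: the younger component (larger birth value) dies.
        # Roots always carry the minimum filtration value of their component.
        if filt[ru] > filt[rv]:
            ru, rv = rv, ru
        birth = filt[rv]
        death = max(filt[u], filt[v])
        if death > birth:  # skip zero-length bars
            bars.append((birth, death))
        parent[rv] = ru
    # Components that never merge yield essential (infinite) bars.
    for v in filt:
        if find(v) == v:
            bars.append((filt[v], None))
    return bars
```

A 0-dimensional barcode like this cannot tell a cycle from a tree on the same vertices, which is exactly the gap that extended persistence, with its cycle representatives, is designed to fill.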