GEFL: Extended Filtration Learning for Graph Classification


4 Jun 2024 | Simon Zhang, Soham Mukherjee, Tamal K. Dey
The paper "GEFL: Extended Filtration Learning for Graph Classification" introduces a novel approach to graph classification that incorporates extended persistence, a technique from topological data analysis, into a supervised learning framework. Extended persistence captures global multiscale topological information, including connected components and cycles, in the form of barcodes.

The authors develop an efficient algorithm for computing extended persistence, achieving a speedup of over 60x compared to state-of-the-art methods and making the computation feasible for machine learning tasks. The model is end-to-end differentiable and uses a link-cut tree data structure together with parallelism to reduce computational complexity. The paper demonstrates that extended persistence surpasses both the Weisfeiler-Lehman (WL) graph isomorphism test and 0-dimensional barcodes in expressivity, since it adds more global topological information. In particular, it can represent arbitrarily long cycles, which is challenging for message passing graph neural networks (GNNs) with finite receptive fields. Experimental results on real-world datasets show that the proposed method outperforms existing graph representation learning methods, particularly in distinguishing graphs that other methods struggle with. The paper also includes a detailed analysis of the expressive power of extended persistence and provides ablation studies validating the effectiveness of the proposed readout function.
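To make the barcode idea concrete, here is a minimal illustrative sketch (not the paper's GEFL algorithm, which uses link-cut trees and handles the full extended filtration): computing the 0-dimensional persistence barcode of a graph under a vertex filtration with a union-find structure. The function name `zero_dim_barcode` and its interface are assumptions chosen for this example; extended persistence would additionally sweep the filtration back down to pair cycles and essential features.

```python
def zero_dim_barcode(vertex_values, edges):
    """Illustrative 0-dimensional persistence via union-find.

    vertex_values: filtration value per vertex (index = vertex id).
    edges: list of (u, v) pairs; an edge appears at the max of its
    endpoint values (lower-star filtration).
    Returns a list of (birth, death) bars; components that never die
    are essential and get death = inf.
    """
    n = len(vertex_values)
    parent = list(range(n))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Process edges in order of appearance in the filtration.
    order = sorted(edges, key=lambda e: max(vertex_values[e[0]], vertex_values[e[1]]))
    bars = []
    for u, v in order:
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle; handled in dimension 1
        # Elder rule: the younger component (larger birth value) dies.
        if vertex_values[ru] > vertex_values[rv]:
            ru, rv = rv, ru
        bars.append((vertex_values[rv], max(vertex_values[u], vertex_values[v])))
        parent[rv] = ru  # elder root survives, so roots keep min birth value
    # Components that survive the whole filtration are essential classes.
    for r in {find(x) for x in range(n)}:
        bars.append((vertex_values[r], float("inf")))
    return bars
```

A learning pipeline such as GEFL would then vectorize these bars (e.g. through a differentiable readout) rather than use them directly; the sketch only shows where the barcode itself comes from.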