EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs

18 Nov 2019 | Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, Charles E. Leiserson
EvolveGCN is a method for learning representations on dynamic graphs, where the graph structure evolves over time. Unlike traditional graph neural networks (GNNs) that assume a static graph, EvolveGCN uses a recurrent neural network (RNN) to evolve the parameters of a graph convolutional network (GCN), enabling it to capture the temporal dynamics of the graph. Rather than using an RNN to evolve node embeddings, which requires nodes to appear consistently across time steps, EvolveGCN evolves the GCN weights themselves, making it robust to changes in the node set.

Two versions are proposed: EvolveGCN-H, where the GCN parameters are treated as the hidden state of the RNN, and EvolveGCN-O, where they are treated as the input/output of the RNN. The model is implemented with standard recurrent units such as GRU and LSTM, and the number of parameters does not grow with the number of time steps, keeping the model efficient.

The method is evaluated on link prediction, edge classification, and node classification across synthetic and real-world dynamic graph datasets. EvolveGCN outperforms existing approaches on most tasks, particularly link prediction and edge classification, and is especially effective in settings where the graph structure changes over time.
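To make the weight-evolution idea concrete, below is a minimal, hypothetical sketch of the "-O" style update in PyTorch: the previous GCN weight matrix is passed through a recurrent cell and the evolved weights are then used in a standard GCN propagation step. The class name, the column-wise GRUCell treatment (the paper pairs EvolveGCN-H with a GRU and EvolveGCN-O with an LSTM), the dimensions, and the identity-matrix adjacency placeholder are all illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch (not the authors' reference code) of evolving GCN weights with an RNN.
# The class name, the column-wise GRUCell treatment, and the toy data are assumptions
# made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvolveGCNLayerSketch(nn.Module):
    """One GCN layer whose weight matrix is updated over time by a recurrent cell."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Initial GCN weights W_0 (in_dim x out_dim); evolved copies are produced per step.
        self.W0 = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)
        # Recurrent cell that updates the weights; each column of W is one "sample".
        self.cell = nn.GRUCell(input_size=in_dim, hidden_size=in_dim)

    def forward(self, A_hat: torch.Tensor, X: torch.Tensor, W_prev: torch.Tensor):
        # Evolve the weights: the previous weights serve as both input and hidden state
        # of the recurrent cell ("-O" flavor: W_t = RNN(W_{t-1})).
        cols = W_prev.t()                # (out_dim, in_dim): columns as a batch
        W_t = self.cell(cols, cols).t()  # evolved weights, same shape as W_prev
        # Standard GCN propagation with the evolved weights: H_t = relu(A_hat X W_t).
        return F.relu(A_hat @ X @ W_t), W_t


# Toy usage: 5 nodes, 8 features, 3 time steps; the normalized adjacency A_hat_t is
# replaced by the identity matrix purely as a placeholder.
layer = EvolveGCNLayerSketch(in_dim=8, out_dim=4)
W = layer.W0
for t in range(3):
    A_hat_t = torch.eye(5)
    X_t = torch.randn(5, 8)
    H_t, W = layer(A_hat_t, X_t, W)
    print(t, H_t.shape)  # (5, 4) node embeddings at each time step
```

In the paper's EvolveGCN-H variant, by contrast, the GCN weights play the role of the GRU's hidden state and a summary of the current node embeddings serves as its input, which injects information from the evolving graph into the weight update.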