Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec

8 Feb 2018 | Jiezhong Qiu†*, Yuxiao Dong†, Hao Ma†, Jian Li‡, Kuansan Wang†, and Jie Tang†
This paper presents a theoretical analysis of four popular network embedding methods: DeepWalk, LINE, PTE, and node2vec. The authors show that all four methods can be unified under a matrix factorization framework with closed-form solutions. They demonstrate that DeepWalk implicitly factorizes a low-rank transformation of the network's normalized Laplacian matrix, that LINE is a special case of DeepWalk with the context window size set to one, that PTE is a joint factorization of multiple networks' Laplacians, and that node2vec factorizes a matrix related to the stationary distribution and transition probability tensor of a second-order random walk. The analysis also establishes theoretical connections between skip-gram based network embedding algorithms and graph Laplacian theory. Building on these results, the authors propose NetMF, a method that explicitly factorizes the closed-form matrix and achieves significant improvements over DeepWalk and LINE on conventional network mining tasks. The work lays a theoretical foundation for skip-gram based network embedding methods and leads to a better understanding of latent network representation learning.
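To make the factorization view concrete, below is a minimal sketch of the small-window NetMF idea: build the closed-form matrix that DeepWalk implicitly factorizes, take an element-wise truncated logarithm, and factorize it with SVD. The function name `netmf_embedding` and the parameters `T` (window size), `b` (number of negative samples), and `dim` (embedding dimension) are illustrative choices, not the authors' reference implementation, and the dense power-sum is only practical for small graphs.

```python
# Minimal sketch of the small-window NetMF variant, assuming an undirected
# graph given as a dense NumPy adjacency matrix A (hypothetical helper,
# not the authors' released code).
import numpy as np

def netmf_embedding(A, T=10, b=1.0, dim=128):
    """Build DeepWalk's implicit matrix and factorize it with truncated SVD."""
    vol = A.sum()                        # graph volume vol(G)
    d = A.sum(axis=1)                    # node degrees
    D_inv = np.diag(1.0 / d)
    P = D_inv @ A                        # random-walk transition matrix D^{-1} A

    # Sum of the first T powers of P, as in the closed-form DeepWalk matrix.
    S = np.zeros_like(A, dtype=float)
    P_power = np.eye(A.shape[0])
    for _ in range(T):
        P_power = P_power @ P
        S += P_power

    M = (vol / (b * T)) * S @ D_inv      # implicit PMI-style matrix
    M_log = np.log(np.maximum(M, 1.0))   # element-wise truncated logarithm

    # Rank-dim factorization: embedding = U_dim * sqrt(Sigma_dim).
    U, sigma, _ = np.linalg.svd(M_log)
    return U[:, :dim] * np.sqrt(sigma[:dim])
```

For large context windows, the paper instead approximates the power sum with a truncated eigendecomposition of the normalized adjacency matrix; the exact dense computation above is shown only to illustrate the factorization target.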