Deep Neural Networks for Learning Graph Representations

2016 | Shaosheng Cao, Wei Lu, Qiongkai Xu
This paper introduces a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing graph structural information. Unlike previous methods that rely on sampling-based approaches, the proposed model employs a random surfing procedure that captures graph structural information directly, which the authors argue is advantageous from both theoretical and empirical perspectives. The paper also revisits the matrix factorization analysis of Levy and Goldberg (2014), observing that the pointwise mutual information (PMI) matrix can be viewed as an analytical solution to the objective of the skip-gram model with negative sampling. Instead of applying singular value decomposition (SVD) for dimension reduction, the model uses stacked denoising autoencoders to extract complex features and model non-linearities. Experiments on clustering and visualization tasks show that the model outperforms other state-of-the-art models across several datasets. The main contributions are twofold: theoretically, the paper shows that deep neural networks can capture non-linear information more effectively than conventional linear dimension reduction methods; empirically, it demonstrates that the model learns better low-dimensional vertex representations for weighted graphs, which can be used effectively in downstream tasks.
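As a concrete illustration of the random surfing idea, here is a minimal NumPy sketch (the function name, the restart probability value, and the uniform weighting of steps are assumptions, not taken from the paper): each surfer starts from a one-hot distribution over vertices, follows an edge with probability alpha, restarts at its original vertex otherwise, and the step distributions are summed into a probabilistic co-occurrence matrix.

```python
import numpy as np

def random_surf(adj, num_steps, alpha):
    """Random surfing sketch: iterate p_k = alpha * p_{k-1} @ A + (1 - alpha) * p_0,
    where A is the row-normalized transition matrix of a (possibly weighted) graph,
    and sum the step distributions into a co-occurrence matrix."""
    n = adj.shape[0]
    row_sums = adj.sum(axis=1, keepdims=True).astype(float)
    row_sums[row_sums == 0] = 1.0              # guard against isolated vertices
    A = adj / row_sums                         # transition matrix
    p0 = np.eye(n)                             # one one-hot start per vertex
    p = p0.copy()
    cooccurrence = np.zeros((n, n))
    for _ in range(num_steps):
        # Follow an edge with probability alpha; restart at the start vertex otherwise.
        p = alpha * (p @ A) + (1 - alpha) * p0
        cooccurrence += p
    return cooccurrence
```

Because the recurrence is computed in closed form over the transition matrix, it avoids the sampling variance of truncated random walks, which is the advantage the summary attributes to the direct approach.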
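Following the Levy and Goldberg (2014) connection, the co-occurrence matrix can then be converted into a positive PMI matrix in closed form, PMI(w, c) = log( #(w,c) * |D| / (#(w) * #(c)) ), rather than by training skip-gram with negative sampling. A minimal sketch of that transformation (the function name is illustrative):

```python
import numpy as np

def ppmi(cooccurrence):
    """Positive pointwise mutual information:
    PMI(w, c) = log( #(w,c) * |D| / (#(w) * #(c)) ), clipped at zero."""
    total = cooccurrence.sum()                        # |D|
    row = cooccurrence.sum(axis=1, keepdims=True)     # #(w)
    col = cooccurrence.sum(axis=0, keepdims=True)     # #(c)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(cooccurrence * total / (row * col))
    pmi = np.nan_to_num(pmi, nan=0.0, neginf=0.0)     # zero out empty cells
    return np.maximum(pmi, 0.0)                       # keep positive associations only
```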
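For the dimension-reduction step, the sketch below shows one denoising-autoencoder layer with greedy layer-wise training in PyTorch; the layer sizes, corruption rate, optimizer, and epoch count are illustrative assumptions, not values from the paper. Each layer zeroes out a random fraction of its input, learns to reconstruct the clean input, and hands its hidden code to the next layer in the stack.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One layer of a stacked denoising autoencoder (sketch)."""
    def __init__(self, dim_in, dim_hidden, noise=0.2):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(dim_hidden, dim_in)  # linear output for real-valued PPMI rows

    def forward(self, x):
        # Corrupt the input by randomly zeroing entries, then reconstruct it.
        corrupted = x * (torch.rand_like(x) > self.noise).float()
        hidden = self.encoder(corrupted)
        return self.decoder(hidden), hidden

def train_layer(dae, data, epochs=50, lr=1e-3):
    """Greedy layer-wise training: reconstruct the uncorrupted input,
    then return the hidden codes for the next layer."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = dae(data)
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, hidden = dae(data)
    return hidden

# Stack two layers on top of a PPMI matrix (sizes illustrative; `ppmi_matrix`
# is a hypothetical NumPy array produced by the steps sketched above).
# x = torch.from_numpy(ppmi_matrix).float()
# for dim in (512, 128):
#     layer = DenoisingAutoencoder(x.shape[1], dim)
#     x = train_layer(layer, x)
# After the loop, x holds the low-dimensional vertex representations.
```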