The paper introduces the Variational Graph Auto-Encoder (VGAE), a framework for unsupervised learning on graph-structured data. VGAE uses latent variables to learn interpretable representations of undirected graphs, pairing a graph convolutional network (GCN) encoder with a simple inner-product decoder. Concretely, the inference model is parameterized by a two-layer GCN, the generative model reconstructs the adjacency matrix from inner products between latent variables, and training proceeds by optimizing the variational lower bound. On link prediction tasks in citation networks, VGAE achieves competitive results, and incorporating node features significantly improves predictive accuracy across the benchmark datasets. The authors suggest future work on better-suited prior distributions and more flexible generative models, as well as improving scalability with stochastic gradient descent.
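To make the encoder/decoder structure and the training objective concrete, here is a minimal sketch of a VGAE-style model in PyTorch. It is an illustrative assumption rather than the authors' implementation: layer sizes, the toy graph, the use of reconstruction targets with self-loops, and helper names such as `normalize_adj` and `vgae_loss` are choices made for this example.

```python
# Minimal VGAE-style sketch (assumption: PyTorch; sizes and toy data are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetrically normalize A with self-loops: D^-1/2 (A + I) D^-1/2."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class VGAE(nn.Module):
    def __init__(self, in_dim, hidden_dim=32, latent_dim=16):
        super().__init__()
        # Two-layer GCN encoder: shared first layer, then separate weight
        # matrices producing the mean and log-variance of q(Z | X, A).
        self.w0 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w_mu = nn.Linear(hidden_dim, latent_dim, bias=False)
        self.w_logvar = nn.Linear(hidden_dim, latent_dim, bias=False)

    def encode(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w0(x))
        return adj_norm @ self.w_mu(h), adj_norm @ self.w_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        # Inner-product decoder: p(A_ij = 1 | z_i, z_j) = sigmoid(z_i^T z_j)
        return torch.sigmoid(z @ z.t())

    def forward(self, x, adj_norm):
        mu, logvar = self.encode(x, adj_norm)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vgae_loss(adj_rec, adj_target, mu, logvar):
    # Variational lower bound: reconstruction term plus KL(q(Z | X, A) || N(0, I)).
    recon = F.binary_cross_entropy(adj_rec, adj_target)
    kl = -0.5 / adj_target.size(0) * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return recon + kl


if __name__ == "__main__":
    # Toy undirected 4-node graph; identity features mimic the featureless case.
    adj = torch.tensor([[0., 1., 1., 0.],
                        [1., 0., 1., 0.],
                        [1., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    x = torch.eye(4)
    model = VGAE(in_dim=4)
    adj_norm = normalize_adj(adj)
    adj_rec, mu, logvar = model(x, adj_norm)
    print(vgae_loss(adj_rec, adj + torch.eye(4), mu, logvar))
```

In this sketch, replacing the identity features `x` with a node-feature matrix is what the summary refers to as incorporating input features; everything else in the model stays the same.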