GEOM-GCN: GEOMETRIC GRAPH CONVOLUTIONAL NETWORKS

Message-passing neural networks (MPNNs) have been successfully applied to representation learning on graphs in various real-world applications. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: they lose the structural information of nodes in neighborhoods, and they cannot capture long-range dependencies in disassortative graphs. To address these issues, the authors propose a novel geometric aggregation scheme for graph neural networks. The scheme maps a graph to a continuous latent space and uses geometric relationships in that space to build structural neighborhoods for aggregation. It also includes a bi-level aggregator that updates node features while maintaining permutation invariance. The scheme is implemented in graph convolutional networks as Geom-GCN, which performs transductive learning on graphs. Experimental results show that Geom-GCN achieves state-of-the-art performance on a wide range of open graph datasets.

The geometric aggregation scheme consists of three modules: node embedding, structural neighborhood, and bi-level aggregation. Node embedding maps nodes to a latent continuous space; structural neighborhood defines, for each node, a set of neighbors drawn from both the graph and the latent space; and bi-level aggregation updates node features using the geometric relationships among their latent positions. The scheme is permutation-invariant and can capture long-range dependencies in disassortative graphs because the latent-space neighborhood can include distant but structurally similar nodes.

The authors compare their approach with existing methods and show that Geom-GCN outperforms them on various graph datasets. They also analyze the performance of different embedding methods and show that combining different embedding spaces can improve performance. They further evaluate the time complexity of Geom-GCN and find that it is more computationally intensive than GCN and GAT.
Finally, they visualize the feature representations of nodes in the Cora dataset and show that Geom-GCN learns graph hierarchy through Poincaré embedding. The authors conclude that their approach effectively addresses the two major weaknesses of existing message-passing neural networks over graphs.
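The structural-neighborhood and bi-level-aggregation ideas summarized above can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' implementation: the quadrant-based relational operator, the Euclidean latent-radius rule, and all function names are simplifying assumptions; the paper's operator and embedding spaces may differ. The low level averages features within each (neighborhood, relation) virtual node, which is permutation-invariant; the high level combines the virtual nodes in a fixed order.

```python
import numpy as np

def geometric_relation(z_u, z_v):
    """Map a neighbor's latent position relative to z_u to one of four
    quadrants -- a toy stand-in for the paper's relational operator."""
    return (int(z_v[0] >= z_u[0]), int(z_v[1] >= z_u[1]))

def structural_neighborhood(v, adj, z, rho=1.0):
    """Neighbors in the graph plus latent-space neighbors within radius rho
    (the latent set is what lets distant nodes contribute)."""
    graph_nbrs = set(adj[v])
    latent_nbrs = {u for u in range(len(z))
                   if u != v and np.linalg.norm(z[u] - z[v]) < rho}
    return {"graph": graph_nbrs, "latent": latent_nbrs}

def bi_level_aggregate(v, h, adj, z, rho=1.0):
    """Low level: mean over each (neighborhood, relation) bucket, which is
    invariant to neighbor ordering. High level: concatenate the buckets."""
    relations = [(0, 0), (0, 1), (1, 0), (1, 1)]
    parts = []
    for _, nbrs in sorted(structural_neighborhood(v, adj, z, rho).items()):
        buckets = {r: [] for r in relations}
        for u in nbrs:
            buckets[geometric_relation(z[v], z[u])].append(h[u])
        for r in relations:
            vecs = buckets[r]
            parts.append(np.mean(vecs, axis=0) if vecs
                         else np.zeros_like(h[v]))
    # 2 neighborhoods x 4 relations, each a d-dim vector
    return np.concatenate(parts)
```

In a full layer this concatenated vector would pass through a learned linear transform and nonlinearity; stacking such layers is what yields the Geom-GCN architecture the summary describes.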