GATED GRAPH SEQUENCE NEURAL NETWORKS


22 Sep 2017 | Yujia Li, Daniel Tarlow, Marc Brockschmidt & Richard Zemel
Gated Graph Sequence Neural Networks (GGS-NNs) are a neural network model for graph-structured data. The model extends Graph Neural Networks (GNNs) by incorporating gated recurrent units into the propagation step and by supporting sequence outputs, which lets it handle tasks whose answers are sequences, such as paths on a graph or series of classifications. The model is trained end-to-end with backpropagation through time and can be adapted to a range of tasks, including program verification and learning graph algorithms.

The paper demonstrates the model's effectiveness on the bAbI reasoning tasks, on learned graph algorithms, and on program verification. In the verification application, the model infers logical formulas from graph representations of program memory states, a key step in proving properties such as memory safety. Nodes in these graphs represent memory addresses and edges represent pointers; per-node annotations track which nodes have already been processed, enabling the model to generate, step by step, formulas that describe the data structures in memory.

The paper also discusses related work, including other graph-based models and neural network architectures, and compares GGS-NNs against these approaches. It argues that the combination of gated propagation and sequential output makes GGS-NNs a flexible and effective model for problems over graph-structured data.
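To make the "GNN propagation with gated recurrent units" idea concrete, here is a minimal NumPy sketch of one propagation round, not the authors' implementation: each node aggregates neighbour states through the adjacency matrix, then updates its hidden state with a GRU-style gate. All names (`ggnn_propagation_step`, the toy graph, the weight shapes) and the single untyped adjacency matrix are illustrative assumptions; the paper uses separate parameters per edge type and direction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_propagation_step(h, A, params):
    """One gated propagation round (illustrative sketch).

    h: node states, shape (n, d); A: adjacency matrix, shape (n, n);
    params: six (d, d) weight matrices for the GRU-style update.
    """
    Wz, Uz, Wr, Ur, W, U = params
    a = A @ h                                # messages aggregated from neighbours
    z = sigmoid(a @ Wz + h @ Uz)             # update gate
    r = sigmoid(a @ Wr + h @ Ur)             # reset gate
    h_tilde = np.tanh(a @ W + (r * h) @ U)   # candidate new state
    return (1.0 - z) * h + z * h_tilde       # gated interpolation

rng = np.random.default_rng(0)
n, d = 4, 8                                  # toy graph: 4 nodes, hidden size 8
A = np.array([[0, 1, 0, 0],                  # directed 4-cycle (row receives from col)
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
params = [rng.normal(scale=0.1, size=(d, d)) for _ in range(6)]
h = rng.normal(scale=0.1, size=(n, d))
for _ in range(5):                           # T rounds of propagation
    h = ggnn_propagation_step(h, A, params)
print(h.shape)  # (4, 8)
```

After T such rounds, each node's state reflects information from its T-hop neighbourhood; an output model then reads these states off, and in the sequence setting the whole process is repeated once per output step.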