LiGNN: Graph Neural Networks at LinkedIn

August 25–29, 2024, Barcelona, Spain | Fedor Borisyuk, Shihai He, Yunbo Ouyang, Morteza Ramezani, Peng Du, Xiaochen Hou, Chengming Jiang, Nitin Pasumarthy, Priya Bannur, Birjodh Tiwana, Ping Liu, Siddharth Dangi, Daqi Sun, Zhoutao Pei, Xiao Shi, Sirou Zhu, Qianqi Shen, Kuang-Hsuan Lee, David Stein*, Baolei Li*, Haichao Wei, Amol Ghoting, Souvik Ghosh
This paper presents *LiGNN*, a large-scale Graph Neural Network (GNN) framework deployed at LinkedIn. The authors share their insights and experiences in developing and deploying GNNs at LinkedIn, focusing on algorithmic improvements to GNN representation learning. Key contributions include temporal graph architectures with long-term losses, as well as effective cold-start solutions via graph densification, ID embeddings, and multi-hop neighbor sampling. They detail how they accelerated large-scale training by 7x using adaptive neighbor sampling, grouping and slicing of training data batches, and specialized shared-memory queues. These techniques have led to significant improvements across applications, including job application hearing-back rates, Ads CTR, and the number of daily active users engaging with Feed recommendations. The paper also discusses the challenges of GNN training at scale: handling diverse entities, addressing cold-start issues, and managing a dynamic system. The authors share deployment lessons and learnings from A/B test experiments, providing practical solutions and insights for applying GNNs at large scale.
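To make the multi-hop neighbor sampling idea concrete, below is a minimal sketch in plain Python of layer-wise neighborhood sampling with per-hop fanouts, the basic primitive underlying both the multi-hop sampling and the adaptive variant mentioned above. The adjacency-dict graph representation, the function name `sample_multi_hop`, and the fanout values are illustrative assumptions, not LiGNN's actual implementation.

```python
import random

def sample_multi_hop(adj, seed_nodes, fanouts, rng=random):
    """Layer-wise neighbor sampling (illustrative sketch, not LiGNN's code).

    adj        : dict mapping node -> list of neighbor nodes
    seed_nodes : nodes whose embeddings we want to compute
    fanouts    : max neighbors to draw per node at each hop,
                 e.g. [2, 2] for a 2-hop neighborhood
    Returns one list of sampled nodes per hop.
    """
    layers = []
    frontier = list(seed_nodes)
    for fanout in fanouts:
        sampled = []
        for node in frontier:
            neighbors = adj.get(node, [])
            if not neighbors:
                continue
            # Sample without replacement, capped by the node's degree.
            k = min(fanout, len(neighbors))
            sampled.extend(rng.sample(neighbors, k))
        layers.append(sampled)
        frontier = sampled  # the next hop expands from this layer
    return layers

# Toy usage: a 2-hop sample around member "u1" in a tiny member-job graph.
adj = {"u1": ["j1", "j2", "j3"], "j1": ["u2", "u3"], "j2": ["u4"]}
print(sample_multi_hop(adj, ["u1"], fanouts=[2, 2]))
```

Bounding the fanout at each hop keeps the sampled neighborhood, and hence the cost of each training step, predictable; the adaptive sampling the authors credit with part of the 7x training speedup tunes such sampling parameters dynamically rather than fixing them up front.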