The paper "Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models" addresses the challenge of training a graph foundation model on a broad range of graph data from diverse domains. The authors propose MDGPT, a text-free multi-domain graph pre-training and adaptation framework designed to exploit multi-domain knowledge for graph learning. MDGPT introduces *domain tokens* to align features across source domains and *dual prompts* (a unifying prompt and a mixing prompt) to adapt the target domain with unified multi-domain knowledge and tailored domain-specific knowledge. Extensive experiments on six public datasets demonstrate that MDGPT outperforms prior art by up to 37.9%. The contributions of the work include proposing MDGPT, designing domain tokens for feature alignment, and introducing dual prompts for downstream adaptation. The paper also discusses related work, provides preliminaries, and presents experimental results, including one-shot performance evaluation and model insights.The paper "Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models" addresses the challenge of training a graph foundation model on a broad range of graph data from diverse domains. The authors propose MDGPT, a text-free multi-domain graph pre-training and adaptation framework designed to exploit multi-domain knowledge for graph learning. MDGPT introduces *domain tokens* to align features across source domains and *dual prompts* (a unifying prompt and a mixing prompt) to adapt the target domain with unified multi-domain knowledge and tailored domain-specific knowledge. Extensive experiments on six public datasets demonstrate that MDGPT outperforms prior art by up to 37.9%. The contributions of the work include proposing MDGPT, designing domain tokens for feature alignment, and introducing dual prompts for downstream adaptation. The paper also discusses related work, provides preliminaries, and presents experimental results, including one-shot performance evaluation and model insights.