GraphInstruct is a benchmark designed to evaluate and enhance the graph understanding and reasoning capabilities of large language models (LLMs). It comprises 21 classical graph reasoning tasks with diverse graph generation pipelines and detailed intermediate reasoning steps. Building on GraphInstruct, the authors propose GraphLM, an LLM with strong graph understanding capabilities, and GraphLM+, which further improves reasoning ability through a step mask training strategy. Extensive experiments show that GraphLM and GraphLM+ outperform other LLMs on graph reasoning tasks. The benchmark covers varied graph structures, sizes, and descriptions, enabling LLMs to better understand and reason about graph data. The study highlights the importance of graph reasoning for general intelligence and suggests that further research is needed to improve LLMs' ability to handle complex graph tasks. The code for generating GraphInstruct is publicly available.
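The summary mentions a "step mask training strategy" without detailing it. A common way to realize such an idea is to mask the training loss on some intermediate reasoning-step tokens, forcing the model to bridge the gap itself. The sketch below illustrates that general pattern only; the function name, the span representation, and the use of the conventional `-100` ignore index are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of loss masking over reasoning steps.
# Assumption: each training example's target tokens are segmented into
# reasoning steps, and masked steps get IGNORE_INDEX so common LM loss
# functions skip them. This is NOT the paper's verified implementation.

IGNORE_INDEX = -100  # ignore index used by typical cross-entropy losses

def apply_step_mask(labels, step_spans, masked_steps):
    """Return a copy of `labels` with the tokens of the chosen
    reasoning steps replaced by IGNORE_INDEX.

    labels       -- list of target token ids for one training example
    step_spans   -- list of (start, end) index pairs, one per step
    masked_steps -- indices of the steps whose loss is masked
    """
    masked = list(labels)
    for i in masked_steps:
        start, end = step_spans[i]
        for j in range(start, end):
            masked[j] = IGNORE_INDEX
    return masked

# Toy example: three reasoning steps of three tokens each;
# mask the loss on the middle step.
labels = list(range(9))
spans = [(0, 3), (3, 6), (6, 9)]
print(apply_step_mask(labels, spans, [1]))
# → [0, 1, 2, -100, -100, -100, 6, 7, 8]
```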