This paper explores the integration of negative examples in fine-tuning large language models (LLMs) for agent tasks, where LLMs interact with environments through tools. Traditional methods discard failed trajectories, leading to data wastage and limited optimization paths. The authors propose a Negative-Aware Training (NAT) paradigm, which adds prefixes or suffixes to trajectories to indicate whether they are positive or negative, allowing LLMs to learn from both successful and unsuccessful interactions. Experiments on mathematical reasoning, multi-hop question answering, and strategic question answering tasks show that NAT significantly improves model performance compared to methods that only use positive examples or naively combine positive and negative examples. The paper also analyzes the effectiveness of negative examples, finding that the quality of negative data is crucial for success. NAT is shown to be effective in various agent frameworks and reasoning strategies, including Chain-of-Thought prompting. The findings highlight the value of negative examples in agent tuning and provide guidance for developing better agent-tuning methods.
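To make the prefix/suffix idea concrete, the sketch below shows one way negative-aware training examples could be assembled for supervised fine-tuning. It is a minimal illustration under stated assumptions, not the paper's actual templates: the function name `make_nat_example`, the `QUALITY_PREFIX` wording, and the toy trajectories are all hypothetical.

```python
# Minimal sketch of negative-aware data construction (NAT-style).
# The marker text, function name, and example trajectories are assumptions
# for illustration, not the paper's exact prompt templates.

QUALITY_PREFIX = {
    True: "The following trajectory successfully solves the task.",
    False: "The following trajectory fails to solve the task.",
}


def make_nat_example(question: str, trajectory: str, is_positive: bool) -> dict:
    """Prepend a quality marker so the model can learn from both outcomes."""
    prompt = f"{QUALITY_PREFIX[is_positive]}\nTask: {question}"
    return {"prompt": prompt, "completion": trajectory}


if __name__ == "__main__":
    # One successful and one failed tool-use trajectory for the same question.
    pos = make_nat_example(
        "What is 12 * 7?", "Call calculator(12*7) -> 84. Answer: 84.", True
    )
    neg = make_nat_example(
        "What is 12 * 7?", "Call calculator(12+7) -> 19. Answer: 19.", False
    )
    for example in (pos, neg):
        print(example["prompt"])
        print(example["completion"])
        print("---")
```

In this framing, the marker tells the model which behavior the trajectory exemplifies, so failed interactions contribute training signal instead of being discarded; at inference time one would condition on the positive marker.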