AMOR is a modular knowledge agent built on open-source large language models (LLMs) that reasons over external knowledge and adapts to specific domains through process feedback. The agent structures its reasoning logic as a finite state machine (FSM), solving problems by executing disentangled modules and transitioning between them. Because the modules are disentangled, human feedback can be applied directly to individual modules, either as binary judgments or as refined outputs, enabling process-based supervision. AMOR is trained with a two-stage fine-tuning strategy: a warm-up stage that builds generalization across knowledge environments by constructing training data for each module separately from public datasets, and an adaptation stage that tailors the agent to a specific domain using process feedback. Extensive experiments across multiple domains show that AMOR outperforms strong baselines, and that process feedback improves the reasoning process more effectively than outcome feedback, confirming the value of the FSM-based reasoning logic as a structured yet adaptable framework for knowledge-intensive tasks. The code and data are publicly available at https://github.com/JianGuanTHU/AMOR.
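The FSM-style control flow described above, executing one module per state and transitioning based on its output, can be sketched as follows. The module names, state labels, and transition rules here are illustrative assumptions for exposition; the actual AMOR state set and modules are defined in the paper and repository.

```python
# Minimal sketch of FSM-based modular reasoning (hypothetical states/modules,
# not the exact AMOR design). Each module reads/writes a shared state dict
# and returns the label of the next FSM state.

def retrieve(state):
    # Hypothetical module: fetch candidate documents for the question.
    state["docs"] = [f"doc about {state['question']}"]
    return "JUDGE"

def judge(state):
    # Hypothetical module: binary judgment on document relevance.
    # In AMOR, process feedback could target exactly this module.
    state["relevant"] = bool(state["docs"])
    return "ANSWER" if state["relevant"] else "RETRIEVE"

def answer(state):
    # Hypothetical module: produce the final answer from retrieved docs.
    state["answer"] = f"Answer based on {len(state['docs'])} document(s)"
    return "DONE"

# Modules are disentangled: supervision (binary judgments or refined
# outputs) can be applied to any single entry in this table.
MODULES = {"RETRIEVE": retrieve, "JUDGE": judge, "ANSWER": answer}

def run_agent(question, start="RETRIEVE", max_steps=10):
    state = {"question": question}
    current = start
    for _ in range(max_steps):
        if current == "DONE":
            break
        current = MODULES[current](state)  # execute module, then transition
    return state.get("answer")

print(run_agent("What is AMOR?"))
```

The key property this sketch illustrates is that each reasoning step is an isolated, named module, so feedback on an intermediate decision (e.g., the relevance judgment) never needs to be attributed through an entangled chain of thought.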