28 Mar 2024 | Jing Wu*, Zhixin Lai*, Suiyao Chen*, Ran Tao, Pan Zhao, Naira Hovakimyan
The paper introduces an intelligent crop management system that integrates deep reinforcement learning (DRL), language models (LMs), and crop simulations via the Decision Support System for Agrotechnology Transfer (DSSAT). The system trains management policies with a deep Q-network (DQN) that processes state variables from the simulator as observations. A key innovation is converting these state variables into descriptive language, which enables the LM to understand crop states and explore optimal management practices. In maize simulations in Florida and Zaragoza, the LM-based agent demonstrates superior learning capability and achieves state-of-the-art performance, including up to a 49% improvement in economic profit with reduced environmental impact relative to baseline methods, and gains over existing approaches in crop yield, resource utilization, and environmental impact. The paper also examines the role of LMs in agricultural decision-making, highlighting their ability to process complex information and act as expert agronomists. The results indicate that LM-based RL agents optimize crop management strategies effectively, adapting to different reward functions and environmental conditions, and thereby address the core challenge of maximizing yield while minimizing costs and environmental impact. The framework is designed to be adaptable and robust to real-world uncertainties and measurement noise. The paper concludes that the proposed LM-based RL framework has significant potential for improving agricultural practices and sustainability.
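To make the state-to-language idea concrete, below is a minimal sketch of how simulator state variables might be rendered as a descriptive sentence and scored by a Q-head on top of an LM encoder. The state keys, the action set, the prompt template, and the DistilBERT encoder are illustrative assumptions for this sketch, not the paper's actual implementation or the DSSAT interface.

```python
# Sketch: DSSAT-style state variables -> language description -> Q-values
# over discrete management actions. All names here are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel

def state_to_text(state: dict) -> str:
    """Render simulator state variables as a descriptive sentence."""
    return (
        f"Day {state['day']} after planting. "
        f"Cumulative nitrogen applied: {state['n_applied_kg_ha']} kg/ha. "
        f"Soil water content: {state['soil_water_mm']} mm. "
        f"Leaf area index: {state['lai']:.2f}. "
        f"Recent rainfall: {state['rain_mm']} mm."
    )

class LMQNetwork(torch.nn.Module):
    """LM encoder followed by a small Q-value head, one output per action."""
    def __init__(self, n_actions: int, encoder_name: str = "distilbert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.q_head = torch.nn.Linear(self.encoder.config.hidden_size, n_actions)

    def forward(self, texts: list[str]) -> torch.Tensor:
        batch = self.tokenizer(texts, padding=True, return_tensors="pt")
        hidden = self.encoder(**batch).last_hidden_state[:, 0]  # first-token pooling
        return self.q_head(hidden)

# Example: greedily pick a fertilization action (kg N/ha to apply today).
actions = [0, 20, 40, 80]
state = {"day": 45, "n_applied_kg_ha": 60, "soil_water_mm": 310,
         "lai": 2.75, "rain_mm": 12}
qnet = LMQNetwork(n_actions=len(actions))
with torch.no_grad():
    q_values = qnet([state_to_text(state)])
print("chosen N rate:", actions[int(q_values.argmax())], "kg/ha")
```

In a full DQN loop, the Q-head (and optionally the encoder) would be updated from simulated episodes against a reward balancing yield, input costs, and environmental impact; the snippet above only illustrates how a language description of the state can stand in for a raw numeric observation vector.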