TextGrad: Automatic "Differentiation" via Text

11 Jun 2024 | Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, James Zou
TEXTGRAD is an automatic "differentiation" framework that uses text feedback from large language models (LLMs) to optimize complex AI systems. Inspired by backpropagation in neural networks, TEXTGRAD represents an AI system as a computation graph whose variables are the inputs and outputs of complex functions. LLMs provide natural-language feedback, "textual gradients," describing how each variable should be modified to improve the system; this feedback is then propagated through the graph to drive optimization. TEXTGRAD follows PyTorch's syntax and abstractions, making it flexible and easy to use, and it works out of the box across tasks: users only provide the objective function, without tuning components or prompts.

TEXTGRAD has been demonstrated across diverse applications, including question answering, molecule optimization, and radiotherapy treatment planning. It improves the zero-shot accuracy of GPT-4o on Google-Proof Question Answering from 51% to 55%, achieves a 20% relative performance gain on LeetCode-Hard coding problems, designs new drug-like small molecules with desirable in silico binding, and optimizes radiation oncology treatment plans with high specificity.

The framework supports both instance optimization (refining a specific solution, such as a code snippet or a molecule) and prompt optimization (improving an LLM's performance by updating its prompts), as well as multi-objective optimization in settings such as drug discovery and radiotherapy treatment planning.
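To make the PyTorch analogy concrete, the sketch below shows an instance-optimization loop in the style of the project's public examples: a variable holds an LLM's answer, a text-based loss critiques it, `loss.backward()` produces textual gradients, and the optimizer rewrites the answer. This is a minimal sketch modeled on TextGrad's documented API (`tg.Variable`, `tg.BlackboxLLM`, `tg.TextLoss`, `tg.TGD`); exact signatures may differ across library versions, and the question and evaluation wording here are illustrative.

```python
import textgrad as tg

# The LLM that generates textual gradients during backward passes.
tg.set_backward_engine("gpt-4o", override=True)

# Forward pass: ask a black-box LLM a question.
model = tg.BlackboxLLM("gpt-4o")
question = tg.Variable(
    "If it takes 1 hour to dry 25 shirts under the sun, how long does it "
    "take to dry 30 shirts? Reason step by step.",
    role_description="question to the LLM",
    requires_grad=False,  # we optimize the answer, not the question
)
answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# Loss: an LLM-produced critique of the answer, expressed in natural language.
loss_fn = tg.TextLoss(
    "Evaluate the given answer to this question. Be smart, critical, and creative."
)
optimizer = tg.TGD(parameters=[answer])

loss = loss_fn(answer)
loss.backward()   # propagate textual gradients (natural-language feedback)
optimizer.step()  # rewrite `answer` using the accumulated feedback
print(answer.value)
```

Note how the loop mirrors PyTorch exactly (loss, backward, step), except that "gradients" are critiques in text and the "optimizer step" is an LLM rewriting the variable's value.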
TEXTGRAD's ability to automatically optimize AI systems through text feedback has the potential to accelerate the development of the next generation of AI systems. The framework is open-sourced at https://github.com/zou-group/textgrad.
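For the prompt-optimization mode mentioned above, the same machinery applies with the prompt itself as the trainable variable. The following is a hypothetical mini training loop under assumed details: `train_set` (a list of question/reference-answer pairs), the evaluation wording, and the reference-answer handling are illustrative assumptions, not the library's canonical recipe.

```python
import textgrad as tg

tg.set_backward_engine("gpt-4o", override=True)

# The system prompt is the parameter being optimized.
system_prompt = tg.Variable(
    "You are a concise assistant that answers reasoning questions.",
    role_description="system prompt for the task",
    requires_grad=True,
)
model = tg.BlackboxLLM("gpt-4o", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

# `train_set` is a hypothetical list of (question_text, reference) pairs.
for question_text, reference in train_set:
    question = tg.Variable(question_text,
                           role_description="question to the LLM",
                           requires_grad=False)
    prediction = model(question)
    loss_fn = tg.TextLoss(
        f"The correct answer is: {reference}. "
        "Critique the given answer for correctness and clarity."
    )
    loss = loss_fn(prediction)
    loss.backward()        # feedback flows back to the system prompt
    optimizer.step()       # update the prompt text
    optimizer.zero_grad()  # clear feedback before the next example
```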