REGAL: Refactoring Programs to Discover Generalizable Abstractions

2024 | Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal
REGAL is a gradient-free method that learns reusable functions through code refactoring, discovering abstractions that generalize across programs. It iteratively refactors existing programs to build a library of helper functions, each of which is verified and refined through execution. Given only a small set of primitive programs and an execution environment, REGAL can learn from LLM-generated programs without human annotations.

The learned helpers capture shared subroutines and environment dynamics, allowing large language models (LLMs) to generate more accurate and concise programs. These abstractions are reusable across examples and domains: REGAL improves program-prediction accuracy on LOGO graphics generation, date reasoning, TextCraft, MATH, and TabMWP, outperforming baselines such as GPT-3.5 on several tasks, with the largest gains for open-source models. Abstractions are validated through iterative refinement, and the library can adapt to distribution shifts by pruning or retraining helpers. These results highlight the importance of abstraction in program synthesis and make REGAL a valuable tool for improving LLM performance across diverse domains.
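The refactor-and-verify loop described above can be sketched minimally. This is an illustrative sketch, not the authors' implementation: the LLM step that proposes candidate helpers is replaced by a fixed, hypothetical candidate (`inc_then_double`), and programs are plain Python source strings executed directly.

```python
# Sketch of a REGAL-style refactor-and-verify loop (illustrative only).
# An LLM would propose candidate helpers; here one candidate is hard-coded.

def run(program: str, env: dict) -> dict:
    """Execute a program and return its resulting (non-function) variables."""
    scope = dict(env)
    exec(program, scope)
    return {k: v for k, v in scope.items()
            if not k.startswith("__") and not callable(v)}

def verify(original: str, refactored: str, helpers: str, env: dict) -> bool:
    """Keep a refactoring only if it reproduces the original's behavior."""
    try:
        return run(original, env) == run(helpers + "\n" + refactored, env)
    except Exception:
        return False  # failed refactorings are discarded, not added

# A "primitive" program whose steps recur across tasks...
original = "x = 2\nx = x + 1\nx = x * 2\n"
# ...and a candidate helper abstracting the shared subroutine.
helper = "def inc_then_double(v):\n    return (v + 1) * 2\n"
refactored = "x = 2\nx = inc_then_double(x)\n"

library = {}
if verify(original, refactored, helper, {}):
    library["inc_then_double"] = helper  # verified: add to the helper library
```

In the full method this loop runs over batches of programs, and helpers that stop verifying under a distribution shift would be pruned from the library.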