InterPreT is an interactive framework that enables robots to learn symbolic predicates and operators from human language feedback during embodied interaction. The framework leverages Large Language Models (LLMs) like GPT-4 to generate Python functions for predicates, which are then refined iteratively based on human feedback. These predicates and operators are compiled into a Planning Domain Definition Language (PDDL) domain, allowing the robot to plan effectively for arbitrary in-domain goals using a PDDL planner. The effectiveness of InterPreT is demonstrated in both simulated and real-world robot manipulation domains, where it reliably uncovers key predicates and operators governing the environment dynamics. Despite being trained on simple tasks, InterPreT exhibits strong generalization to novel tasks with significantly higher complexity, achieving success rates of 73% in simulation and 40% in the real world, outperforming baseline methods. The framework's ability to learn from rich human language feedback and its robustness to varied feedback make it a promising approach for achieving human-like planning proficiency in robotics.
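To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of Python predicate function an LLM might generate in this setting. All names (`ObjState`, `is_on`, the tolerance values) are illustrative assumptions, not taken from InterPreT itself; the point is that each such function maps raw object state to a truth value that grounds one symbolic PDDL atom.

```python
# Hypothetical sketch: an LLM-generated predicate mapping continuous
# object state to a boolean, as the abstract describes. Names and
# thresholds are illustrative assumptions, not from the paper.

from dataclasses import dataclass

@dataclass
class ObjState:
    x: float
    y: float
    z: float       # center height
    height: float  # object extent along z

def is_on(top: ObjState, bottom: ObjState, xy_tol: float = 0.03) -> bool:
    """True if `top` rests on `bottom`: aligned in the xy-plane and
    stacked at roughly the expected z offset."""
    aligned = (abs(top.x - bottom.x) <= xy_tol
               and abs(top.y - bottom.y) <= xy_tol)
    expected_z = bottom.z + (bottom.height + top.height) / 2
    stacked = abs(top.z - expected_z) <= 0.01
    return aligned and stacked

# Each such predicate grounds a PDDL atom, e.g. (on block_a block_b).
# Human feedback such as "the blocks are not actually touching" would
# prompt the LLM to rewrite the function or adjust its thresholds.
```

Iterative refinement then amounts to regenerating or patching functions like this until the predicate's truth values match what the human observes in the scene.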