The paper introduces Readi, a novel framework that enables Large Language Models (LLMs) to efficiently and faithfully reason over structured environments, such as knowledge graphs and tables. Readi addresses the challenge of multi-hop reasoning by allowing LLMs to initially generate a reasoning path and then edit it only when necessary. The framework leverages in-context learning to generate the initial reasoning path and uses feedback from the environment to refine it. Experimental results on multiple datasets, including WebQSP, MQA-3H, and WTQ, demonstrate that Readi significantly outperforms existing LLM-based methods and fine-tuned models, achieving up to 14.9% improvement in accuracy. The paper also provides detailed analysis of Readi's components, highlighting its effectiveness in reasoning path generation and editing, and its efficiency in reducing LLM calls.
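The generate-then-edit loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the knowledge graph is a toy dict of `(head, relation) -> tails`, and `generate_path` / `edit_path` are stand-ins for the LLM calls (here replaced with a fixed path and a simple relation swap). The point is the control flow: instantiate the path on the environment, and invoke the editor only when instantiation fails.

```python
# Hypothetical sketch of Readi-style reason-then-edit over a toy KG.
# All names and helpers are illustrative assumptions, not the paper's API.

def generate_path(question):
    # Stand-in for LLM in-context path generation.
    return ["spouse", "birthplace"]  # guessed relation names

def instantiate(path, graph, start):
    """Walk `path` over `graph` from `start`; return (answers, feedback)."""
    frontier = {start}
    for i, relation in enumerate(path):
        nxt = set()
        for node in frontier:
            nxt.update(graph.get((node, relation), []))
        if not nxt:
            # Environment feedback: which hop failed and which relations exist.
            candidates = sorted({r for (n, r) in graph if n in frontier})
            return None, {"failed_hop": i, "candidates": candidates}
        frontier = nxt
    return frontier, None

def edit_path(path, feedback):
    # Stand-in for LLM editing: swap the failed relation for a valid candidate.
    fixed = list(path)
    fixed[feedback["failed_hop"]] = feedback["candidates"][0]
    return fixed

def readi(question, graph, start, max_edits=3):
    path = generate_path(question)
    for _ in range(max_edits + 1):
        answers, feedback = instantiate(path, graph, start)
        if answers is not None:  # path grounded successfully: no edit needed
            return answers, path
        path = edit_path(path, feedback)
    return set(), path

# Toy KG where the true relations differ from the LLM's first guess,
# so each hop triggers exactly one edit before the path grounds.
kg = {
    ("Alice", "married_to"): ["Bob"],
    ("Bob", "born_in"): ["Paris"],
}
answers, final_path = readi("Where was Alice's spouse born?", kg, "Alice")
```

Note the efficiency property the paper emphasizes: when the initial path instantiates cleanly, the loop returns immediately and no editing call is made at all.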