RA-ISF is a retrieval-augmented framework that iteratively decomposes a task across three submodules to strengthen the model's problem-solving ability: a Self-Knowledge Module, a Passage Relevance Module, and a Question Decomposition Module. The Self-Knowledge Module first determines whether the question can be answered from the model's internal knowledge. If not, the Passage Relevance Module assesses whether the retrieved passages are relevant to the question and filters out those that are not. If no relevant passages remain, the Question Decomposition Module breaks the question into sub-questions, each of which is fed back through the same pipeline. By filtering out irrelevant information and iteratively refining the problem, this process improves the handling of complex questions and reduces hallucinations.

Experiments on models such as GPT-3.5 and Llama 2 show that RA-ISF outperforms existing methods, yielding stronger factual reasoning and fewer hallucinations. The framework is evaluated on NQ, TriviaQA, StrategyQA, HotpotQA, and 2WikiMQA, where it significantly improves over baselines such as standard RAG, direct prompting, and Least-to-Most prompting. It also remains effective on smaller LLMs and mitigates the impact of irrelevant retrieved text. The method is efficient, with a controlled iteration threshold and streamlined retrieval and decomposition steps. Overall, RA-ISF enhances the model's ability to answer complex questions by integrating external knowledge and iteratively refining the problem-solving process.
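The control flow of the three submodules can be summarized as a recursive loop. The sketch below is a minimal illustration, not the authors' implementation: the helper callables (answer_from_self_knowledge, retrieve_passages, is_relevant, decompose_question, answer_with_context, combine_answers) and the max_depth parameter are hypothetical placeholders that would be backed by LLM prompts and a retriever in practice.

```python
# Minimal sketch of the RA-ISF control loop (assumed structure, not the official code).
from typing import Callable, List, Optional

def ra_isf(
    question: str,
    answer_from_self_knowledge: Callable[[str], Optional[str]],  # hypothetical: Self-Knowledge Module
    retrieve_passages: Callable[[str], List[str]],                # hypothetical: retriever
    is_relevant: Callable[[str, str], bool],                      # hypothetical: Passage Relevance Module
    decompose_question: Callable[[str], List[str]],               # hypothetical: Question Decomposition Module
    answer_with_context: Callable[[str, List[str]], str],         # hypothetical: answer generator
    combine_answers: Callable[[str, List[str]], str],             # hypothetical: sub-answer aggregator
    depth: int = 0,
    max_depth: int = 3,                                           # assumed iteration threshold
) -> str:
    # Stop recursing once the iteration threshold is reached; fall back to a direct answer.
    if depth >= max_depth:
        return answer_with_context(question, [])

    # 1. Self-Knowledge Module: answer directly if internal knowledge suffices.
    direct_answer = answer_from_self_knowledge(question)
    if direct_answer is not None:
        return direct_answer

    # 2. Passage Relevance Module: keep only retrieved passages judged relevant.
    passages = retrieve_passages(question)
    relevant = [p for p in passages if is_relevant(question, p)]
    if relevant:
        return answer_with_context(question, relevant)

    # 3. Question Decomposition Module: split into sub-questions and recurse on each.
    sub_questions = decompose_question(question)
    sub_answers = [
        ra_isf(sq, answer_from_self_knowledge, retrieve_passages, is_relevant,
               decompose_question, answer_with_context, combine_answers,
               depth + 1, max_depth)
        for sq in sub_questions
    ]
    return combine_answers(question, sub_answers)
```

The recursion depth plays the role of the controlled iteration threshold mentioned above: once it is exceeded, the loop stops decomposing and produces a best-effort answer, which keeps the overall cost bounded.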