This paper presents a novel method for evaluating hallucination in large language models (LLMs) through question answering (QA) tasks based on unanswerable math word problems (MWPs). The authors develop a dataset called Unanswerable Math Word Problem (UMWP), consisting of 5,200 questions across five categories; half of the questions are unanswerable. They propose an evaluation methodology that combines text similarity and mathematical expression detection to assess whether an LLM considers a question unanswerable. Extensive experiments on 31 LLMs, including GPT-3, InstructGPT, LLaMA, and Claude, demonstrate that in-context learning and reinforcement learning from human feedback (RLHF) significantly enhance the models' ability to avoid hallucination. The results show that unanswerable MWPs provide a reliable and effective way to assess hallucination in LLMs. The paper also discusses the impact of model size, input form, and RLHF on hallucination mitigation. The code and data are available at <https://github.com/Yuki-Asuna/UMWP>.
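
To make the evaluation idea concrete, here is a minimal sketch of how a response could be judged as "model considers the question unanswerable" by combining text similarity with mathematical expression detection. The template phrases, the regex, the similarity measure (stdlib `difflib`), and the threshold are all assumptions for illustration, not the paper's actual implementation.

```python
import re
from difflib import SequenceMatcher

# Hypothetical refusal phrases an LLM might use for an unanswerable question.
# The paper's actual template set is not reproduced here.
UNANSWERABLE_TEMPLATES = [
    "the question cannot be answered",
    "there is not enough information",
    "the problem is unanswerable",
    "missing information needed to solve",
]

# Rough pattern for a numeric answer or arithmetic expression in the output.
MATH_EXPR_PATTERN = re.compile(r"\d+(\.\d+)?(\s*[-+*/=]\s*\d+(\.\d+)?)+|\d+(\.\d+)?")


def text_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; a stand-in for the paper's similarity measure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def judged_unanswerable(model_output: str, sim_threshold: float = 0.6) -> bool:
    """Return True if the output resembles a refusal template and does not
    commit to a concrete mathematical answer."""
    contains_math = bool(MATH_EXPR_PATTERN.search(model_output))
    max_sim = max(text_similarity(model_output, t) for t in UNANSWERABLE_TEMPLATES)
    return max_sim >= sim_threshold and not contains_math


if __name__ == "__main__":
    print(judged_unanswerable("There is not enough information to solve this problem."))  # True
    print(judged_unanswerable("The answer is 3 + 4 = 7 apples."))                         # False
```

In this sketch, a response on an unanswerable question counts as non-hallucinated only when it matches a refusal template closely and produces no mathematical expression; the reported metric would then aggregate these judgments across the UMWP questions.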