This study investigates the impact of sampling temperature on the performance of Large Language Models (LLMs) across a range of problem-solving tasks. The study uses a multiple-choice question-and-answer (MCQA) exam built by randomly sampling problems from standard LLM benchmarks. Nine popular LLMs and five prompt-engineering techniques are used to solve the MCQA problems while the sampling temperature is varied from 0.0 to 1.6. The results indicate that changes in temperature from 0.0 to 1.0 have no statistically significant impact on LLM performance for problem-solving tasks, and this finding generalizes across LLMs, prompt-engineering techniques, and problem domains. The study therefore recommends setting the sampling temperature to 0.0 for problem-solving tasks: this maximizes reproducibility and avoids the performance drops observed beyond a temperature of 1.0, although exceptions may exist for specific LLMs, prompt-engineering techniques, or problem domains. The research offers practical guidance for AI engineers and theoretical insights for researchers studying model hallucination and solution-space search with LLMs. Future research could explore additional LLMs, broader problem-solving tasks, and extended temperature ranges.
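For readers unfamiliar with the mechanism under study, the following is a minimal sketch of how sampling temperature rescales a model's next-token distribution. It is an illustrative example, not code from the study; the logit values are hypothetical, and T=0.0 is treated as greedy (argmax) decoding, the deterministic setting the study recommends.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits rescaled by a temperature.

    temperature == 0.0 is treated as greedy (argmax) decoding.
    """
    if temperature == 0.0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.5]        # hypothetical next-token logits
for t in (0.0, 0.5, 1.0, 1.6):  # temperatures spanning the study's range
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    counts = np.bincount(picks, minlength=len(logits))
    print(f"T={t}: token frequencies {counts.tolist()}")
```

As the example shows, lower temperatures sharpen the distribution toward the highest-logit token (fully deterministic at 0.0), while higher temperatures flatten it toward uniform, which is why T=0.0 maximizes reproducibility.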