This paper explores the feasibility of *LLM routing*, a method for efficiently selecting the most suitable single Large Language Model (LLM) for each input query. The authors propose *LLM routing* to improve performance on challenging reasoning tasks by leveraging the complementary capabilities of multiple LLMs. Extensive experiments on two benchmarks (GSM8K and MMLU) with seven open-source LLMs suggest that while routing shows promise, it is not feasible in all scenarios. The paper discusses the limitations of the approach, including the need for more robust routing methods and larger datasets. The authors also derive theoretical upper bounds on the routing model's performance and compare them with the performance of the individual LLMs. The findings indicate that the routing model outperforms the weaker LLMs but performs on par with, or slightly below, the top-performing LLMs, possibly due to limited training data. The paper concludes by highlighting LLM routing as a promising direction for efficient LLM usage, with future research directions including larger datasets, better routing policies, and scaling to more diverse LLMs and benchmarks.