This paper presents LARMOR, an unsupervised approach to dense retriever selection that leverages large language models (LLMs) to generate pseudo-relevant queries, labels, and reference lists from a subset of the target corpus. The method requires no training corpora or test labels; the target corpus alone is used to select the most effective dense retriever. Evaluated against a large pool of state-of-the-art dense retrievers, LARMOR outperforms existing baselines in both retriever selection and ranking, achieving strong performance across multiple collections. The paper also examines the shortcomings of existing selection methods, particularly under domain shift. Its key contributions are the introduction of LARMOR, an evaluation over a large set of dense retrievers, and an analysis of the factors that affect its effectiveness, such as the type and size of the LLMs used and the number of queries generated per document. The paper further explores applying LARMOR to other IR models, such as re-rankers and sparse models. The results demonstrate that LARMOR is highly effective at selecting the most suitable dense retriever for a target corpus, even when no queries or relevance labels are available.
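To make the selection loop concrete, the following is a minimal Python sketch of one part of such a pipeline: generating pseudo-queries from sampled documents and scoring candidate retrievers against them. The callable `llm`, the `search(query, top_k)` retriever interface, and the use of each query's source document as its sole pseudo-relevant label are illustrative assumptions, not LARMOR's actual implementation; the paper's full method also produces LLM-generated relevance judgments and reference lists, which are omitted here.

```python
import math
from collections import defaultdict

def generate_pseudo_queries(llm, document, n_queries=3):
    """Ask the LLM to write n_queries search queries that `document` answers.
    `llm` is a hypothetical callable mapping a prompt string to a query string."""
    prompt = (
        "Write a search query that the following passage answers:\n"
        f"{document}"
    )
    return [llm(prompt) for _ in range(n_queries)]

def ndcg_at_k(ranked_doc_ids, relevant_doc_id, k=10):
    """Binary-relevance nDCG@k with a single pseudo-relevant document,
    so the ideal DCG is 1.0 (the relevant document at rank 1)."""
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id == relevant_doc_id:
            return 1.0 / math.log2(rank + 1)
    return 0.0

def select_retriever(retrievers, corpus_sample, llm, n_queries=3, k=10):
    """Score each candidate retriever on pseudo-queries, treating each
    query's source document as the pseudo-relevant label, and return
    the name of the highest-scoring retriever."""
    scores = defaultdict(float)
    for doc_id, text in corpus_sample.items():
        for query in generate_pseudo_queries(llm, text, n_queries):
            for name, retriever in retrievers.items():
                # `search` is assumed to return a ranked list of doc ids.
                ranked = retriever.search(query, top_k=k)
                scores[name] += ndcg_at_k(ranked, doc_id, k)
    return max(scores, key=scores.get)
```

Summing nDCG over all pseudo-queries preserves the ranking of retrievers (the divisor for a mean is constant), so the sketch skips normalization when picking the winner.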