Panacea is an approach to aligning large language models (LLMs) with diverse and complex human preferences. It reframes alignment as a multi-dimensional preference optimization (MDPO) problem, aiming to train a single model that adapts Pareto-optimally to any set of preferences without additional tuning. The key challenge is steering the model's behavior with a low-dimensional preference vector; Panacea addresses this by embedding the preference vector into the singular values of an SVD-based low-rank adaptation (LoRA) layer. The model can then be trained end-to-end with a joint objective function aggregated according to the preference vector. Theoretically, Panacea is proven to recover the entire Pareto front under mild conditions. Empirically, it outperforms baseline methods in Pareto optimality, scalability, and efficiency, achieving superior and more uniformly distributed Pareto fronts. Panacea is the first approach to effectively align a single LLM with exponentially many heterogeneous preferences, marking a significant advance in AI alignment.
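
The core mechanism, injecting the preference vector into the singular values of an SVD-style LoRA update and training against a preference-weighted joint objective, can be illustrated with a minimal sketch. The class name `PreferenceSVDLoRALinear`, the `pref_scale` constant, the split between learned and preference-driven singular values, and the stand-in losses below are illustrative assumptions, not Panacea's exact implementation.

```python
import torch
import torch.nn as nn

class PreferenceSVDLoRALinear(nn.Module):
    """Sketch of an SVD-style LoRA layer whose diagonal (singular values) is
    partly learned and partly filled by a user-supplied preference vector."""

    def __init__(self, base: nn.Linear, rank: int, pref_dim: int, pref_scale: float = 1.0):
        super().__init__()
        self.base = base  # frozen pretrained weight W
        for p in self.base.parameters():
            p.requires_grad_(False)
        out_f, in_f = base.out_features, base.in_features
        # Learnable low-rank factors; U starts at zero so the update ΔW starts at zero.
        self.U = nn.Parameter(torch.zeros(out_f, rank + pref_dim))
        self.V = nn.Parameter(torch.randn(rank + pref_dim, in_f) * 0.01)
        # Learned singular values for the rank "core"; the remaining pref_dim
        # diagonal entries are filled with the (scaled) preference vector.
        self.sigma = nn.Parameter(torch.ones(rank))
        self.pref_scale = pref_scale
        self.register_buffer("pref", torch.full((pref_dim,), 1.0 / pref_dim))

    def set_preference(self, pref: torch.Tensor) -> None:
        """Inject a preference vector (non-negative, summing to 1) without retraining."""
        self.pref = pref.to(self.pref)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Diagonal = [learned singular values | scaled preference vector].
        diag = torch.cat([self.sigma, self.pref_scale * self.pref])
        delta_w = self.U @ torch.diag(diag) @ self.V  # low-rank update ΔW
        return self.base(x) + x @ delta_w.T


# Usage sketch: the same preference vector also weights the joint objective
# (linear scalarization across two dimensions, e.g. helpfulness vs. harmlessness).
layer = PreferenceSVDLoRALinear(nn.Linear(16, 16), rank=4, pref_dim=2)
pref = torch.tensor([0.7, 0.3])
layer.set_preference(pref)
y = layer(torch.randn(8, 16))
loss_helpful, loss_harmless = y.pow(2).mean(), y.abs().mean()  # stand-in per-dimension losses
joint_loss = pref[0] * loss_helpful + pref[1] * loss_harmless
```

At deployment time, sweeping `set_preference` over different vectors traces out different points on the learned Pareto front from a single set of weights, which is what lets one model serve exponentially many preference combinations.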