The paper introduces Rewards-in-Context (RIC), a method for aligning foundation models with human preferences in a multi-objective setting. RIC conditions the response of a foundation model on multiple rewards in its prompt context and applies supervised fine-tuning for alignment. The key features of RIC are simplicity and adaptivity: it requires only supervised fine-tuning of a single foundation model and supports dynamic adjustment to user preferences at inference time. Inspired by the analytical solution of an abstracted convex optimization problem, RIC approaches the Pareto-optimal solution for multiple objectives. Empirical results demonstrate that RIC effectively aligns both Large Language Models (LLMs) and diffusion models to accommodate diverse rewards with far fewer GPU hours than multi-objective reinforcement learning (RL) baselines. The paper also provides detailed background on supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and multi-objective RLHF (MORLHF). The experimental setup and results on text generation and text-to-image tasks show that RIC outperforms baselines by achieving a superior empirical Pareto front.
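
To make the reward-conditioning idea concrete, below is a minimal sketch, not the authors' implementation, of how multiple reward scores might be embedded into the prompt for supervised fine-tuning and then set to user-preferred values at inference time. The tag format, reward names, and helper functions are illustrative assumptions.

```python
# Illustrative sketch of reward-in-context prompting. The tag format,
# reward names, and function names are assumptions, not the paper's
# exact implementation.

def build_training_prompt(instruction: str, rewards: dict) -> str:
    """Prepend the measured reward scores of an existing response to its
    instruction, so supervised fine-tuning teaches the model to condition
    its output on those scores."""
    reward_tags = " ".join(f"<{name}: {value:.1f}>" for name, value in rewards.items())
    return f"{reward_tags} {instruction}"


def build_inference_prompt(instruction: str, preferences: dict) -> str:
    """At inference, the user supplies desired reward values (e.g. near the
    maximum seen in training) to steer generation without retraining."""
    return build_training_prompt(instruction, preferences)


if __name__ == "__main__":
    # Training-time example: rewards were scored on an existing response.
    print(build_training_prompt(
        "Summarize the article.",
        {"helpfulness": 0.8, "harmlessness": 0.9},
    ))
    # Inference-time example: the user dials in a preferred trade-off.
    print(build_inference_prompt(
        "Summarize the article.",
        {"helpfulness": 1.0, "harmlessness": 0.7},
    ))
```

Under this reading, varying the in-context reward values at inference is what lets a single fine-tuned model trace out different points along the empirical Pareto front without additional training.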