This paper investigates the theoretical justification of multitask finetuning for adapting pretrained foundation models to downstream tasks with limited labels. The authors propose that finetuning on a diverse set of tasks related to the target task can improve performance compared to adapting the same pretrained model directly. They quantify the relationship between finetuning tasks and the target task using diversity and consistency metrics and propose a practical task selection algorithm. Theoretical analysis shows that, given a diverse set of related tasks, multitask finetuning reduces error on the target task. Empirical results confirm that the task selection algorithm effectively chooses related finetuning tasks, improving model performance on target tasks. The study provides new insights into the effective adaptation of foundation models to new tasks with limited labels. The code is available at https://github.com/OliverXUZY/Foudation-Model_Multitask.