This paper proposes a multiple-in-one (MiO) image restoration (IR) framework that addresses the challenges of training a single model to handle multiple IR tasks. The framework introduces two complementary strategies: sequential learning and prompt learning. Sequential learning tackles the difficulty of optimizing diverse objectives by training the model on individual tasks one after another rather than mixing them, while prompt learning helps the model adapt to different tasks by using prompts to guide its understanding of the specific degradation and to improve generalization.
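As a rough illustration of how these two strategies could work together, the PyTorch-style sketch below trains a toy restoration network on one task at a time while a learnable per-task prompt modulates its features. Everything here is an assumption made for illustration (the class and function names, the gating mechanism, the loss); it is not the paper's actual architecture or training code.

```python
import torch
import torch.nn as nn


class PromptedRestorationNet(nn.Module):
    """Toy restoration backbone conditioned on a learnable per-task prompt."""

    def __init__(self, num_tasks: int, channels: int = 64, prompt_dim: int = 64):
        super().__init__()
        self.prompts = nn.Embedding(num_tasks, prompt_dim)  # one learnable prompt per IR task
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.fuse = nn.Linear(prompt_dim, channels)          # map prompt to channel-wise modulation
        blocks = []
        for _ in range(4):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        self.body = nn.Sequential(*blocks)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lq, task_id):
        feat = self.head(lq)
        # The prompt gates the features so the shared backbone knows which task it serves.
        scale = self.fuse(self.prompts(task_id)).unsqueeze(-1).unsqueeze(-1)
        feat = feat * torch.sigmoid(scale)
        return self.tail(self.body(feat)) + lq               # residual restoration


def sequential_training(model, task_loaders, epochs_per_task=1, lr=1e-4):
    """Sequential learning: visit the tasks one after another instead of
    mixing every degradation into each mini-batch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for task_id, loader in enumerate(task_loaders):          # e.g. SR, deblur, denoise, ...
        for _ in range(epochs_per_task):
            for lq, gt in loader:                            # low-quality input, ground truth
                ids = torch.full((lq.size(0),), task_id, dtype=torch.long)
                loss = nn.functional.l1_loss(model(lq, ids), gt)
                opt.zero_grad()
                loss.backward()
                opt.step()
```

The toy model assumes input and output share the same resolution, so a super-resolution task would need its inputs pre-upsampled; this is a simplification of the sketch, not a property of the paper's method.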
The MiO IR model is evaluated on various tasks, including super-resolution, deblurring, denoising, deJPEG, deraining, dehazing, and low-light enhancement, across 19 test sets covering both in-distribution and out-of-distribution data. The two strategies yield significant performance gains for both CNN and Transformer backbones, complement each other to produce better degradation representations and greater model robustness, and can further enhance the state-of-the-art method PromptIR. The paper also reviews related work on image restoration backbones, multiple degradation handling, all-in-one IR methods, and prompt learning in IR, and it provides a detailed analysis of degradation representation, showing that the restoration style can be adjusted by modifying the prompts. The authors conclude that the MiO IR formulation and strategies are promising for future research and are expected to facilitate the training of IR models with higher generalization capability.
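For the toy model sketched above, "adjusting the restoration style by modifying prompts" could be pictured as blending two learned task prompts at inference time. The interpolation below is a hypothetical example built on that sketch, not the authors' procedure.

```python
import torch


@torch.no_grad()
def restore_with_blended_prompt(model, lq, task_a: int, task_b: int, alpha: float = 0.5):
    """Restore `lq` with a convex combination of two learned task prompts.

    Purely illustrative: one way that 'modifying the prompt' could shift the
    restoration behavior of the toy PromptedRestorationNet defined earlier.
    """
    prompt = (1.0 - alpha) * model.prompts.weight[task_a] + alpha * model.prompts.weight[task_b]
    feat = model.head(lq)
    scale = model.fuse(prompt).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)  # (1, C, 1, 1)
    feat = feat * torch.sigmoid(scale)
    return model.tail(model.body(feat)) + lq
```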