DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning


2024-03-11 | Sikai Bai, Jie Zhang, Shuaicheng Li, Song Guo, Jingcai Guo, Jun Hou, Tao Han, Xiaocheng Lu
**Abstract:** Federated learning (FL) is a powerful paradigm for learning from decentralized data, but existing federated domain generalization methods often assume that domain labels are provided during training. This paper introduces DiPrompT, a novel method that addresses this limitation by learning adaptive prompts for domain generalization in a distributed manner. DiPrompT introduces two types of prompts: global prompts, which capture general knowledge shared across all clients, and domain prompts, which capture domain-specific knowledge. These prompts remove the one-to-one mapping requirement between source domains and local clients. In addition, a dynamic query metric automatically searches for the most suitable domain label for each sample, improving performance on unseen target domains. Extensive experiments on multiple datasets show that DiPrompT outperforms state-of-the-art FL methods even when domain labels are not provided, and surpasses many centralized learning methods that use domain labels.

**Key Contributions:**
1. **Disentangled Prompt Learning:** DiPrompT learns both global and domain-specific prompts to capture general and domain-specific knowledge, respectively.
2. **Dynamic Query Metric:** A dynamic query metric automatically selects the appropriate domain label for each sample, enhancing domain generalization.
3. **Efficiency and Flexibility:** The method is lightweight and allows greater flexibility in the number of clients than traditional federated domain generalization methods.

**Related Work:**
- **Federated Learning (FL):** FL enables multiple clients to collaboratively learn a global model without exchanging private data.
- **Domain Generalization (DG):** DG aims to generalize a model learned from multiple source domains to unseen target domains.
- **Prompt Tuning:** Prompt tuning is a transfer-learning paradigm that adds learnable embeddings to input tokens, enabling fast adaptation to various downstream tasks.

**Methodology:**
- **Disentangled Prompt Learning:** DiPrompT combines global prompt tuning and domain prompt tuning to capture generic and domain-specific knowledge.
- **Dynamic Query Scheme (Q-Prompt):** A dynamic query scheme selects appropriate domain prompts for different source inputs, enhancing domain generalization.
- **Collaborative Ensemble Process:** A dynamic ensemble metric combines knowledge from the different prompts, improving inference on the target domain.

**Experiments:**
- **Datasets and Baselines:** DiPrompT is evaluated on three benchmark datasets (PACS, OfficeHome, VLCS) and compared with state-of-the-art methods.
- **Performance Comparison:** DiPrompT outperforms existing methods in most settings, demonstrating its effectiveness and robustness.
- **Ablation Study:** An ablation study shows the importance of each component in DiPrompT.
- **Additional Analysis:** DiPrompT
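The dynamic query scheme described under Methodology can be sketched with a simple similarity-based lookup. This is a hypothetical illustration, not the paper's exact formulation: it assumes each latent domain has a learnable key vector and routes a sample to the domain whose key is most similar to the sample's feature (cosine similarity), yielding a pseudo domain label without any ground-truth domain annotation. The function name `select_domain_prompt` and the use of cosine similarity are assumptions for this sketch.

```python
import numpy as np

def select_domain_prompt(image_feature, domain_keys):
    """Route a sample to a latent domain by key similarity.

    Hypothetical stand-in for DiPrompT's dynamic query metric (Q-Prompt):
    each latent domain d keeps a learnable key k_d, and a sample is assigned
    the domain whose key has the highest cosine similarity with its feature.

    image_feature: (D,) feature vector for one sample.
    domain_keys:   (M, D) matrix of learnable keys, one row per latent domain.
    Returns the selected domain index and the per-domain similarity scores.
    """
    f = image_feature / np.linalg.norm(image_feature)
    keys = domain_keys / np.linalg.norm(domain_keys, axis=1, keepdims=True)
    scores = keys @ f  # cosine similarity with each domain key
    return int(np.argmax(scores)), scores
```

In a full system the selected index would pick which domain-specific prompt is prepended to the input tokens; here the keys are fixed arrays rather than learned parameters.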
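The collaborative ensemble process can likewise be sketched as a weighted combination of predictions from the global prompt and the domain-specific prompts. This is a minimal sketch under stated assumptions: the function name `ensemble_predict`, the softmax-normalized query scores as domain weights, and the equal split between global and domain predictions are all illustrative choices, not the paper's exact metric.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_predict(global_logits, domain_logits, domain_scores):
    """Combine global and domain-specific predictions for target inference.

    Hypothetical sketch of DiPrompT's collaborative ensemble: the domain
    heads are weighted by their softmax-normalized query scores, then
    averaged equally with the global prompt's prediction.

    global_logits: (C,) logits from the global prompt.
    domain_logits: (M, C) logits, one row per domain-specific prompt.
    domain_scores: (M,) query scores (e.g., key similarities) per domain.
    Returns a (C,) probability vector over classes.
    """
    w = softmax(domain_scores)                              # domain weights
    domain_mix = (w[:, None] * np.asarray(domain_logits)).sum(axis=0)
    combined = 0.5 * np.asarray(global_logits) + 0.5 * domain_mix
    return softmax(combined)
```

A sample that queries strongly to one latent domain is thus dominated by that domain's prompt, while uncertain samples fall back toward the global prompt's prediction.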