DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning

11 Mar 2024 | Sikai Bai, Jie Zhang, Shuaicheng Li, Song Guo, Jingcai Guo, Jun Hou, Tao Han, Xiaocheng Lu
DiPrompT is a novel approach for federated domain generalization, designed to address the limitations of existing methods that require explicit domain labels and a one-to-one mapping between clients and domains. The method introduces two types of prompts: a global prompt that captures general knowledge shared across all clients, and domain-specific prompts that capture knowledge particular to each latent domain. These disentangled prompts let the model learn invariant knowledge without a strict domain-client mapping, and a dynamic query metric automatically determines the most suitable domain label for each sample. At inference, a collaborative ensemble scheme leverages both the global and the domain-specific prompts to improve prediction on the target domain.

The approach is evaluated on multiple benchmark datasets and demonstrates superior domain generalization compared to state-of-the-art methods, even when domain labels are not provided; it also outperforms centralized methods that rely on domain labels. Extensive experiments cover a range of scenarios, including few-shot learning and different backbone architectures, and show that DiPrompT achieves the best average accuracy across multiple datasets while outperforming previous methods in most settings. The method is also efficient in computation and communication costs and handles common challenges of federated learning, such as data imbalance and privacy concerns, making it suitable for real-world applications.
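To make the described mechanism concrete, below is a minimal PyTorch-style sketch of the dual-prompt idea: a shared global prompt, per-latent-domain prompts selected by a "dynamic query" over learnable domain keys, and a simple averaging ensemble of the two predictions at inference. This is not the authors' implementation; the dummy frozen backbone, prompt shapes, cosine-similarity query, and equal-weight averaging are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DummyBackbone(nn.Module):
    """Stand-in for a frozen pretrained encoder (hypothetical, for illustration only)."""

    def __init__(self, in_dim=784, feat_dim=512, num_classes=10):
        super().__init__()
        self.encoder = nn.Linear(in_dim, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)
        for p in self.parameters():          # frozen: only the prompts are tuned
            p.requires_grad_(False)

    def encode(self, x):
        return self.encoder(x)

    def head(self, feats, prompt):
        # Condition the classifier on the mean-pooled prompt tokens.
        ctx = prompt.mean(dim=0, keepdim=True).expand(feats.size(0), -1)
        return self.classifier(torch.cat([feats, ctx], dim=-1))


class DisentangledPrompts(nn.Module):
    """Learnable global prompt, per-latent-domain prompts, and domain keys."""

    def __init__(self, num_domains, prompt_len=4, dim=512):
        super().__init__()
        self.global_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, dim))
        self.domain_prompts = nn.Parameter(0.02 * torch.randn(num_domains, prompt_len, dim))
        self.domain_keys = nn.Parameter(0.02 * torch.randn(num_domains, dim))

    def query_domain(self, feats):
        """Dynamic query: assign each sample to the latent domain whose key it matches best."""
        sims = F.cosine_similarity(feats.unsqueeze(1), self.domain_keys.unsqueeze(0), dim=-1)
        return sims.argmax(dim=1)


def ensemble_predict(backbone, prompts, images):
    """Inference-time ensemble: average global-prompt and selected domain-prompt predictions."""
    feats = backbone.encode(images)
    domain_ids = prompts.query_domain(feats)
    logits_global = backbone.head(feats, prompts.global_prompt)
    logits_domain = torch.cat(
        [backbone.head(feats[i:i + 1], prompts.domain_prompts[d])
         for i, d in enumerate(domain_ids.tolist())],
        dim=0,
    )
    return 0.5 * (logits_global + logits_domain)


if __name__ == "__main__":
    backbone, prompts = DummyBackbone(), DisentangledPrompts(num_domains=3)
    x = torch.randn(8, 784)
    print(ensemble_predict(backbone, prompts, x).shape)  # torch.Size([8, 10])
```

Because only the small prompt tensors are trainable while the backbone stays frozen, each client uploads just the prompt parameters, which is consistent with the low computation and communication costs reported above; the exact aggregation and loss functions used to train the global and domain prompts are not shown here.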