Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis

5 Apr 2024 | Xin Zhou*, Dingkang Liang*, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, Xiang Bai†
The paper "Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis" addresses the challenge of efficient transfer learning for point cloud analysis, aiming to reduce computational costs and storage requirements while maintaining or improving performance. The authors propose a method called Dynamic Adapter and Internal Prompt Tuning (DAPT), which integrates the Dynamic Adapter with Prompt Tuning to dynamically adjust the scale of each token in the input space, capturing instance-specific features more effectively. Key contributions of the paper include: 1. **Dynamic Adapter**: This module generates a dynamic scale for each token, allowing for more adaptive feature adjustment based on the significance of the token. 2. **Internal Prompt Tuning**: It uses the Dynamic Adapter to construct prompts, enhancing the model's ability to capture global perspectives of point clouds. 3. **Performance and Efficiency**: Extensive experiments on five datasets demonstrate that DAPT achieves superior performance compared to full fine-tuning while reducing tunable parameters by 95% and GPU memory usage by 35%. The paper also discusses the limitations of existing parameter-efficient transfer learning methods and provides a detailed analysis of the proposed DAPT, including ablation studies and comparisons with other methods. The results show that DAPT outperforms state-of-the-art methods in various tasks, such as 3D classification, part segmentation, and few-shot learning, while maintaining a low number of tunable parameters.The paper "Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis" addresses the challenge of efficient transfer learning for point cloud analysis, aiming to reduce computational costs and storage requirements while maintaining or improving performance. The authors propose a method called Dynamic Adapter and Internal Prompt Tuning (DAPT), which integrates the Dynamic Adapter with Prompt Tuning to dynamically adjust the scale of each token in the input space, capturing instance-specific features more effectively. Key contributions of the paper include: 1. **Dynamic Adapter**: This module generates a dynamic scale for each token, allowing for more adaptive feature adjustment based on the significance of the token. 2. **Internal Prompt Tuning**: It uses the Dynamic Adapter to construct prompts, enhancing the model's ability to capture global perspectives of point clouds. 3. **Performance and Efficiency**: Extensive experiments on five datasets demonstrate that DAPT achieves superior performance compared to full fine-tuning while reducing tunable parameters by 95% and GPU memory usage by 35%. The paper also discusses the limitations of existing parameter-efficient transfer learning methods and provides a detailed analysis of the proposed DAPT, including ablation studies and comparisons with other methods. The results show that DAPT outperforms state-of-the-art methods in various tasks, such as 3D classification, part segmentation, and few-shot learning, while maintaining a low number of tunable parameters.