Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis

5 Apr 2024 | Xin Zhou*, Dingkang Liang*, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, Xiang Bai†
This paper proposes DAPT, a parameter-efficient transfer learning method for point cloud analysis that combines a Dynamic Adapter with prompt tuning to balance performance and parameter efficiency. The Dynamic Adapter adjusts each token's scale according to its significance for the downstream task, while Internal Prompt Tuning reuses the Dynamic Adapter to generate instance-specific prompts. Compared with full fine-tuning, this design substantially reduces the number of tunable parameters and the training GPU memory while achieving superior performance on challenging datasets such as ScanObjectNN.

Experiments on five datasets show that DAPT cuts tunable parameters by 95% and GPU memory usage by 35% while maintaining or improving accuracy. The method is effective for 3D classification, part segmentation, and few-shot learning, and outperforms existing parameter-efficient transfer learning methods. The key contributions are: revealing the limitations of existing parameter-efficient transfer learning methods in point cloud analysis, proposing the Dynamic Adapter, and introducing Internal Prompt Tuning. The method is efficient, practical, and well suited to adapting increasingly large 3D models.
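To make the two components concrete, below is a minimal PyTorch sketch, not the authors' released code: the class names DynamicAdapter and InternalPromptTuning, the bottleneck size, the sigmoid scale head, and the mean-pooled prompt generation are all illustrative assumptions about how a per-token scaled adapter and adapter-derived prompts could be wired together.

import torch
import torch.nn as nn


class DynamicAdapter(nn.Module):
    """Bottleneck adapter whose residual update is rescaled per token.

    A tiny scale head predicts a significance score for each token, so
    tokens that matter more for the downstream task receive a larger
    adapter update (a hypothetical realization of the paper's idea).
    """

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # project to bottleneck
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)     # project back to model dim
        self.scale_head = nn.Linear(dim, 1)      # per-token significance score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim) point-token features
        h = self.up(self.act(self.down(x)))      # adapter update
        s = torch.sigmoid(self.scale_head(x))    # (batch, num_tokens, 1)
        return x + s * h                         # dynamically scaled residual


class InternalPromptTuning(nn.Module):
    """Generates instance-specific prompt tokens from adapted features.

    Here the adapted tokens are mean-pooled and mapped to a few prompt
    tokens that are prepended to the sequence; this pooling scheme is an
    assumption for illustration.
    """

    def __init__(self, dim: int, num_prompts: int = 4):
        super().__init__()
        self.num_prompts = num_prompts
        self.to_prompts = nn.Linear(dim, num_prompts * dim)

    def forward(self, adapted: torch.Tensor) -> torch.Tensor:
        # adapted: (batch, num_tokens, dim)
        pooled = adapted.mean(dim=1)                          # (batch, dim)
        prompts = self.to_prompts(pooled)                     # (batch, P*dim)
        prompts = prompts.view(-1, self.num_prompts, adapted.size(-1))
        return torch.cat([prompts, adapted], dim=1)           # prepend prompts


# Usage: 2 point clouds, 128 tokens each, embedding dim 384
x = torch.randn(2, 128, 384)
adapter = DynamicAdapter(dim=384)
prompter = InternalPromptTuning(dim=384, num_prompts=4)
tokens = prompter(adapter(x))
print(tokens.shape)  # torch.Size([2, 132, 384]): 4 prompts + 128 tokens

In a real transformer backbone, such modules would typically sit inside each block alongside the frozen attention and MLP layers, with only the adapter and prompt parameters being trained, which is what keeps the tunable parameter count small.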