This paper examines the effectiveness of existing CLIP-based few-shot learning methods by introducing a unified formulation from the perspective of logit bias. It dissects the computation of the logit bias into three key components: the logit feature, the logit predictor, and logit fusion, and analyzes their impact on few-shot classification performance. Based on this analysis, the authors propose AMU-Tuning, which learns an effective logit bias by exploiting appropriate auxiliary features, training a feature-initialized linear classifier with multi-branch training, and incorporating uncertainty-based fusion. Extensive experiments on various downstream tasks and out-of-distribution benchmarks demonstrate that AMU-Tuning outperforms existing methods, achieving state-of-the-art performance in CLIP-based few-shot learning with improved efficiency. The contributions of the work include a unified framework for analyzing CLIP-based few-shot learning methods, the development of AMU-Tuning, and its superior performance across multiple datasets.
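The logit-bias view described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, shapes, the choice of 1 − max-probability as the uncertainty score, and the scalar weight `lam` are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def logit_bias_fusion(clip_logits, aux_features, W_aux, lam=1.0):
    """Hypothetical sketch of the three components discussed above.

    clip_logits:  (N, C) zero-shot logits from CLIP.
    aux_features: (N, D) logit features from an auxiliary model.
    W_aux:        (D, C) logit predictor, a linear classifier
                  (feature-initialized in the paper's method).
    """
    # Logit predictor: auxiliary features -> logit bias.
    bias = aux_features @ W_aux
    # Logit fusion (illustrative uncertainty weighting): down-weight
    # the bias on samples where CLIP is already confident.
    probs = softmax(clip_logits)
    uncertainty = 1.0 - probs.max(axis=1, keepdims=True)
    return clip_logits + lam * uncertainty * bias
```

The final prediction is then the argmax over the fused logits; in the few-shot setting, only the auxiliary classifier `W_aux` (and any fusion weight) would be trained, keeping the CLIP backbone frozen.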