The paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" addresses the issue of adversarial robustness in large-scale pre-trained vision-language models like CLIP, which are vulnerable to imperceptible adversarial examples. The authors propose a method called Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness without overfitting to the target dataset. PMG-AFT leverages supervision from the original pre-trained model by designing an auxiliary branch that minimizes the distance between the features of adversarial examples in the target model and those in the pre-trained model. This approach aims to preserve the generalization features captured by the pre-trained model. Extensive experiments on 15 zero-shot datasets demonstrate that PMG-AFT significantly outperforms state-of-the-art methods, improving the top-1 robust accuracy by an average of 4.99% and clean accuracy by an average of 8.72%. The method effectively mitigates overfitting and maintains the model's zero-shot adversarial robustness, making it a valuable contribution to the field of adversarial robustness in large-scale pre-trained models.The paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" addresses the issue of adversarial robustness in large-scale pre-trained vision-language models like CLIP, which are vulnerable to imperceptible adversarial examples. The authors propose a method called Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness without overfitting to the target dataset. PMG-AFT leverages supervision from the original pre-trained model by designing an auxiliary branch that minimizes the distance between the features of adversarial examples in the target model and those in the pre-trained model. This approach aims to preserve the generalization features captured by the pre-trained model. Extensive experiments on 15 zero-shot datasets demonstrate that PMG-AFT significantly outperforms state-of-the-art methods, improving the top-1 robust accuracy by an average of 4.99% and clean accuracy by an average of 8.72%. The method effectively mitigates overfitting and maintains the model's zero-shot adversarial robustness, making it a valuable contribution to the field of adversarial robustness in large-scale pre-trained models.