This paper proposes a novel defense technique called Adversarial Training on Purification (AToP), which combines the strengths of adversarial training (AT) and adversarial purification (AP) to achieve both robustness against known attacks and generalization to unseen attacks. AToP consists of two components: perturbation destruction by random transforms (RT) and purifier-model fine-tuning (FT) with an adversarial loss. RT is essential to avoid overfitting to known attacks and to improve generalization to unseen attacks, while FT enhances the robustness of the purifier model. The purifier model reconstructs clean examples from the randomly transformed (corrupted) inputs, so that classification remains accurate whether the original inputs are adversarial or clean. The adversarial loss used for fine-tuning is derived from the classifier's outputs, which improves the purifier's performance in robust classification.

The method is evaluated on CIFAR-10, CIFAR-100, and ImageNet, where AToP achieves state-of-the-art robustness and generalization ability against unseen attacks. The results show that fine-tuning significantly improves the purifier model, surpassing previous purification methods. The defense is effective against both known and unseen attacks and is more efficient than traditional AP methods. The paper also discusses the limitations of AToP, notably the computational cost of fine-tuning the generative model. Overall, AToP offers a promising approach to improving the robustness and generalization of deep neural networks against adversarial attacks.
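To make the two components concrete, below is a minimal PyTorch-style sketch of one AToP fine-tuning step, assuming a pre-trained purifier (generative) model and a frozen classifier. The particular random transform (patch masking), the reconstruction regularizer, and its weight are illustrative assumptions for the sketch, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def random_transform(x, mask_ratio=0.5, patch=4):
    """RT step: destroy adversarial perturbation structure with random patch masking.
    This specific transform is an assumption; the paper's RT may use other
    randomized corruptions."""
    b, c, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > mask_ratio).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")
    return x * mask

def atop_finetune_step(purifier, classifier, x_clean, x_adv, y, optimizer, rec_weight=0.1):
    """FT step: purify randomly transformed adversarial examples and update the
    purifier with an adversarial loss derived from the classifier's outputs."""
    purifier.train()
    classifier.eval()
    for p in classifier.parameters():          # keep the classifier fixed
        p.requires_grad_(False)

    x_destroyed = random_transform(x_adv)      # RT: break the adversarial perturbation
    x_purified = purifier(x_destroyed)         # reconstruct a clean-looking example

    logits = classifier(x_purified)
    cls_loss = F.cross_entropy(logits, y)      # adversarial loss from classifier outputs
    rec_loss = F.mse_loss(x_purified, x_clean) # pull reconstructions toward clean images (assumed regularizer)
    loss = cls_loss + rec_weight * rec_loss

    optimizer.zero_grad()
    loss.backward()                            # optimizer holds only the purifier's parameters
    optimizer.step()
    return loss.item()
```

In practice, x_adv would be generated on the fly by attacking the full RT-purifier-classifier pipeline (e.g., with PGD), and the optimizer would be constructed over the purifier's parameters only, so the classifier remains unchanged during fine-tuning.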