**R.A.C.E.: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model**
In the evolving landscape of text-to-image (T2I) diffusion models, the ability to generate high-quality images from textual descriptions is shadowed by the risk of misuse: models can be coaxed into reproducing sensitive content. To address this issue, the authors introduce RACE (Robust Adversarial Concept Erasure), a novel approach designed to enhance the robustness of concept erasure methods for T2I models. RACE uses an adversarial training framework to identify and mitigate adversarial text embeddings, significantly reducing the Attack Success Rate (ASR). Notably, RACE achieves a 30 percentage point reduction in ASR for the "nudity" concept against the leading white-box attack method. Extensive evaluations demonstrate RACE's effectiveness in defending against both white-box and black-box attacks, marking a significant advancement in protecting T2I diffusion models from generating inappropriate or misleading imagery.
**Key Contributions:**
1. The first adversarial training approach specifically designed to fortify concept erasure methods against prompt-based adversarial attacks without introducing additional modules.
2. A computationally efficient adversarial attack method that can be plugged directly into the concept-erasure workflow.
3. A significant improvement in T2I models' robustness against adversarial prompts produced by both white-box and black-box attacks.
**Method:**
RACE wraps concept erasure in an adversarial training loop. Its key efficiency insight is that adversarial text embeddings can be identified within a single time step of the diffusion process, which makes the attack cheap enough to run repeatedly inside the erasure workflow. The adversarial training loss then erases not only the targeted concept's embedding but also its adjacent embeddings in the model's latent space, so that nearby paraphrases and perturbed prompts are neutralized as well. A sketch of both ingredients follows.
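The paper's exact objectives are not reproduced in this summary; the following is a minimal PyTorch sketch of the two ingredients described above, assuming a Stable-Diffusion-style UNet callable `unet(x_t, t, text_emb) -> predicted noise` and an ESD-style erasure target (one of the erasure families RACE is designed to harden). All names here (`find_adv_embedding`, `erase_step`, `alphas_cumprod`, `e_null`) are illustrative placeholders, not identifiers from the paper.

```python
# Illustrative sketch only -- not the authors' code. `unet` / `frozen_unet`
# stand in for trainable / frozen copies of a Stable-Diffusion-style UNet
# with signature unet(x_t, t, text_emb) -> predicted noise.

import torch
import torch.nn.functional as F

def find_adv_embedding(unet, alphas_cumprod, x0, e_concept,
                       t=500, eps=1e-2, steps=20, step_size=1e-3):
    """Single-timestep PGD over the text embedding (the efficiency trick).

    Finds a small perturbation delta such that, conditioned on
    e_concept + delta, the (partially erased) model still denoises a noised
    concept image x_t -- i.e. the concept remains recoverable.
    """
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    # Forward-diffuse the concept image to a SINGLE timestep t.
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    delta = torch.zeros_like(e_concept, requires_grad=True)
    for _ in range(steps):
        pred = unet(x_t, t, e_concept + delta)
        loss = F.mse_loss(pred, noise)  # standard diffusion loss at one timestep
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend: keep the concept recoverable
            delta.clamp_(-eps, eps)                 # stay in the concept's neighborhood
            delta.grad = None
    return (e_concept + delta).detach()

def erase_step(unet, frozen_unet, x_t, t, e, e_null, eta=1.0):
    """One ESD-style erasure update applied to an embedding e.

    RACE's adversarial training applies this same loss to BOTH the concept
    embedding and the adversarial embeddings found above, erasing the
    concept's neighborhood rather than a single point.
    """
    with torch.no_grad():
        uncond = frozen_unet(x_t, t, e_null)     # unconditional prediction
        cond = frozen_unet(x_t, t, e)            # concept-conditioned prediction
        target = uncond - eta * (cond - uncond)  # negatively guided target
    return F.mse_loss(unet(x_t, t, e), target)

# Schematic training iteration:
#   e_adv = find_adv_embedding(unet, alphas_cumprod, x0, e_concept)
#   loss = (erase_step(unet, frozen_unet, x_t, t, e_concept, e_null)
#           + erase_step(unet, frozen_unet, x_t, t, e_adv, e_null))
#   loss.backward(); optimizer.step()
```

Because the attack touches only one diffusion timestep rather than a full sampling trajectory, it can be rerun every training iteration, which is what lets RACE fold it into the erasure loop without extra modules.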
**Experiments:**
RACE is evaluated across several concept categories, including artistic styles, explicit content, and identifiable objects. Against the state-of-the-art attack method, the ASR for the "nudity" concept drops by a notable 33%. RACE largely preserves image-quality metrics, though the results reveal a trade-off between robustness and image fidelity.
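For reference, ASR here is simply the fraction of adversarial prompts that still elicit the supposedly erased concept. A minimal sketch of the metric, where `generate` and `detect` are hypothetical placeholders for the T2I pipeline and a binary concept classifier (e.g. a NudeNet-style nudity detector):

```python
def attack_success_rate(generate, detect, adv_prompts):
    """Fraction of adversarial prompts whose generated image still
    contains the erased concept. `generate` maps a prompt to an image;
    `detect` returns True if the concept is present."""
    hits = sum(1 for p in adv_prompts if detect(generate(p)))
    return hits / len(adv_prompts)
```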
**Conclusion:**
RACE is a novel defense that strengthens the concept-erasure capabilities of T2I models while remaining computationally efficient. It slots into current erasure frameworks and provides a robust defense against a range of adversarial techniques. The work highlights the critical importance of developing sophisticated defenses in the rapidly evolving domain of generative AI.