The paper introduces a novel method for erasing specific concepts from text-to-image diffusion models via few-shot unlearning. Rather than retraining the entire model, as many existing methods require, the approach makes minimal updates to the text encoder alone, reducing the alignment between the target concept's text and the generated images and achieving erasure within about 10 seconds. The method is inspired by textual inversion but uses a more efficient loss function and far fewer images. Experiments demonstrate that the method erases target concepts effectively while preserving generation quality for other concepts, and ablation studies over the key parameters validate its design. Compared with current methods, it erases concepts more quickly and more naturally, making it a promising solution for removing undesirable content from pre-trained models.
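The core idea of updating only the text encoder can be sketched as follows. This is an illustrative toy, not the paper's actual formulation: the tiny encoder, the token ids, and the anchor-based loss are all assumptions standing in for the real CLIP text encoder and the paper's loss. The sketch shows why the approach is fast: the diffusion U-Net is left untouched, and only the (comparatively small) text encoder is fine-tuned so the erased concept's embedding drifts toward a neutral anchor, weakening text-image alignment for that concept.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyTextEncoder(nn.Module):
    """Toy stand-in for a real text encoder (e.g. CLIP's); purely illustrative."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        return self.proj(self.embed(token_ids))

encoder = ToyTextEncoder()
target_id = torch.tensor([7])   # hypothetical token id for the concept to erase
anchor_id = torch.tensor([3])   # hypothetical neutral token id (e.g. an empty prompt)

# Anchor embedding taken from the encoder before any updates.
with torch.no_grad():
    anchor = encoder(anchor_id)

# Fine-tune only the text encoder: pull the erased concept's embedding
# toward the neutral anchor. The (frozen) U-Net never sees a gradient,
# which is why this kind of erasure can run in seconds.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(encoder(target_id), anchor)
    loss.backward()
    opt.step()

print(float(F.mse_loss(encoder(target_id), anchor)))  # should be near zero
```

After the loop, prompts containing the target token map close to the neutral embedding, so downstream generation no longer aligns with the erased concept, while embeddings of unrelated tokens are only minimally perturbed.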