This paper investigates privacy vulnerabilities in machine unlearning, a technique designed to remove the influence of specific training data from machine learning models. The authors propose unlearning inversion attacks, which exploit the difference between the original and unlearned models to reveal sensitive information about the unlearned data. Specifically, these attacks can recover the features and labels of unlearned samples. The attacks are evaluated on several benchmark datasets and model architectures, demonstrating that they can successfully leak confidential information. The paper also discusses three potential defenses against these attacks, each of which comes at the cost of reduced utility in the unlearned model. The study highlights the need for careful design of unlearning mechanisms to prevent leakage of unlearned data.
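
To make the attack surface concrete, the sketch below shows one way such an inversion could work, assuming white-box access to both the original and unlearned model and a simple one-step gradient-ascent unlearning rule, so that the parameter difference is roughly proportional to the gradient of the unlearned sample. The function name `invert_unlearned_sample` and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def invert_unlearned_sample(model_orig, model_unlearned, input_shape, num_classes,
                            steps=500, lr=0.1):
    """Illustrative unlearning inversion sketch (not the paper's algorithm).

    Recovers an approximation of the unlearned sample by optimizing a dummy
    input and soft label so that the gradient they induce on the original model
    matches the observed parameter difference between the two models.
    """
    # Parameter difference visible to a white-box attacker who holds both models.
    delta = [p_o.detach() - p_u.detach()
             for p_o, p_u in zip(model_orig.parameters(),
                                 model_unlearned.parameters())]

    # Dummy feature and label to be optimized.
    x = torch.randn(1, *input_shape, requires_grad=True)
    y_logits = torch.zeros(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y_logits], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Gradient of the original model's loss on the dummy sample.
        loss = F.cross_entropy(model_orig(x), F.softmax(y_logits, dim=1))
        grads = torch.autograd.grad(loss, model_orig.parameters(),
                                    create_graph=True)
        # Match induced gradients to the observed model difference (cosine distance).
        mismatch = sum(1 - F.cosine_similarity(g.flatten(), d.flatten(), dim=0)
                       for g, d in zip(grads, delta))
        mismatch.backward()
        opt.step()

    # Recovered feature and label estimate of the unlearned sample.
    return x.detach(), F.softmax(y_logits, dim=1).detach()
```

The key design choice in this sketch is treating the model difference as a gradient fingerprint of the removed data, which is why defenses that obscure or perturb that difference (at some utility cost) can blunt the attack.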