The paper addresses the unreliable evaluation of adversarial defenses, which often leads to overestimates of robustness. It proposes two extensions of the PGD attack that overcome common pitfalls, namely suboptimal step sizes and problems with the objective function. These extensions are combined with two existing complementary attacks, FAB and Square Attack, to form *AutoAttack*, a parameter-free, computationally efficient, and user-independent ensemble. The ensemble is evaluated on over 50 models from papers published at recent top machine learning and computer vision conferences. *AutoAttack* consistently achieves lower robust test accuracy than reported in the original papers, identifying several broken defenses in the process. The paper also compares the effectiveness of targeted and untargeted attacks and provides an analysis of the current state of the art in adversarial defenses.
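To make the step-size pitfall concrete, here is a minimal PyTorch sketch of vanilla L-inf PGD, the baseline the paper improves on. The function name `pgd_linf` and the hyperparameters `eps`, `alpha`, and `steps` are illustrative defaults chosen here, not values from the paper; the paper's Auto-PGD replaces the hand-tuned, fixed step size `alpha` with an adaptive schedule plus momentum, and its APGD-DLR variant swaps the cross-entropy loss for the more robust DLR loss.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Vanilla untargeted L-inf PGD with a fixed step size (illustrative sketch)."""
    # Random start inside the L-inf ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Fixed-size ascent step on the loss: the hand-tuned alpha is
            # exactly the parameter Auto-PGD eliminates.
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the eps-ball around x and the image range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Usage (model in eval mode): x_adv = pgd_linf(model, images, labels)
```

For the full ensemble, the authors provide a reference implementation as the `autoattack` Python package (https://github.com/fra31/auto-attack); at the time of writing it is invoked roughly as `AutoAttack(model, norm='Linf', eps=8/255).run_standard_evaluation(x, y)`.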