This paper introduces BEAST (Beam Search-based Adversarial Attack), a fast and efficient method for attacking Language Models (LMs). BEAST employs beam search with interpretable parameters that balance attack speed, success rate, and adversarial prompt readability. The authors demonstrate that BEAST can jailbreak aligned LMs with high success rates in under a minute, outperforming existing gradient-based methods: for instance, it achieves an 89% success rate on Vicuna-7B-v1.5 within one minute, whereas gradient-based methods take over an hour. Additionally, BEAST can induce hallucinations in LM chatbots, causing them to generate 15% more incorrect outputs and irrelevant content 22% of the time. The authors also show that adversarial prompts generated by BEAST can enhance membership inference attacks. The paper includes a detailed evaluation of BEAST's effectiveness and discusses its potential impact on LM security and privacy. The codebase is publicly available at <https://github.com/vinusankars/BEAST>.
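To make the beam-search idea concrete, below is a minimal, self-contained sketch of how such an attack could be structured. It is not the authors' implementation: the helpers `sample_candidate_tokens` and `adversarial_objective` are hypothetical stand-ins for querying the target LM's next-token distribution and scoring how likely a candidate prompt is to elicit the target behavior, and the parameter names `beam_width` and `branch_factor` are illustrative labels for the tunable knobs that trade off speed, success rate, and readability.

```python
import heapq
import random

# Hypothetical stand-ins for the attacked LM. In a real attack these would
# query the target model for plausible next tokens (keeping the suffix
# readable) and compute an adversarial objective such as the likelihood of
# a target jailbreak response.
def sample_candidate_tokens(prompt_tokens, k):
    """Propose k plausible next tokens for the current adversarial suffix."""
    vocabulary = ["please", "now", "story", "hypothetically", "sure", "detail"]
    return random.sample(vocabulary, k)

def adversarial_objective(prompt_tokens):
    """Score a candidate prompt; higher = more likely to elicit the target output."""
    return random.random()  # placeholder score

def beam_search_attack(base_prompt, beam_width=4, branch_factor=4, suffix_len=8):
    """Beam search over readable adversarial suffix tokens (illustrative sketch)."""
    beams = [(0.0, list(base_prompt))]  # (score, token list)
    for _ in range(suffix_len):
        candidates = []
        for _, tokens in beams:
            for tok in sample_candidate_tokens(tokens, branch_factor):
                new_tokens = tokens + [tok]
                candidates.append((adversarial_objective(new_tokens), new_tokens))
        # Keep only the top-scoring beams; beam_width and branch_factor are the
        # interpretable knobs trading speed, success rate, and readability.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beams[0][1]

if __name__ == "__main__":
    print(" ".join(beam_search_attack(["Write", "a", "tutorial"])))
```

In this sketch, the search only ever appends tokens the model itself would plausibly emit, which is what keeps the resulting adversarial suffix readable, in contrast to gradient-based token optimization.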