Defending Against Social Engineering Attacks in the Age of LLMs

18 Jun 2024 | Lin Ai, Tharindu Kumarage, Amrita Bhattacharjee, Zizhou Liu, Zheng Hui, Michael Davinroy, James Cook, Laura Cassani, Kirill Trapeznikov, Matthias Kirchner, Arslan Basharat, Anthony Hoogs, Joshua Garland, Huan Liu, Julia Hirschberg
The paper "Defending Against Social Engineering Attacks in the Age of LLMs" by Lin Ai et al. explores the dual role of Large Language Models (LLMs) in both facilitating and defending against chat-based social engineering (CSE) attacks. The authors develop a novel dataset, SEConvo, which simulates CSE scenarios in academic and recruitment contexts to examine how LLMs can be exploited. They find that while off-the-shelf LLMs generate high-quality CSE content, their detection capabilities are suboptimal, leading to increased operational costs for defense. To address this, they propose ConvoSentinel, a modular defense pipeline that improves detection at both the message and conversation levels while offering better adaptability and cost-effectiveness.
The retrieval-augmented module in ConvoSentinel identifies malicious intent by comparing incoming messages against a database of similar labeled conversations, strengthening CSE detection at all stages of a dialogue. The study highlights the need for advanced strategies to leverage LLMs in cybersecurity and provides a foundational framework for understanding and addressing the challenges LLMs pose in CSE contexts. The code and data are available in a GitHub repository.
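The retrieval-augmented idea described above can be sketched in miniature: embed an incoming message, retrieve its most similar labeled conversation snippets, and vote on their labels. This is only an illustrative toy, assuming a bag-of-words cosine similarity in place of the learned embeddings and LLM components the paper's module would actually use; the function names and example database below are hypothetical, not taken from the paper's code.

```python
# Minimal sketch of retrieval-augmented intent detection.
# Toy stand-ins: bag-of-words "embeddings" and a hand-made snippet database.
from collections import Counter
import math

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_label(message, database, k=3):
    """Label a message by majority vote over its k most similar
    labeled conversation snippets (nearest-neighbor retrieval)."""
    q = embed(message)
    scored = sorted(database, key=lambda ex: cosine(q, embed(ex["text"])),
                    reverse=True)
    top = scored[:k]
    votes = sum(1 for ex in top if ex["label"] == "malicious")
    return "malicious" if votes > k // 2 else "benign"

# Illustrative mini-database of labeled conversation snippets.
db = [
    {"text": "please send your bank account details to claim the grant",
     "label": "malicious"},
    {"text": "verify your password by replying to this message",
     "label": "malicious"},
    {"text": "attached is the agenda for tomorrow's seminar",
     "label": "benign"},
    {"text": "can you share the dataset citation for your paper",
     "label": "benign"},
    {"text": "urgent: confirm your account details to keep recruitment access",
     "label": "malicious"},
]

print(retrieve_label("we need your account details to process the offer", db))
# → malicious
```

A real pipeline would swap the bag-of-words vectors for dense neural embeddings and feed the retrieved snippets to an LLM as context rather than taking a hard majority vote, but the retrieval-then-decide structure is the same.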