This paper explores the application of Large Language Models (LLMs) to scam detection, a critical aspect of cybersecurity. LLMs, known for their ability to process and generate natural language, are increasingly used in security applications such as phishing detection, sentiment analysis, threat intelligence, malware analysis, and vulnerability assessment. The paper proposes a novel use case for LLMs in scam detection, covering phishing, advance fee fraud, romance scams, and other schemes, and outlines the key steps in building an effective LLM-based scam detector: data collection, preprocessing, model selection, training, and integration into target systems. A preliminary evaluation applies GPT-3.5 and GPT-4 to a duplicated email, showing their proficiency in identifying common signs of phishing or scam emails. Although both models prove effective at recognizing suspicious elements in this specific analysis, the paper emphasizes that a more comprehensive assessment across a range of natural language understanding and generation tasks is needed to determine their relative strengths and weaknesses. It also discusses the importance of ongoing refinement and of collaboration with cybersecurity experts to adapt to evolving threats.
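To make the detection step concrete, the following is a minimal sketch of how an email might be wrapped in a classification prompt for a model such as GPT-4. The paper does not publish code, so the function names, prompt wording, and keyword lists below are hypothetical illustrations; the actual model call is replaced by a cheap keyword heuristic so the example stays self-contained and runnable.

```python
# Hypothetical illustration only: the paper describes the pipeline in prose,
# not code. The real system would send the prompt to an LLM; here a keyword
# heuristic stands in for the model so the sketch runs without an API key.

SCAM_SIGNALS = {
    "urgency": ["act now", "urgent", "immediately", "account suspended"],
    "payment": ["wire transfer", "gift card", "processing fee", "bitcoin"],
    "credentials": ["verify your password", "confirm your login", "ssn"],
}


def build_classification_prompt(email_text: str) -> str:
    """Wrap an email in an instruction prompt an LLM could classify."""
    return (
        "You are a cybersecurity assistant. Decide whether the email below "
        "is a scam (phishing, advance fee fraud, romance scam, etc.). "
        "Answer SCAM or LEGITIMATE and list the suspicious elements.\n\n"
        f"EMAIL:\n{email_text}"
    )


def heuristic_scam_signals(email_text: str) -> list[str]:
    """Keyword stand-in for the LLM call: return matched signal categories."""
    lowered = email_text.lower()
    return [
        category
        for category, phrases in SCAM_SIGNALS.items()
        if any(phrase in lowered for phrase in phrases)
    ]


email = (
    "URGENT: your account suspended. Pay the processing fee "
    "by wire transfer to restore access."
)
print(heuristic_scam_signals(email))  # → ['urgency', 'payment']
```

In a deployed detector, `heuristic_scam_signals` would be replaced by a call to the chosen model with `build_classification_prompt(email)`, and the integration step would route the SCAM/LEGITIMATE verdict into the target mail or security system.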