A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications

5 Feb 2024 | Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha
Prompt engineering has emerged as a crucial technique for enhancing the capabilities of large language models (LLMs) and vision-language models (VLMs). This approach involves designing task-specific instructions, known as prompts, to guide model output without altering core parameters. The paper provides a structured overview of recent advancements in prompt engineering, categorized by application areas. It details various prompting methodologies, their applications, involved models, and datasets used. The strengths and limitations of each approach are discussed, and a taxonomy diagram and table summarize datasets, models, and critical points of each technique.

The survey aims to bridge the gap in systematic organization and understanding of prompt engineering methods, facilitating future research by highlighting open challenges and opportunities. The paper covers a wide range of techniques, from zero-shot and few-shot prompting to advanced methods like chain-of-thought (CoT) and tree-of-thoughts (ToT) prompting, each designed to enhance specific aspects of model performance, such as reasoning, code generation, and user interface interaction. The analysis spans applications, models, and datasets, providing a comprehensive understanding of the evolving landscape of prompt engineering.
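To make the distinction between the basic techniques concrete, here is a minimal sketch of how zero-shot, few-shot, and chain-of-thought prompts are typically constructed. The template strings are illustrative assumptions, not taken from the survey; the "Let's think step by step" cue follows the common zero-shot CoT formulation.

```python
# Minimal sketch of three prompting styles surveyed above.
# Templates are illustrative assumptions, not the paper's own prompts.

def zero_shot(question: str) -> str:
    """Zero-shot: the task instruction alone, with no examples."""
    return f"Answer the following question.\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend worked question/answer pairs as in-context demos."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    """Chain-of-thought: elicit intermediate reasoning before the answer."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = few_shot("What is 7 * 6?", [("What is 2 * 3?", "6")])
print(prompt)
```

Each function only assembles the prompt text; in practice the resulting string would be sent to an LLM, and the few-shot demonstrations would be chosen to match the target task.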