A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications

5 Feb 2024 | Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha
This survey provides a systematic overview of prompt engineering techniques in large language models (LLMs) and vision-language models (VLMs), categorizing 29 distinct methods by application. Prompt engineering involves designing task-specific instructions (prompts) to enhance model performance without altering model parameters.

The paper first discusses core prompting approaches, including zero-shot, few-shot, chain-of-thought (CoT), self-consistency, logical chain-of-thought (LogiCoT), chain-of-symbol (CoS), tree-of-thoughts (ToT), graph-of-thoughts (GoT), system 2 attention (S2A), thread of thought (ThoT), and chain-of-table prompting, among others. These techniques aim to improve reasoning, reduce hallucinations, and enhance model consistency and coherence; the first sketch below illustrates the zero-shot, few-shot, CoT, and self-consistency styles.

The survey also covers methods for improving model robustness, such as retrieval-augmented generation (RAG), ReAct prompting, chain-of-verification (CoVe), and chain-of-note (CoN); a minimal RAG sketch appears below. Additionally, it explores code generation and execution techniques such as scratchpad prompting, program-of-thoughts (PoT), structured chain-of-thought (SCoT), and chain-of-code (CoC), with a PoT sketch below. The paper further addresses optimization by prompting (OPRO), understanding user intent with rephrase-and-respond (RaR) prompting, and metacognition with take-a-step-back prompting; an RaR sketch closes the examples.

The survey highlights the strengths and limitations of each technique, providing a taxonomy diagram and a table summarizing the datasets, models, and critical points of each prompting method.
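To make the basic prompting styles concrete, here is a minimal sketch of zero-shot, few-shot, and CoT prompt construction, plus self-consistency via majority voting over sampled reasoning chains. `call_llm` is a hypothetical stand-in for any LLM completion API, not a real library function, and the example question and answer-extraction heuristic are illustrative assumptions.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical LLM call; replace with your provider's completion API."""
    raise NotImplementedError

QUESTION = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Zero-shot: the task alone, no examples.
zero_shot = f"Q: {QUESTION}\nA:"

# Few-shot: prepend worked examples so the model imitates the format.
few_shot = ("Q: If 3 pens cost $6, how much do 5 pens cost?\nA: $10\n\n"
            f"Q: {QUESTION}\nA:")

# Chain-of-thought: elicit intermediate reasoning before the answer.
cot = f"Q: {QUESTION}\nA: Let's think step by step."

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    """Self-consistency: sample several CoT completions at nonzero
    temperature and return the most common final answer."""
    answers = []
    for _ in range(n_samples):
        completion = call_llm(prompt, temperature=0.7)
        # Assumption: the final line of the completion holds the answer.
        answers.append(completion.strip().splitlines()[-1])
    return Counter(answers).most_common(1)[0][0]
```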
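The RAG sketch below grounds the model on retrieved passages to reduce hallucination. For brevity it uses a naive keyword-overlap retriever over an in-memory corpus; production systems typically use dense vector search. The corpus contents and `call_llm` stub are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM API stand-in

CORPUS = [
    "Prompt engineering designs task-specific instructions without "
    "changing model weights.",
    "Chain-of-thought prompting elicits intermediate reasoning steps.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query; keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query: str) -> str:
    """Condition the model on retrieved context before answering."""
    context = "\n".join(retrieve(query, CORPUS))
    prompt = ("Answer using only the context below. If the context is "
              "insufficient, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
    return call_llm(prompt)
```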
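The PoT sketch below shows the key idea of program-of-thoughts: the model writes Python that computes the answer, and an interpreter executes it, delegating arithmetic to the runtime rather than the model's token-by-token generation. The prompt wording is an assumption, and `exec` is unsafe outside a sandbox.

```python
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError  # hypothetical LLM API stand-in

def pot_answer(question: str) -> str:
    """Program-of-thoughts: generate code, run it, return the result."""
    prompt = ("Write Python code that computes the answer to the question "
              "and stores it in a variable named `answer`.\n\n"
              f"Question: {question}\n# Python code:\n")
    code = call_llm(prompt, temperature=0.0)
    namespace: dict = {}
    exec(code, namespace)  # UNSAFE outside a sandbox; illustration only
    return str(namespace.get("answer"))
```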
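Finally, a minimal RaR sketch, assuming the common two-step variant: the model first restates the user's question to surface its interpretation of the intent, then answers the restated version. Both prompt wordings are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM API stand-in

def rar_answer(question: str) -> str:
    """Rephrase-and-respond: clarify the question, then answer it."""
    rephrased = call_llm(
        "Rephrase the following question to be clearer and more specific, "
        f"without changing its meaning:\n{question}")
    return call_llm(f"Question: {rephrased}\nAnswer the question above.")
```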
The analysis underscores the importance of prompt engineering in enhancing the adaptability and performance of LLMs across diverse applications. The paper concludes that prompt engineering is a transformative force in AI, with ongoing research aiming to address challenges such as biases, factual inaccuracies, and interpretability gaps. Future directions include exploring meta-learning and hybrid prompting architectures to further enhance model capabilities while ensuring ethical development and deployment.