Puzzle Solving using Reasoning of Large Language Models: A Survey

20 Apr 2024 | Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
This survey explores the capabilities of Large Language Models (LLMs) in solving puzzles, dividing them into rule-based and rule-less categories. It assesses various methodologies, including prompting techniques, neuro-symbolic approaches, and fine-tuning, through a critical review of relevant datasets and benchmarks. The survey highlights significant challenges in complex puzzle scenarios, particularly those requiring advanced logical inference, and emphasizes the need for novel strategies and richer datasets to advance LLMs' puzzle-solving proficiency. Key contributions include a distinction between rule-based and rule-less puzzles, an analysis of puzzle-solving methodologies, a detailed exploration of existing benchmarks, and a discussion of current obstacles and future research directions. The survey underscores the gap between LLM capabilities and human-like reasoning, especially in complex logical reasoning tasks, and suggests that more sophisticated methods and richer datasets are necessary to bridge this gap.