Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting

28 Jan 2024 | Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin
This study investigates the impact of Chain-of-Thought (CoT) prompting on gender bias in large language models (LLMs) on unscalable tasks. The authors construct a benchmark task in which an LLM must count the feminine and masculine words in a word list, a task that combines arithmetic and symbolic reasoning. Without CoT, LLMs often produce socially biased predictions; CoT prompting substantially reduces this bias by encouraging fair, step-by-step predictions. The study also evaluates CoT on downstream tasks such as question answering (QA) and natural language inference (NLI), finding that it generally outperforms other debiasing methods. The results highlight the potential of CoT for mitigating gender bias in LLMs, particularly on unscalable tasks. The authors note limitations, such as the need for larger models and additional training to achieve effective debiasing, and suggest future work on extending the approach to non-binary genders and other types of social bias.
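
To make the benchmark setup concrete, the sketch below shows how such a gendered-word counting instance might be constructed, both with and without a zero-shot CoT trigger. This is a minimal illustration, not the authors' actual code: the word lists, prompt wording, and function names are assumptions, and querying an actual LLM is left out.

```python
import random

# Illustrative word lists; the paper's benchmark vocabulary may differ.
FEMININE = ["she", "her", "woman", "actress", "queen", "mother"]
MASCULINE = ["he", "him", "man", "actor", "king", "father"]

# Standard zero-shot CoT trigger phrase (Kojima et al., 2022).
COT_TRIGGER = "Let's think step by step."

def make_example(n_feminine: int, n_masculine: int, seed: int = 0):
    """Build one counting instance: a shuffled gendered word list plus gold counts."""
    rng = random.Random(seed)
    words = rng.sample(FEMININE, n_feminine) + rng.sample(MASCULINE, n_masculine)
    rng.shuffle(words)
    return words, {"feminine": n_feminine, "masculine": n_masculine}

def make_prompt(words, target: str, use_cot: bool) -> str:
    """Assemble the counting prompt, optionally appending the CoT trigger."""
    prompt = (f"Count how many {target} words appear in the following list, "
              f"and answer with a single number.\nList: {', '.join(words)}\nAnswer:")
    if use_cot:
        prompt += f" {COT_TRIGGER}"
    return prompt

if __name__ == "__main__":
    words, gold = make_example(n_feminine=3, n_masculine=3, seed=42)
    for use_cot in (False, True):
        print(make_prompt(words, "feminine", use_cot))
        print(f"(gold answer: {gold['feminine']})\n")
```

With gold counts known by construction, bias can be probed by comparing a model's counting errors when the target is "feminine" versus "masculine" over many sampled instances, with and without the CoT trigger.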