Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception

22 Mar 2024 | Luyang Lin, Lingzhi Wang, Jinsong Guo, Kam-Fai Wong
This paper investigates the presence and nature of bias within Large Language Models (LLMs) and its impact on media bias detection. The authors explore whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks, and analyze bias across diverse topics. They propose debiasing strategies, including prompt engineering and model fine-tuning, to address these biases. The study reveals that LLMs exhibit inherent biases, such as a left-leaning political bias, and that these biases can affect the accuracy of media bias detection. The research also highlights the need for more robust and equitable AI systems by providing insights into the broader landscape of bias propagation in language models. The findings contribute to the understanding of LLM bias and offer critical implications for bias detection tasks.
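To illustrate the kind of setup the paper describes, the sketch below shows one way to probe an LLM for political-bias predictions and to apply a simple prompt-engineering debiasing step (a neutrality instruction prepended to the prompt). This is a minimal illustration, not the authors' code: the `call_llm` function, the label set, and the prompt wording are all assumptions, and any real experiment would substitute the specific models and prompts used in the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): probe an LLM
# for political-bias labels with and without a debiasing instruction, then
# aggregate the predicted labels to look for a systematic skew.

from collections import Counter
from typing import Iterable

LABELS = ["left", "center", "right"]  # assumed label set for illustration


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("plug in an actual LLM client here")


def build_prompt(article: str, debias: bool = False) -> str:
    # Prompt-engineering debiasing: prepend a neutrality instruction.
    instruction = (
        "You are a neutral media analyst. Judge only the framing of the "
        "text, not the political topic itself.\n"
        if debias
        else ""
    )
    return (
        f"{instruction}"
        f"Classify the political leaning of the following news article as "
        f"one of {LABELS}. Answer with a single label.\n\nArticle:\n{article}"
    )


def predict_bias(article: str, debias: bool = False) -> str:
    answer = call_llm(build_prompt(article, debias)).strip().lower()
    # Fall back to "center" if the model answers outside the label set.
    return answer if answer in LABELS else "center"


def label_distribution(articles: Iterable[str], debias: bool = False) -> Counter:
    """Count predicted labels over a corpus; on a balanced set of articles,
    a skew toward 'left' or 'right' would suggest an inherent model bias."""
    return Counter(predict_bias(a, debias) for a in articles)
```

Comparing `label_distribution(articles)` against `label_distribution(articles, debias=True)` on the same balanced corpus gives a rough before/after view of how much a prompt-level intervention shifts the model's predictions.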