The political preferences of LLMs

July 31, 2024 | David Rozado
This study investigates the political preferences embedded in large language models (LLMs). The author administered 11 political orientation tests to 24 state-of-the-art conversational LLMs, both closed and open source. Most of these models exhibited left-of-center political preferences when responding to questions with political connotations. Five base models (foundation models), however, often produced incoherent or contradictory responses, making their political preferences difficult to determine and suggesting that such tests are unreliable for models that have not been fine-tuned.

The study also demonstrates that political preferences can be deliberately steered: using only a small amount of politically aligned data, supervised fine-tuning (SFT) is sufficient to embed a political orientation in an LLM, and models were fine-tuned to be more left-leaning, right-leaning, or politically moderate. Taken together with the incoherence of base models, this suggests that the left-leaning tendency of conversational models is largely introduced during fine-tuning rather than pretraining. (Hedged sketches of the test-administration and fine-tuning procedures appear below.)

The findings suggest that as LLMs become more prevalent as information sources, their political biases could have significant societal implications, and the author calls for further research into these preferences and their potential impact on public opinion and discourse. The results also caution that political orientation tests may be imperfect instruments for assessing LLMs, given the variability of model responses and the artificial constraints of multiple-choice questions.
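To make the test-administration methodology concrete, here is a minimal sketch of how a multiple-choice test item might be posed to a conversational model via the OpenAI chat API. The test statement, forcing prompt, and model name are illustrative placeholders, not the paper's actual instruments.

```python
# Hypothetical sketch: administering one multiple-choice political
# orientation item to a conversational LLM. The item below is invented
# for illustration; real tests have many such items, and each model's
# letter answers are scored against the test's key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM = (
    "Statement: 'Government regulation of business usually does more "
    "harm than good.'\n"
    "Options: (A) Strongly agree (B) Agree (C) Disagree (D) Strongly disagree\n"
    "Answer with a single letter only."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any conversational (fine-tuned) model
    messages=[{"role": "user", "content": ITEM}],
    temperature=0,   # reduce response variability across runs
    max_tokens=1,    # force the single-letter format
)
print(response.choices[0].message.content)  # e.g. "C"
```

Note that base models, lacking instruction tuning, frequently ignore the single-letter constraint or contradict themselves across repeated runs, which is why the study treats their test results as unreliable.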
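The fine-tuning result can likewise be illustrated. Below is a hedged sketch of SFT on a small politically aligned corpus using Hugging Face's trl library; the base model, dataset contents, and hyperparameters are placeholders, not the author's actual pipeline.

```python
# Hypothetical sketch of the SFT idea: continue training a base model on
# a modest corpus of text written from one political viewpoint. The
# corpus and model here are toy placeholders.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy "politically aligned" corpus; per the study, a relatively small
# amount of such data is enough to shift a model's test responses.
corpus = Dataset.from_dict({
    "text": [
        "Q: <politically loaded question>\n"
        "A: <answer written from the target political viewpoint>",
        # ... more examples in the same format
    ]
})

trainer = SFTTrainer(
    model="gpt2",  # placeholder base model
    train_dataset=corpus,
    args=SFTConfig(output_dir="sft-aligned", max_steps=100),
)
trainer.train()
```

Re-administering the political orientation tests to the fine-tuned checkpoint would then show whether its answers have moved toward the viewpoint represented in the corpus.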