This survey explores the integration of causal inference with Large Language Models (LLMs) to enhance their predictive accuracy, fairness, robustness, and explainability. It focuses on evaluating and improving LLMs from a causal perspective, addressing issues such as reasoning capacity, fairness, safety, and multimodality. The survey also examines how LLMs can contribute to causal inference by aiding in the discovery of causal relationships and the estimation of causal effects. Key topics include:
1. **Model Understanding**: Evaluating LLMs' reasoning abilities and understanding their causal knowledge.
2. **Fairness and Safety**: Addressing biases and ensuring the reliability and safety of LLMs.
3. **Explainability**: Enhancing transparency and trustworthiness through causal explanations.
4. **Multimodality**: Handling multimodal inputs and outputs, particularly in vision-language models.
The survey provides an overview of recent advancements in LLMs, including the introduction of Transformer-based models and their applications in various NLP tasks. It also reviews causal inference methods, such as potential outcomes, graphical models, and structural equations, and discusses how these methods can be applied to improve LLMs. Additionally, the survey highlights existing benchmarks and evaluation metrics for assessing LLMs from a causal perspective, and it explores the potential of LLMs in extending the boundaries of causal inference.