This paper presents a novel methodology for causal inference that integrates statistical causal discovery (SCD) methods with large language models (LLMs) through "statistical causal prompting (SCP)" and prior knowledge augmentation. The authors propose a method to combine LLM-based knowledge-based causal inference (KBCI) with SCD, aiming to improve the accuracy and reliability of causal models. The key contributions include:
1. **Synthesis of LLM-KBCI and SCD**: The proposed method integrates LLM-KBCI and SCD in a mutually-referential manner, enhancing both processes.
2. **Performance Enhancement**: Experiments demonstrate that the augmentation of SCD with LLM-KBCI improves the performance of both SCD and LLM-KBCI.
4. **Enhanced SCD Performance**: The outputs of SCD algorithms augmented with LLM-KBCI can outperform those obtained without SCP, even when no prior knowledge is supplied.
4. **Robustness to datasets not in LLM pre-training**: The method effectively improves SCD results even when the dataset used is not included in the LLM's pre-training data.
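The mutually-referential procedure behind these contributions can be sketched as a simple loop: run SCD on the data, embed the resulting causal graph in a prompt (SCP), ask the LLM to judge each candidate edge from domain knowledge (LLM-KBCI), and rerun SCD constrained by the resulting prior-knowledge matrix. The sketch below is a minimal toy illustration of that flow, not the paper's actual implementation; all function names, the thresholded-correlation stand-in for SCD, and the 0/1 prior-knowledge encoding are assumptions for illustration.

```python
# Hedged sketch of the mutually-referential SCP loop.
# All names and encodings here are illustrative assumptions,
# not the paper's actual API.
import numpy as np

def run_scd(data, prior=None):
    """Toy stand-in for a statistical causal discovery algorithm.
    A real implementation would be e.g. PC, Exact Search, or
    DirectLiNGAM; here we just threshold absolute correlations."""
    adj = (np.abs(np.corrcoef(data, rowvar=False)) > 0.5).astype(int)
    np.fill_diagonal(adj, 0)
    if prior is not None:
        adj[prior == 0] = 0  # forbid edges the LLM judged implausible
    return adj

def statistical_causal_prompt(adj, names):
    """Embed the SCD result in a prompt for LLM-based KBCI (the 'SCP' step)."""
    lines = [f"SCD suggests {names[i]} -> {names[j]}."
             for i in range(len(names)) for j in range(len(names))
             if adj[i, j]]
    return ("Given these statistical results:\n" + "\n".join(lines) +
            "\nAssess each candidate causal relation from domain knowledge.")

def llm_kbci(prompt, n):
    """Placeholder for querying an LLM (e.g. GPT-4) and parsing its
    judgments into a prior-knowledge matrix: 1 = plausible, 0 = forbidden.
    This dummy accepts every edge."""
    return np.ones((n, n), dtype=int)

# One round of the mutually-referential procedure on synthetic data:
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
data = np.hstack([x, 2 * x + rng.normal(size=(200, 1))])  # X causes Y
names = ["X", "Y"]

adj0 = run_scd(data)                             # step 1: plain SCD
prompt = statistical_causal_prompt(adj0, names)  # step 2: SCP
prior = llm_kbci(prompt, len(names))             # step 3: LLM-KBCI
adj1 = run_scd(data, prior=prior)                # step 4: augmented SCD
```

In the paper's actual pipeline the LLM's edge judgments would prune or orient edges, so `adj1` can differ from `adj0`; with the dummy accept-everything prior above the two coincide.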
The study uses benchmark datasets and an unpublished real-world dataset to validate the approach. Key findings include:
- **GPT-4's Role**: GPT-4 can significantly improve the accuracy of SCD results by providing domain-specific knowledge.
- **Dependence on Dataset and SCD Methods**: The effectiveness of SCP varies depending on the number of variables and the specific SCD method used.
- **Health Screening Dataset**: The proposed method successfully aids SCD in a dataset not included in GPT-4's pre-training, demonstrating its robustness and practical utility.
The paper concludes by discussing the broader impact of the proposed approach, emphasizing its potential to enhance causal inference in various scientific fields while addressing ethical considerations.