This paper evaluates the cross-lingual knowledge alignment of large language models (LLMs) and examines how multilingual pretraining and instruction tuning affect this alignment. The authors propose CLiKA, a systematic framework for assessing cross-lingual knowledge alignment at three levels: Performance (PF), Consistency (CT), and Conductivity (CD). The study finds that while both multilingual pretraining and instruction tuning improve cross-lingual knowledge alignment, training strategies must be carefully designed: continued pretraining improves alignment for the target language at the cost of other languages, whereas mixed pretraining affects other languages less.
However, overall cross-lingual knowledge alignment, especially at the conductivity level, remains unsatisfactory for all tested LLMs, and neither multilingual pretraining nor instruction tuning significantly enhances cross-lingual knowledge conductivity. These results highlight the need for novel strategies to deepen cross-lingual knowledge alignment in LLMs.