This paper introduces the Cross-Lingual Model Editing (XME) paradigm, which involves updating a fact in one language and observing the subsequent propagation of the update across other languages. The authors conduct experiments using BLOOM, mBERT, and XLM-RoBERTa models with two writing scripts: Latin (English, French, and Spanish) and Indic (Hindi, Gujarati, and Bengali). The results reveal significant performance limitations of state-of-the-art Model-Editing Techniques (METs) under the XME setting, particularly when the involved languages belong to different script families. The study highlights the need for further research and development of XME techniques to address these challenges. The paper also explores the effectiveness of hypernetwork-based editing techniques, the storage patterns of factual knowledge in different model architectures, and the impact of language selection during initial fine-tuning on editing performance. The findings suggest that different architectures store factual knowledge at different locations, and that the initial fine-tuning language selection significantly affects editing performance. The paper concludes by discussing future directions, including the use of parameter-preserving and localized editing techniques and expanding investigations to other NLP tasks.
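The XME protocol summarized above can be sketched as a simple edit-then-probe loop. The snippet below is a hedged, illustrative toy: it replaces the multilingual language model with a per-language fact table (all names such as `make_model`, `edit`, and `propagation` are hypothetical, not from the paper), and it mimics the paper's key finding by letting same-script languages share storage while cross-script languages do not.

```python
# Toy sketch of XME evaluation: edit a fact in one language, then
# measure in how many languages the edit is retrievable.
# A real setup would query a multilingual LM (e.g. mBERT, XLM-RoBERTa)
# via a MET such as a hypernetwork editor; this mock only illustrates
# the metric.

LANGS = ["en", "fr", "es", "hi", "gu", "bn"]

def make_model():
    # Latin-script languages share one store here, mimicking the
    # stronger within-script transfer reported in the paper; the
    # Indic-script languages get separate stores.
    shared_latin = {}
    return {
        "en": shared_latin, "fr": shared_latin, "es": shared_latin,
        "hi": {}, "gu": {}, "bn": {},
    }

def edit(model, lang, subject, relation, new_object):
    """Apply an edit in a single source language."""
    model[lang][(subject, relation)] = new_object

def propagation(model, subject, relation, expected):
    """Fraction of languages in which the edited fact is retrievable."""
    hits = sum(model[l].get((subject, relation)) == expected for l in LANGS)
    return hits / len(LANGS)

model = make_model()
edit(model, "en", "Eiffel Tower", "located_in", "Rome")
score = propagation(model, "Eiffel Tower", "located_in", "Rome")
# In this mock, the English edit reaches fr/es (same script) but not
# hi/gu/bn, giving a propagation score of 0.5.
```

The single number returned by `propagation` stands in for the cross-lingual generalization metrics an actual XME benchmark would compute per target language.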