Multilingual Instruction Tuning With Just a Pinch of Multilinguality

21 May 2024 | Uri Shaham, Jonathan Herzig, Roee Aharoni, Idan Szpektor, Reut Tsarfaty, Matan Eyal
This paper investigates the impact of multilingual instruction tuning on the ability of large language models (LLMs) to follow instructions across multiple languages. The authors explore how multilinguality during instruction tuning affects instruction-following capabilities in different languages from the pre-training corpus. They find that even monolingual tuning in a single language transfers some instruction-following ability to other languages. They further demonstrate that as few as 40 multilingual examples integrated into an English tuning set significantly improve multilingual instruction-following, in both seen and unseen languages. The study also shows that models tuned on multilingual mixtures perform comparably or better across multiple languages than monolingually tuned models, despite training on 10 times fewer examples in those languages. Furthermore, diversifying the instruction tuning set with just 2-4 languages significantly improves cross-lingual generalization. The results suggest that massively multilingual instruction-tuned models can be built with a very small set of multilingual instruction-response pairs, making the process more efficient and scalable.
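To make the "pinch of multilinguality" recipe concrete, here is a minimal sketch of how such a tuning mixture could be assembled. It is not the authors' code: the dataset variables, the `build_mixture` function, and the per-language split are illustrative assumptions; only the idea of replacing a small number of English examples (around 40, spread over 2-4 languages) with multilingual ones comes from the paper.

```python
import random

def build_mixture(english_examples, multilingual_pools, n_multilingual=40, seed=0):
    """Swap a small number of English examples for multilingual ones.

    english_examples: list of English instruction-response dicts (hypothetical format).
    multilingual_pools: dict mapping language code -> list of examples in that language.
    n_multilingual: total non-English examples to mix in (the paper reports gains
        with as few as 40, drawn from just a handful of languages).
    """
    rng = random.Random(seed)
    languages = list(multilingual_pools)
    per_lang = max(1, n_multilingual // len(languages))

    # Sample a few examples from each non-English language pool.
    sampled = []
    for lang in languages:
        sampled.extend(rng.sample(multilingual_pools[lang], per_lang))

    # Keep the total tuning budget fixed: drop as many English examples as were added.
    kept_english = rng.sample(english_examples, len(english_examples) - len(sampled))
    mixture = kept_english + sampled
    rng.shuffle(mixture)
    return mixture
```

In this sketch the total number of tuning examples stays constant, so any multilingual gains come from the composition of the mixture rather than from additional data, mirroring the paper's controlled comparisons.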