This paper introduces Meteor Universal, a machine translation evaluation metric that supports language-specific resources and generalizes to new languages. Meteor Universal addresses the challenge of evaluating translation quality for languages without extensive linguistic resources by automatically extracting paraphrase tables and function word lists from the bitext used to train the translation system, and by applying a universal parameter set learned from pooled human judgments across several language directions. The metric significantly outperforms baseline BLEU in correlation with human judgments on Russian (WMT13) and Hindi (WMT14), demonstrating its effectiveness in bringing language-specific evaluation to new target languages. The paper details the Meteor scoring function, the automatic extraction of language-specific resources, the training of the universal parameter set, the experimental results, and the released software.
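To make the scoring function concrete, the following is a minimal Python sketch of Meteor's parameterized score, assuming the match statistics (weighted matches over content and function words, plus the chunk count) have already been produced by an aligner. It collapses the per-matcher weights to exact matches only and uses a single weighted match count for both precision and recall, which is a simplification of the full formulation. The default parameter values (alpha = 0.70, beta = 1.40, gamma = 0.30, delta = 0.70) follow the universal set reported in the paper, but the `MatchStats` container and function names here are hypothetical illustration, not the released software's API.

```python
# Minimal sketch of the Meteor scoring function, assuming match statistics
# have already been computed by the aligner. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class MatchStats:
    content_matches: float   # weighted matches over content words
    function_matches: float  # weighted matches over function words
    hyp_content: int         # content words in the hypothesis
    hyp_function: int        # function words in the hypothesis
    ref_content: int         # content words in the reference
    ref_function: int        # function words in the reference
    chunks: int              # contiguous matched chunks in the alignment
    matches: int             # total matched words


def meteor_score(s: MatchStats,
                 alpha: float = 0.70,   # precision/recall balance
                 beta: float = 1.40,    # fragmentation penalty exponent
                 gamma: float = 0.30,   # fragmentation penalty weight
                 delta: float = 0.70    # content vs. function word weight
                 ) -> float:
    """Compute (1 - Pen) * F_mean from precomputed match statistics."""
    # Weighted match count: content words count delta, function words 1 - delta.
    weighted_m = delta * s.content_matches + (1 - delta) * s.function_matches
    precision = weighted_m / (delta * s.hyp_content + (1 - delta) * s.hyp_function)
    recall = weighted_m / (delta * s.ref_content + (1 - delta) * s.ref_function)
    if precision + recall == 0:
        return 0.0
    # Harmonic mean parameterized by alpha.
    f_mean = precision * recall / (alpha * precision + (1 - alpha) * recall)
    # Fragmentation penalty: fewer, longer chunks mean better word order.
    penalty = gamma * (s.chunks / s.matches) ** beta
    return (1 - penalty) * f_mean


# Example: 7 of 10 hypothesis words matched, falling into 3 chunks.
stats = MatchStats(content_matches=5, function_matches=2,
                   hyp_content=7, hyp_function=3,
                   ref_content=8, ref_function=3,
                   chunks=3, matches=7)
print(f"{meteor_score(stats):.3f}")
```

Under this parameterization, tuning for a new language reduces to supplying the extracted function word list and paraphrase table; the four scalar parameters stay fixed at the universal values rather than being re-estimated per language.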