Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation


3 Sep 2014 | Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
This paper introduces a novel neural network architecture called the RNN Encoder-Decoder, which consists of two recurrent neural networks (RNNs). One RNN encodes a sequence of symbols into a fixed-length vector representation, while the other decodes this representation back into another sequence of symbols. The encoder and decoder are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is improved by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as additional features in the existing log-linear model. Qualitatively, the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. The RNN Encoder-Decoder is evaluated on the task of translating from English to French, and it is found to improve translation performance. The model is also analyzed to show that it captures linguistic regularities in the phrase table and learns a continuous space representation of phrases that preserves both semantic and syntactic structure.
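As a rough illustration of the encoder-decoder idea, here is a minimal sketch in PyTorch. All names, dimensions, and hyperparameters are illustrative, not from the paper, and the sketch simplifies the paper's decoder: there, the summary vector c also conditions every decoder step and the output layer, whereas here c is used only as the decoder's initial hidden state. The gated hidden unit proposed in the paper is the basis of what nn.GRU implements today.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    # Hypothetical, simplified model: one RNN encodes the source into a
    # fixed-length vector c; a second RNN decodes c into the target sequence.
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt_in):
        # Encode the source into the fixed-length summary c
        # (the encoder's final hidden state).
        _, c = self.encoder(self.src_emb(src))
        # Initialize the decoder with c and unroll over the target prefix.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), c)
        return self.out(dec_out)  # logits over the target vocabulary

# Joint training maximizes log p(target | source), i.e. minimizes
# cross-entropy between the predicted logits and the shifted targets.
model = EncoderDecoder(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (2, 7))   # toy batch of source phrases
tgt = torch.randint(0, 1000, (2, 5))   # toy batch of target phrases
logits = model(src, tgt[:, :-1])       # predict each next target symbol
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), tgt[:, 1:].reshape(-1))
loss.backward()
```

In the SMT setting described above, a trained model of this kind would be run over each phrase pair in the phrase table, and the resulting log-probability would be added as one more feature in the log-linear translation model rather than used to translate directly.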