9–14 June 2013 | Chris Dyer, Victor Chahuneau, Noah A. Smith
The paper presents a simple log-linear reparameterization of IBM Model 2 that overcomes both Model 1's overly strong assumptions and Model 2's overparameterization. The proposed model supports efficient inference, likelihood evaluation, and parameter estimation, and trains significantly faster than Model 4. The authors give efficient algorithms for computing marginal likelihoods, alignment probabilities, and partition functions, all of which are central to training and inference. Experimental results on three large-scale translation tasks show that the proposed model outperforms IBM Model 4 in both alignment quality and translation quality. An open-source implementation of the model is available on GitHub.
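To make the reparameterization concrete, the key idea is that the alignment of target position i (out of m words) to source position j (out of n words) is scored by how far the relative positions i/m and j/n diverge, with a single sharpness parameter replacing Model 2's full table of position probabilities. The sketch below illustrates this form in Python; the parameter values (lam, p0) are illustrative choices, not values from the paper, and the partition function is summed directly rather than via the paper's closed-form expression.

```python
import math

def alignment_probs(i, m, n, lam=4.0, p0=0.08):
    """Log-linear distortion prior of the reparameterized Model 2:
    p(a_i = j) ∝ exp(-lam * |i/m - j/n|), plus a fixed null-alignment
    probability p0. lam and p0 here are illustrative, not from the paper."""
    # Feature: negative distance between relative positions.
    h = [-abs(i / m - j / n) for j in range(1, n + 1)]
    # Partition function (the paper derives a closed form; we sum directly).
    z = sum(math.exp(lam * x) for x in h)
    probs = [(1 - p0) * math.exp(lam * x) / z for x in h]
    return [p0] + probs  # index 0 = null alignment, index j = source word j

# Target word 2 of 4 prefers source positions near the middle of a 5-word source.
probs = alignment_probs(i=2, m=4, n=5)
```

The distribution peaks where j/n is closest to i/m and decays geometrically with distance, which is what makes the closed-form partition function (and hence fast training) possible.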