1 Apr 2019 | Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli
FAIRSEQ is an open-source sequence modeling toolkit for training custom models on tasks such as translation, summarization, and language modeling. It is built on PyTorch and supports distributed training across multiple GPUs and machines, as well as fast mixed-precision training and inference on modern GPUs. The toolkit provides a common interface for models and tasks, efficient distributed and mixed-precision training, state-of-the-art implementations and pre-trained models, and optimized inference with various search algorithms. FAIRSEQ is designed to be fast, extensible, and suitable for both research and production. It has been used in numerous applications, including machine translation, language modeling, and abstractive document summarization. The toolkit is available under a BSD license on GitHub.
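To give a sense of the kind of search algorithm the abstract refers to for inference, here is a minimal beam search sketch over a toy scoring function. This is an illustrative example only, not FAIRSEQ's actual (optimized, batched) implementation; the function and variable names are hypothetical.

```python
import math

def beam_search(step_log_probs, beam_size, max_len):
    """Toy beam search: step_log_probs maps a token prefix to a dict of
    {next_token: log_prob}. Keeps the beam_size best hypotheses per step."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # prune to the beam_size highest-scoring hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

# Toy "model" that always prefers token 1 over token 0
def toy_model(prefix):
    return {0: math.log(0.3), 1: math.log(0.7)}

best = beam_search(toy_model, beam_size=2, max_len=3)[0]
# best hypothesis is [1, 1, 1] with score 3 * log(0.7)
```

In practice FAIRSEQ batches such search over the GPU and supports variants like sampling and diverse beam search, but the pruning logic above captures the core idea.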