9 Mar 2017 | Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou & Yoshua Bengio
This paper introduces a novel model for generating interpretable sentence embeddings using self-attention. Unlike traditional methods that represent a sentence as a single vector, the proposed model uses a 2-D matrix embedding in which each row attends to a different part of the sentence. The model combines a self-attention mechanism with a regularization term that encourages diversity among the attention weights, so different rows focus on different parts of the sentence. This makes it easy to visualize which specific parts of the sentence are encoded into the embedding. The model is evaluated on three tasks: author profiling, sentiment classification, and textual entailment, showing significant performance improvements over other sentence embedding methods. The paper also discusses related work, experimental results, and exploratory experiments that validate the effectiveness of the proposed model.
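The core mechanism can be sketched as follows. Given the hidden states of a bidirectional LSTM stacked into a matrix `H` (one row per token), the paper computes an attention matrix `A = softmax(W_s2 tanh(W_s1 H^T))` with `r` rows, forms the matrix embedding `M = A H`, and penalizes `||A A^T - I||_F^2` so the `r` attention rows diverge. The NumPy sketch below follows those equations; the variable names and dimensions (`d_a`, `r`) match the paper's notation, but the concrete shapes here are illustrative, not the paper's hyperparameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, W_s1, W_s2):
    """Compute the 2-D matrix sentence embedding.

    H:    (n, 2u) LSTM hidden states, one row per token.
    W_s1: (d_a, 2u) first attention projection.
    W_s2: (r, d_a) second projection; r = number of attention rows.

    Returns M (r, 2u), the matrix embedding, and A (r, n), the
    attention weights (each row sums to 1 over the n tokens).
    """
    A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=-1)
    M = A @ H
    return M, A

def penalization(A):
    """Frobenius-norm penalty ||A A^T - I||_F^2 from the paper.

    Pushes the r attention distributions toward being disjoint,
    i.e. each row of the embedding attends to a different part
    of the sentence.
    """
    r = A.shape[0]
    diff = A @ A.T - np.eye(r)
    return float((diff ** 2).sum())

# Toy usage with illustrative sizes: n=6 tokens, 2u=8, d_a=5, r=3.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
W_s1 = rng.normal(size=(5, 8))
W_s2 = rng.normal(size=(3, 5))
M, A = structured_self_attention(H, W_s1, W_s2)
p = penalization(A)
```

Each of the `r` rows of `A` is a probability distribution over tokens, which is what makes the embedding directly visualizable as a heatmap over the sentence.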