The paper "Attention is not not Explanation" by Sarah Wiegreffe and Yuval Pinter challenges the claim, made by Jain and Wallace (2019) in "Attention is not Explanation", that attention mechanisms in neural networks (NNs) cannot be used to explain model predictions. The authors argue that both the definition of explanation and the experimental setup used by Jain and Wallace are flawed. They propose four alternative tests for determining when attention can serve as an explanation: a uniform-weights baseline, variance calibration, a diagnostic framework using frozen attention weights, and an end-to-end adversarial attention training protocol. The results show that even when reliable adversarial attention distributions are found, they perform poorly on the diagnostic tests, indicating that learned attention weights can still provide meaningful interpretations of model decisions. The authors conclude that attention mechanisms can be useful for explainability, provided the definition of explanation is carefully considered.
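To make the first of these tests concrete, below is a minimal sketch of the uniform-weights baseline, not the authors' actual code: a toy attention classifier whose learned attention distribution can be swapped for uniform weights, so that predictions under the two can be compared. All names here (`AttnClassifier`, the layer sizes, the synthetic inputs) are hypothetical, chosen only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnClassifier(nn.Module):
    """Toy attention-based classifier over pre-embedded token sequences."""

    def __init__(self, emb_dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.score = nn.Linear(emb_dim, 1)       # per-token attention scores
        self.out = nn.Linear(emb_dim, n_classes)

    def forward(self, embs: torch.Tensor, uniform: bool = False):
        # embs: (batch, seq_len, emb_dim)
        if uniform:
            # Uniform baseline: every token gets weight 1/seq_len.
            attn = torch.full(embs.shape[:2], 1.0 / embs.shape[1])
        else:
            attn = F.softmax(self.score(embs).squeeze(-1), dim=-1)
        context = torch.einsum("bs,bsd->bd", attn, embs)  # weighted sum of tokens
        return self.out(context), attn

model = AttnClassifier()
x = torch.randn(4, 10, 32)  # 4 fake "sentences" of 10 tokens each

with torch.no_grad():
    learned_logits, _ = model(x)
    uniform_logits, _ = model(x, uniform=True)

# Fraction of examples whose predicted class is unchanged under uniform
# attention. If this is high, attention weights are doing little work for
# this model/task, so debating their explanatory value is moot.
agreement = (learned_logits.argmax(-1) == uniform_logits.argmax(-1)).float().mean()
print(agreement.item())
```

The intuition behind this baseline, as the paper frames it: if replacing learned attention with uniform weights barely changes model outputs on a given task, then attention is not a meaningful locus of explanation there in the first place, and adversarial-attention results on that task carry little weight.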