This paper presents a siamese adaptation of the Long Short-Term Memory (LSTM) network for learning semantic similarity between sentences. The authors propose a model in which a shared LSTM processes each sentence of a variable-length pair, with the goal of capturing rich semantic relationships. The model is trained on labeled data and uses word-embedding vectors supplemented with synonym information to encode the underlying meaning of sentences. By restricting the comparison of sentence representations to a simple Manhattan metric, the model is forced to learn a highly structured space that reflects complex semantic relationships. The results demonstrate that the proposed model outperforms state-of-the-art methods, including systems based on handcrafted features and more complex neural networks, at evaluating sentence similarity. The paper also examines the geometry of the learned sentence-representation space and its application to textual entailment classification, highlighting the interpretability and practical utility of the learned representations.
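To make the architecture concrete, below is a minimal sketch (not the authors' code) of a siamese LSTM scored with the Manhattan metric: both sentences are encoded by the same LSTM, and similarity is computed as exp of the negative L1 distance between their final hidden states. The PyTorch framework, the hyperparameters, and the randomly initialized embeddings are illustrative assumptions; the paper uses pretrained word embeddings augmented with synonym information.

```python
import torch
import torch.nn as nn


class SiameseLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=50):
        super().__init__()
        # Illustrative: random embeddings stand in for the pretrained,
        # synonym-augmented word vectors described in the paper.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, tokens):
        # Both sentences pass through the *same* LSTM (shared weights);
        # the final hidden state serves as the sentence representation.
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return h_n[-1]                      # shape: (batch, hidden_dim)

    def forward(self, sent_a, sent_b):
        h_a, h_b = self.encode(sent_a), self.encode(sent_b)
        # Manhattan (L1) similarity: exp(-||h_a - h_b||_1), in (0, 1].
        return torch.exp(-torch.sum(torch.abs(h_a - h_b), dim=1))


# Usage: score a small batch of (padded) token-id sequences.
model = SiameseLSTM(vocab_size=1000)
a = torch.randint(0, 1000, (2, 7))          # 2 sentences of length 7
b = torch.randint(0, 1000, (2, 9))          # 2 sentences of length 9
print(model(a, b))                          # similarity scores in (0, 1]
```

Because the similarity function is this fixed, simple metric, any capacity to model semantic relationships has to reside in the LSTM encoder itself, which is what drives the highly structured representation space the paper describes.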