This paper addresses the challenge of target-dependent sentiment classification, which involves inferring the sentiment polarity of a sentence towards a specific target word. The authors develop two long short-term memory (LSTM) models that incorporate target information to improve classification accuracy. The first model, Target-Dependent LSTM (TD-LSTM), models the semantic relatedness between the target word and its context words by using separate LSTM networks for the preceding and following contexts. The second model, Target-Connection LSTM (TC-LSTM), extends TD-LSTM by explicitly capturing the connections between the target word and each context word. The models are trained end-to-end with a cross-entropy loss and evaluated on a benchmark Twitter dataset. Empirical results show that incorporating target information into LSTM significantly boosts classification accuracy, outperforming a standard LSTM and other baseline methods without relying on syntactic parsers or external sentiment lexicons. The proposed models achieve state-of-the-art performance in target-dependent sentiment classification.
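To make the two architectures concrete, below is a minimal PyTorch sketch of how TD-LSTM and TC-LSTM could be realized from the description above. The class name, dimensions, and toy inputs are illustrative assumptions, not the authors' reference implementation; the sketch only reflects the summarized design (two LSTMs over the preceding and following contexts, with TC-LSTM additionally appending an averaged target vector to every word embedding).

```python
# A minimal sketch of TD-LSTM / TC-LSTM, assuming PyTorch.
# Names, dimensions, and the toy usage at the bottom are illustrative.
import torch
import torch.nn as nn


class TargetLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes,
                 target_connection=False):
        super().__init__()
        self.target_connection = target_connection  # False: TD-LSTM, True: TC-LSTM
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # TC-LSTM concatenates the averaged target vector to every word
        # embedding, so its LSTM input is twice as wide.
        in_dim = embed_dim * 2 if target_connection else embed_dim
        self.lstm_left = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.lstm_right = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, left_ids, right_ids, target_ids):
        # left_ids:  words from sentence start up to and including the target
        # right_ids: words from sentence end down to and including the target
        #            (i.e. the following context in reverse order)
        # target_ids: the target word(s); all tensors are (batch, seq_len)
        left = self.embed(left_ids)
        right = self.embed(right_ids)
        if self.target_connection:
            # Average the target embeddings and append the result to each
            # context word embedding (the "target connection").
            t = self.embed(target_ids).mean(dim=1, keepdim=True)
            left = torch.cat([left, t.expand_as(left)], dim=-1)
            right = torch.cat([right, t.expand_as(right)], dim=-1)
        _, (h_left, _) = self.lstm_left(left)     # final hidden state, left LSTM
        _, (h_right, _) = self.lstm_right(right)  # final hidden state, right LSTM
        features = torch.cat([h_left[-1], h_right[-1]], dim=-1)
        return self.classifier(features)          # logits for cross-entropy loss


# Toy usage with made-up sizes (3 classes: positive / negative / neutral).
model = TargetLSTM(vocab_size=10000, embed_dim=100, hidden_dim=100,
                   num_classes=3, target_connection=True)   # TC-LSTM variant
left = torch.randint(0, 10000, (2, 6))    # preceding context + target
right = torch.randint(0, 10000, (2, 5))   # following context + target, reversed
target = torch.randint(0, 10000, (2, 1))  # the target word itself
logits = model(left, right, target)       # shape (2, 3); train with nn.CrossEntropyLoss
```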