8-14 July 2012 | Sida Wang and Christopher D. Manning
The paper by Sida Wang and Christopher D. Manning of Stanford University, "Baselines and Bigrams: Simple, Good Sentiment and Topic Classification," examines how Naive Bayes (NB) and Support Vector Machines (SVM) perform on text classification, particularly sentiment analysis. The authors show that performance varies significantly with the model variant, the features used, and the specific task or dataset. Key findings include:
1. **Inclusion of Bigrams**: Adding word bigram features consistently improves performance on sentiment analysis tasks.
2. **Short vs. Long Documents**: For short snippet sentiment tasks, NB outperforms SVMs, while for longer documents, the opposite is true.
3. **NBSVM Variant**: A novel SVM variant using NB log-count ratios as feature values performs well across various tasks and datasets.
4. **MNB vs. Multivariate Bernoulli NB**: Multivariate Bernoulli NB (MBNB) generally performs worse than Multinomial NB (MNB), which is more stable and effective.
5. **Robust Performance of NBSVM**: The NBSVM model, which combines NB and SVM features, is a strong and robust performer, often outperforming published results on sentiment analysis datasets.
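The MNB-versus-Bernoulli contrast in point 4 comes down to the feature representation: Multinomial NB models word counts, while Bernoulli NB models only word presence or absence. A minimal sketch using scikit-learn, with a hypothetical toy document-term matrix (not data from the paper):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# Toy document-term count matrix (rows: documents, cols: vocabulary terms)
X_counts = np.array([[3, 0, 1],
                     [0, 2, 1],
                     [1, 0, 4],
                     [0, 3, 0]])
y = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

# MNB consumes the raw counts directly.
mnb = MultinomialNB().fit(X_counts, y)

# Bernoulli NB sees only presence/absence, so counts are binarized first.
bnb = BernoulliNB().fit(X_counts > 0, y)
```

The binarization step is exactly the information Bernoulli NB discards; on longer documents, where repeated terms carry signal, that loss is one plausible reason MNB tends to be the more stable variant.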
The authors also provide a detailed experimental setup and results on multiple datasets, including short movie reviews, customer reviews, and full-length movie reviews. They conclude that NBSVM is a strong, simple baseline against which more sophisticated methods aiming to improve upon bag-of-words models should be compared.
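The NBSVM construction in points 3 and 5 can be sketched as follows: compute the smoothed NB log-count ratio over the two classes, then rescale each document's (binarized) feature vector by that ratio before training a linear SVM. A minimal sketch on a hypothetical toy corpus; the smoothing value `alpha` and SVM parameter `C` are illustrative, not the paper's tuned settings:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy binarized document-term matrix (rows: docs, cols: unigram/bigram features)
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
y = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

alpha = 1.0                                  # additive smoothing
p = alpha + X[y == 1].sum(axis=0)            # smoothed positive-class counts
q = alpha + X[y == 0].sum(axis=0)            # smoothed negative-class counts
r = np.log((p / p.sum()) / (q / q.sum()))    # NB log-count ratio per feature

# Use the log-count ratios as feature values, then train a linear SVM.
X_nb = X * r
clf = LinearSVC(C=1.0).fit(X_nb, y)
```

The ratio `r` is large and positive for features strongly associated with the positive class and negative for the opposite, so the SVM effectively learns a margin over NB-informed evidence rather than raw occurrences.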