Sentiment Analysis with Python (Part 2)

Improving a Movie Review Sentiment Classifier

Aaron Kub
Towards Data Science



In part one of this series we built a barebones movie review sentiment classifier. The goal of this post is to provide an overview of several techniques that can be used to enhance an NLP model; that said, we won’t go too deep into any particular topic.

If you haven’t yet, you can read my first post here:

All of the code used in this series along with supplemental materials can be found in this GitHub Repository.

Text Processing

For our first iteration we did very basic text processing like removing punctuation and HTML tags and making everything lower-case. We can clean things up further by removing stop words and normalizing the text.

To make these transformations we’ll use the Natural Language Toolkit (NLTK), a very popular NLP library for Python.

Removing Stop Words

Stop words are the very common words like ‘if’, ‘but’, ‘we’, ‘he’, ‘she’, and ‘they’. We can usually remove these words without changing the semantics of a text and doing so often (but not always) improves the performance of a model. Removing these stop words becomes a lot more useful when we start using longer word sequences as model features (see n-grams below).

Before

"bromwell high is a cartoon comedy it ran at the same time as some other programs about school life such as teachers my years in the teaching profession lead me to believe that bromwell high’s satire is much closer to reality than is teachers the scramble to survive financially the insightful students who can see right through their pathetic teachers’ pomp the pettiness of the whole situation all remind me of the schools i knew and their students when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled at high a classic line inspector i’m here to sack one of your teachers student welcome to bromwell high i expect that many adults of my age think that bromwell high is far fetched what a pity that it isn’t"

After

"bromwell high cartoon comedy ran time programs school life teachers years teaching profession lead believe bromwell high's satire much closer reality teachers scramble survive financially insightful students see right pathetic teachers' pomp pettiness whole situation remind schools knew students saw episode student repeatedly tried burn school immediately recalled high classic line inspector i'm sack one teachers student welcome bromwell high expect many adults age think bromwell high far fetched pity"

Note: In practice, an easier way to remove stop words is to just use the stop_words argument with any of scikit-learn’s ‘Vectorizer’ classes. Passing stop_words='english' uses scikit-learn’s built-in list of English stop words, and you can also pass in NLTK’s full list directly. I’ve found that using a full list like that actually decreases my performance because it’s too expansive, so I usually supply my own short list of words, for example stop_words=['in', 'of', 'at', 'a', 'the'].
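
As a quick sketch, either of the following would drop stop words during vectorization (binary=True matches the representation from part 1):

from sklearn.feature_extraction.text import CountVectorizer

# scikit-learn's built-in English stop word list:
cv_builtin = CountVectorizer(binary=True, stop_words='english')

# A small custom list like the one mentioned above:
cv_custom = CountVectorizer(binary=True, stop_words=['in', 'of', 'at', 'a', 'the'])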

Normalization

A common next step in text preprocessing is to normalize the words in your corpus by trying to convert all of the different forms of a given word into one. Two methods that exist for this are Stemming and Lemmatization.

Stemming

Stemming is considered the more crude/brute-force approach to normalization (although this doesn’t necessarily mean it will perform worse). There are several algorithms, but in general they all use basic rules to chop off the ends of words.

NLTK has several stemming algorithm implementations. We’ll use the Porter stemmer here, but you can explore all of the options, with examples, in the NLTK Stemmers documentation.
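
A minimal sketch using NLTK’s Porter stemmer (the helper name is just illustrative):

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def get_stemmed_text(corpus):
    # Stem every whitespace-separated token in each review.
    return [' '.join(stemmer.stem(word) for word in review.split()) for review in corpus]

print(get_stemmed_text(["this is not the typical mel brooks film"]))
# ['thi is not the typic mel brook film']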

Lemmatization

Lemmatization works by identifying the part-of-speech of a given word and then applying more complex rules to transform the word into its true root.
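
NLTK’s WordNet lemmatizer is one common implementation. Here is a minimal sketch; note that when it is called without part-of-speech tags, as below, every word is treated as a noun, which explains some of the odd results that follow:

import nltk
from nltk.stem import WordNetLemmatizer

# The WordNet data needs to be downloaded once.
nltk.download('wordnet', quiet=True)

lemmatizer = WordNetLemmatizer()

def get_lemmatized_text(corpus):
    # Without an explicit POS tag, lemmatize() treats each word as a noun.
    return [' '.join(lemmatizer.lemmatize(word) for word in review.split()) for review in corpus]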

Results

No Normalization

"this is not the typical mel brooks film it was much less slapstick than most of his movies and actually had a plot that was followable leslie ann warren made the movie she is such a fantastic under rated actress there were some moments that could have been fleshed out a bit more and some scenes that could probably have been cut to make the room to do so but all in all this is worth the price to rent and see it the acting was good overall brooks himself did a good job without his characteristic speaking to directly to the audience again warren was the best actor in the movie but fume and sailor both played their parts well"

Stemmed

"thi is not the typic mel brook film it wa much less slapstick than most of hi movi and actual had a plot that wa follow lesli ann warren made the movi she is such a fantast under rate actress there were some moment that could have been flesh out a bit more and some scene that could probabl have been cut to make the room to do so but all in all thi is worth the price to rent and see it the act wa good overal brook himself did a good job without hi characterist speak to directli to the audienc again warren wa the best actor in the movi but fume and sailor both play their part well"

Lemmatized

"this is not the typical mel brook film it wa much le slapstick than most of his movie and actually had a plot that wa followable leslie ann warren made the movie she is such a fantastic under rated actress there were some moment that could have been fleshed out a bit more and some scene that could probably have been cut to make the room to do so but all in all this is worth the price to rent and see it the acting wa good overall brook himself did a good job without his characteristic speaking to directly to the audience again warren wa the best actor in the movie but fume and sailor both played their part well"

n-grams

Last time we used only single-word features in our model, which we call 1-grams or unigrams. We can potentially add more predictive power to our model by also including two- or three-word sequences (bigrams or trigrams). For example, if a review contains the three-word sequence “didn’t love movie”, a unigram-only model considers each of those words individually and will probably miss that the sentiment is actually negative, because the word ‘love’ by itself is highly correlated with positive reviews.

The scikit-learn library makes this really easy to play around with. Just use the ngram_range argument with any of the ‘Vectorizer’ classes.
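
For example, here is a sketch with a toy corpus standing in for the cleaned reviews from part 1 (the accuracy mentioned below comes from the full IMDB data, not this toy example):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the cleaned reviews and their labels from part 1.
reviews = ["loved this movie", "didn't love this movie", "terrible acting", "amazing film"]
labels = [1, 0, 0, 1]

# ngram_range=(1, 2) keeps single words and adds two-word sequences as features.
ngram_vectorizer = CountVectorizer(binary=True, ngram_range=(1, 2))
X = ngram_vectorizer.fit_transform(reviews)
print(len(ngram_vectorizer.vocabulary_))  # number of unigram + bigram features

model = LogisticRegression()
model.fit(X, labels)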

Getting pretty close to 90%! So, simply considering 2-word sequences in addition to single words increased our accuracy by more than 1.6 percentage points.

Note: There’s technically no limit on the size that n can be for your model, but there are several things to consider. First, increasing the number of grams will not necessarily give you better performance. Second, the number of features (and with it the size of your matrix) grows rapidly as you increase n, so if you have a large corpus made up of long documents your model may take a very long time to train.

Representations

In part 1 we represented each review as a binary vector (1s and 0s) with a slot/column for every unique word in our corpus, where 1 represents that a given word was in the review.

While this simple approach can work very well, there are ways that we can encode more information into the vector.

Word Counts

Instead of simply noting whether a word appears in the review or not, we can include the number of times a given word appears. This can give our sentiment classifier a lot more predictive power. For example, if a movie reviewer says ‘amazing’ or ‘terrible’ multiple times in a review it is considerably more probable that the review is positive or negative, respectively.
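
With scikit-learn this just means leaving the binary argument at its default of False. A small illustrative sketch:

from sklearn.feature_extraction.text import CountVectorizer

# binary=False (the default) stores how many times each word appears,
# rather than just whether it appears at all.
wc_vectorizer = CountVectorizer(binary=False)
X = wc_vectorizer.fit_transform(["amazing amazing amazing plot", "terrible terrible acting"])
print(X.toarray())  # word counts instead of 0/1 indicators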

TF-IDF

Another common way to represent each document in a corpus is to use the tf-idf statistic (term frequency-inverse document frequency) for each word, which is a weighting factor that we can use in place of binary or word count representations.

There are several variants of the tf-idf transformation, but in a nutshell, tf-idf weights each word by how often it appears in a document (a movie review in our case) relative to how many documents in the corpus contain that word: words that appear in many documents get weights close to zero, while words that appear in fewer documents get larger weights.
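
scikit-learn’s TfidfVectorizer handles this directly. A tiny illustrative sketch:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the movie was amazing",
    "the movie was terrible",
    "the acting was amazing",
]

tfidf_vectorizer = TfidfVectorizer()
X = tfidf_vectorizer.fit_transform(corpus)

# 'the' and 'was' appear in every document, so they get lower weights
# than rarer, more informative words like 'terrible'.
print(dict(zip(tfidf_vectorizer.get_feature_names_out(), X.toarray()[1].round(2))))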

Note: Now that we’ve gone over n-grams, when I refer to ‘words’ I really mean any n-gram (sequence of words) if the model is using an n greater than one.

Algorithms

So far we’ve chosen to represent each review as a very sparse vector (lots of zeros!) with a slot for every unique n-gram in the corpus (minus n-grams that appear too often or not often enough). Linear classifiers typically perform better than other algorithms on data that is represented in this way.

Support Vector Machines (SVM)

Recall that linear classifiers tend to work well on very sparse datasets (like the one we have). Another algorithm that can produce great results with a quick training time is the Support Vector Machine with a linear kernel.

Here’s an example with an n-gram range from 1 to 2:
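
(The code below is a sketch with a toy corpus standing in for the cleaned IMDB reviews; the C value is just an example and is worth tuning on a validation set.)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for the cleaned reviews and their labels; use the real data in practice.
reviews = ["loved this movie", "didn't love this movie", "terrible acting", "amazing film"] * 25
labels = [1, 0, 0, 1] * 25

ngram_vectorizer = CountVectorizer(binary=True, ngram_range=(1, 2))
X = ngram_vectorizer.fit_transform(reviews)

X_train, X_val, y_train, y_val = train_test_split(X, labels, train_size=0.75)

svm = LinearSVC(C=0.01)
svm.fit(X_train, y_train)
print("Validation accuracy:", accuracy_score(y_val, svm.predict(X_val)))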

There are many great explanations of Support Vector Machines that do a much better job than I could. If you’re interested in learning more, this is a great tutorial:

Final Model

The goal of this post was to give you a toolbox of techniques to mix and match when searching for the right combination of model and data transformation for your project. I found that removing a small set of stop words, using an n-gram range from 1 to 3, and training a linear support vector classifier gave me the best results.
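
A sketch of that final configuration, again with placeholder data standing in for the cleaned reviews and their labels:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Placeholder data; in the real pipeline these are the cleaned IMDB reviews
# and their labels from part 1.
train_reviews = ["loved this movie", "didn't love this movie", "terrible acting", "amazing film"]
train_labels = [1, 0, 0, 1]

# Small custom stop word list, plus unigrams, bigrams, and trigrams.
stop_words = ['in', 'of', 'at', 'a', 'the']
final_vectorizer = CountVectorizer(binary=True, ngram_range=(1, 3), stop_words=stop_words)
X_train = final_vectorizer.fit_transform(train_reviews)

# The C value here is illustrative; tune it on a held-out validation set.
final_model = LinearSVC(C=0.01)
final_model.fit(X_train, train_labels)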

We broke the 90% mark!

Summary

We’ve gone over several options for transforming text that can improve the accuracy of an NLP model. Which combination of these techniques will yield the best results will depend on the task, data representation, and algorithms you choose. It’s always a good idea to try out many different combinations to see what works.

I’m very confident a higher accuracy on this data can be attained with a different combination of the things outlined in this post. I’ll leave that for a more ambitious reader. :) Please comment with your results and method!

Next Time

The next parts of this series will explore deep learning approaches to building a sentiment classifier.
