NLP Learning Series (Part 4)
Transfer Learning Intuition for Text Classification
Making Machines read for us
This post is the fourth in the NLP text classification series. To recap: I started with an NLP text classification competition on Kaggle called the Quora Question Insincerity challenge, and decided to share what I learned through a series of blog posts on text classification. The first post covered the different preprocessing techniques that work with deep learning models and ways to increase embedding coverage. In the second post, I walked through some conventional models like TF-IDF, Count Vectorizer, Hashing, etc. that have been used in text classification, and assessed their performance to create a baseline. In the third post, I delved deeper into deep learning models and the various architectures we could use to solve the text classification problem. In this post, I will try the ULMFiT model, which is a transfer learning approach for NLP.
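To give a flavour of what the ULMFiT workflow looks like before we dive in, here is a minimal sketch using the fastai v1 text API: fine-tune a pretrained AWD-LSTM language model on your corpus, save its encoder, and reuse it in a classifier. The file name `train.csv` and the path are placeholders, and the hyperparameters are illustrative, not the exact settings used later in the post.

```python
from fastai.text import *

path = Path('data')  # placeholder: folder containing train.csv with text and label columns

# 1. Fine-tune the pretrained language model on our own text
data_lm = TextLMDataBunch.from_csv(path, 'train.csv')
lm_learner = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learner.fit_one_cycle(1, 1e-2)
lm_learner.unfreeze()
lm_learner.fit_one_cycle(1, 1e-3)
lm_learner.save_encoder('ft_enc')  # keep the fine-tuned encoder

# 2. Build a classifier on top of the fine-tuned encoder
data_clas = TextClasDataBunch.from_csv(path, 'train.csv', vocab=data_lm.train_ds.vocab)
clf_learner = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf_learner.load_encoder('ft_enc')
clf_learner.fit_one_cycle(1, 1e-2)
```

The key idea, which we will unpack throughout the post, is that the language-model pretraining and fine-tuning steps give the classifier a much better starting point than training from scratch.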
As a side note: if you want to know more about NLP, I would recommend this excellent course on Natural Language Processing in the Advanced Machine Learning Specialization. You can start for free with the 7-day free trial. This course covers a wide range of tasks in Natural Language Processing, from basic to advanced: sentiment analysis…