Multi-Task Learning in Language Model for Text Classification

Universal Language Model Fine-tuning for Text Classification

Edward Ma
Towards Data Science
5 min read · Mar 27, 2019

Howard and Ruder propose a method to enable robust transfer learning for any NLP task through three stages: general-domain language model pre-training, target-task language model fine-tuning, and classifier fine-tuning. The same 3-layer LSTM architecture, with the same hyperparameters except for different dropout…
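The three stages above can be sketched with a toy PyTorch model. This is only a minimal illustration under assumed details: the tiny LSTM, vocabulary size, random batches, and learning rates below are hypothetical stand-ins for the AWD-LSTM and real corpora the paper actually uses.

```python
import torch
import torch.nn as nn

# Hypothetical toy sizes, not the paper's settings.
VOCAB, EMB, HID = 50, 16, 32

class LSTMLanguageModel(nn.Module):
    """3-layer LSTM with a next-token (language-modelling) head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, num_layers=3, batch_first=True)
        self.decoder = nn.Linear(HID, VOCAB)  # predicts the next token

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.decoder(h)

def lm_step(model, batch, lr):
    """One LM training step: predict token t+1 from tokens up to t."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    logits = model(batch[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()

torch.manual_seed(0)
model = LSTMLanguageModel()

# Stage 1: general-domain LM pre-training (random "corpus" here).
general = torch.randint(0, VOCAB, (8, 12))
lm_step(model, general, lr=0.1)

# Stage 2: target-task LM fine-tuning on the task's own text,
# with a smaller learning rate.
target = torch.randint(0, VOCAB, (8, 12))
lm_step(model, target, lr=0.01)

# Stage 3: classifier fine-tuning -- swap the LM head for a class head
# while keeping the pretrained embedding and LSTM weights.
classifier_head = nn.Linear(HID, 2)  # 2 classes, hypothetical
h, _ = model.lstm(model.embed(target))
class_logits = classifier_head(h[:, -1, :])  # last hidden state -> classes
print(class_logits.shape)                    # torch.Size([8, 2])
```

The key design point this sketch mirrors is that stages 2 and 3 reuse the weights learned in stage 1; only the output head changes between language modelling and classification.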
