Deep Learning Models for Automatic Summarization

The Next Big Thing in NLP?

Pirmin Lemberger
Towards Data Science

--


PDF version on arXiv

Perhaps the most helpful NLP task of all

For over a quarter of a century we have been able to search the web by querying a search engine with a couple of relevant keywords. Without such a tool the internet would be nothing but a useless garbage dump of data. In 1998 Google’s PageRank algorithm redefined what we can expect as far as the relevance of search results is concerned. More recently some semantic processing has been added to the wizardry that helps the engine interpret a query expressed in plain language. In a not too distant future we may perhaps pinpoint documents by engaging in a short Q&A kind of conversation with a search engine, just as we would with a bookseller. There is an important difference, though, between a bookseller and a search engine. If you are hesitant about which book you should read, you could ask the bookseller to summarize it for you in a few sentences.

This kind of summarization task has long seemed totally out of reach of classic rule-based NLP approaches, and it was not considered realistic for the foreseeable future either. But, slowly, things are now changing with recent progress in Deep Learning models for NLP. For the moment just imagine you had a drop-down list next to the input field of your favorite search engine that would allow you to set the length of an automatic summary for a given document. Say, a one-sentence, a ten-sentence or a one-page summary. Would that be helpful? Actually it is quite possible that it would quickly prove so useful as to become ubiquitous. Besides improving document search it could also help in a multitude of other tasks. For instance it could help scientists keep up with a dizzying flow of publications in fields like medicine or AI. More prosaically it could help produce short product descriptions for online stores with catalogues too large to be handled manually. Many more applications of automatic summarization have been described elsewhere.

For larger documents of several hundred pages, like novels, such generic summarization tools still belong to the realm of science fiction. However, thanks to the ever-surprising flexibility of Deep Learning models, the wait may not be that long for tools that can summarize one- or two-page documents in a few sentences, at least within specific areas of knowledge. The aim of this article is to describe recent data sets [1, 2] and Deep Learning architectures [3, 4, 5] that have brought us a little closer to that goal.

A difficult task

The summarization task is difficult for a number of reasons, some of which it shares with other NLP tasks such as translation:

  • For a given document there is no summary which is objectively the best. As a general rule, many different summaries would be judged equally good by a human.
  • It is hard to define precisely what a good summary is and what score we should use for its evaluation.
  • Good training data has long been scarce and expensive to collect.

Human evaluation of a summary is subjective and involves judgments of style, coherence, completeness and readability. Unfortunately no score is currently known which is both easy to compute and faithful to human judgment. The ROUGE score [6] is the best we have but it has obvious shortcomings, as we shall see. ROUGE simply counts the number of words, or n-grams, that are common to the summary produced by a machine and a reference summary written by a human. More precisely it reports a combination of the corresponding recall:

recall = (# of n-grams common to the machine and reference summaries) / (# of n-grams in the reference summary)

and precision:

precision = (# of n-grams common to the machine and reference summaries) / (# of n-grams in the machine summary)

The combination reported in ROUGE-n is their harmonic mean, known as the F1 score. Although the ROUGE score does not faithfully reflect human judgment, it has the advantage of computational simplicity and it takes into account some of the flexibility associated with the multiple summaries that could result from rearranging words within a valid summary.
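To make the definition concrete, here is a minimal Python sketch of how ROUGE-n recall, precision and F1 can be computed from n-gram counts. It is a toy illustration of the formulas above, not the official ROUGE implementation, and the function names are ours.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    """Toy ROUGE-n: clipped n-gram overlap between candidate and reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())           # n-grams common to both summaries
    recall = overlap / max(sum(ref.values()), 1)   # fraction of reference n-grams covered
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"recall": recall, "precision": precision, "f1": f1}

print(rouge_n("the cat sat on the mat".split(),
              "the cat was sitting on the mat".split(), n=1))
```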

There are two types of summarization systems:

  • Extractive summarization systems select a number of segments from the source document to make up the summary. The advantage of this approach is that the resulting summary is guaranteed to be grammatically correct. In general extractive systems achieve higher ROUGE scores and are more reliable than the option we discuss next.
  • Abstractive summarization systems on the other hand generate their own words and sentences to reformulate the meaning of the source, as a human writer would do. They can be viewed as compression systems that attempt to preserve meaning. This latter kind of system is obviously more difficult to develop because it involves the ability to paraphrase information and to include external knowledge.

We will describe instances of both kinds below.

More and better Data

Until recently the main data set used for training summarization models was the CNN / Daily Mail data set, which contains 300,000 examples of news articles paired with their multiline summaries. A detailed examination [1], however, has revealed various limitations in this data set that could bias the evaluation of the ability of a system to perform text summarization. It turned out, for instance, that useful information is spread unevenly across the source documents, being concentrated mostly at their beginning. Moreover, many summaries contain large fragments of the source. This is certainly not the best way to teach a system how to produce good abstractive summaries.

But things have changed recently. The BigPatent dataset [1] for instance contains 1.3 million patent documents together with their summaries and alleviates most of the above shortcomings.

A novel approach to producing ever-growing data sets for training summarization models uses video transcripts of talks given at international scientific conferences. The basic assumption here is that these transcripts make a good starting point for producing high quality summaries of scientific papers. The transcript itself is not directly a summary of a paper. Rather, the authors of the TalkSumm method [2] propose to create a summary by retrieving a sequence of relevant sentences from the paper presented in the talk. A sentence is deemed relevant depending on how many words the speaker uses to describe it in her talk, assuming that she has a given sentence of the paper in mind at any given point in time.

Clever architectures and improved cost functions

In this section we describe three neural network models that have been developed recently for the summarization task. The aim here is not completeness, of course, but merely to illustrate the diversity of ideas that have been proposed to tackle this fundamental NLP problem.

The basic neural network architectures that make it possible to learn this kind of task are the Seq2Seq architecture, LSTM recurrent neural networks (RNN), the BERT and Transformer models, as well as the attention mechanism.

Figure 1: Basic Seq2Seq encoder-decoder architecture with attention. The x_i are the input token embeddings, the a_i^t are the attention weights at step t, the h_i are the context vectors, h^t is the sentence embedding at step t obtained by weighting the context vectors with the attention weights, the s_i are the decoder states, and the x’_i are the embeddings of the generated tokens (at inference time) or of the ground truth tokens (at training time when using teacher forcing). Finally, p^t_vocab is the probability distribution at time t over a fixed vocabulary (source: author).

For readers unfamiliar with any of these topics we recommend the references [8–12], which provide excellent introductions to each of them. Figure 1 represents the Seq2Seq architecture, which converts a sequence of tokens into another sequence of possibly different length. It defines the vectors we will refer to when talking about Seq2Seq.
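As a concrete illustration of the attention step in figure 1, here is a minimal PyTorch sketch of additive (Bahdanau-style) attention [12]: given the encoder context vectors h_i and the current decoder state s_t, it computes the attention weights a_i^t and the weighted context h^t. The parameter names (W_h, W_s, v) are our own and the random matrices are stand-ins for learned weights.

```python
import torch
import torch.nn.functional as F

def attention_step(H, s_t, W_h, W_s, v):
    """One attention step as in figure 1 (additive, Bahdanau-style attention).

    H        : (n, d) encoder context vectors h_1..h_n
    s_t      : (d,)   current decoder state
    W_h, W_s : (d, d) projection matrices (stand-ins for learned weights)
    v        : (d,)   scoring vector (stand-in for a learned weight)
    Returns the attention weights a^t and the weighted context h^t.
    """
    scores = torch.tanh(H @ W_h + s_t @ W_s) @ v   # alignment scores e_i^t
    a_t = F.softmax(scores, dim=0)                 # attention weights a_i^t
    h_t = a_t @ H                                  # h^t = sum_i a_i^t * h_i
    return a_t, h_t

# Toy usage with random tensors
n, d = 5, 8
H, s_t = torch.randn(n, d), torch.randn(d)
W_h, W_s, v = torch.randn(d, d), torch.randn(d, d), torch.randn(d)
a_t, h_t = attention_step(H, s_t, W_h, W_s, v)
print(a_t.sum(), h_t.shape)  # ~1.0, torch.Size([8])
```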

Figure 2: BERT as the encoder part of the Transformer architecture. The core idea behind the Transformer is a smart implementation of the attention mechanism that allows computations to be parallelized efficiently on GPUs, something that was not possible with classical RNNs. Each input vector x_j is the sum of a token embedding and a position embedding. The outputs h_i are context-aware token embeddings (source: author).

Figure 2 sketches a Transformer network with the self-attention dependencies between embeddings and hidden vectors. Roughly speaking, a Transformer converts a sequence of token embeddings x_i into another sequence of context-aware embeddings h_i. The input vectors x_i also typically include positional information. This is needed, in contrast to RNN networks, because of the permutation symmetry of the inputs in a Transformer.
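The following short sketch shows how such input vectors can be assembled from token and position embeddings and passed through PyTorch's generic Transformer encoder. The hyperparameters are arbitrary and the snippet only illustrates the data flow of figure 2, not BERT itself.

```python
import torch
import torch.nn as nn

vocab_size, max_len, d_model = 10000, 512, 256

tok_emb = nn.Embedding(vocab_size, d_model)       # token embeddings
pos_emb = nn.Embedding(max_len, d_model)          # learned position embeddings
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)

token_ids = torch.randint(0, vocab_size, (1, 20))       # a batch with one 20-token sequence
positions = torch.arange(token_ids.size(1)).unsqueeze(0)
x = tok_emb(token_ids) + pos_emb(positions)             # x_j = token embedding + position embedding
h = encoder(x)                                          # context-aware embeddings h_j
print(h.shape)  # torch.Size([1, 20, 256])
```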

Summarizing without stuttering

The first architecture we present addresses the abstractive summarization task [3]. Early attempts to apply vanilla Seq2Seq architectures to the summarization task revealed a number of issues with this straightforward approach:

  • Factual details in the source document, like dates, locations or phone numbers, were often reproduced incorrectly in the summary.
  • A finite vocabulary prevents some words, like proper names, from being taken into account.
  • Unnecessary repetitions of source fragments often occur, in other words the model tends to stutter.

Figure 3 shows examples of these unwanted behaviors. The authors in [3] propose two improvements over the vanilla Seq2Seq with attention mechanism to mitigate these shortcomings.

Figure 3: The last section ‘‘Pointer-Gen + Coverage’’ contains the output of the system proposed in [3]. The fragments used in the summary are shown in blue, factual errors in red and unnecessary repetitions in green (source).

First, to overcome the finite vocabulary limitation they allow the network to copy a word directly from the source and use it in the summary when needed. The precise mechanism to do this is called a pointer network. Remember that in a vanilla Seq2Seq network the decoder computes a probability distribution p^t_vocab(w), at each time step t, over the words w in a fixed finite vocabulary. As usual p^t_vocab is computed with a softmax layer that takes the attention context vector h^t and the decoder state s_t as inputs. In a pointer network an additional copy probability p_copy is computed, which represents the probability that a word should be copied from the source rather than generated by the decoder. The probability p_copy is computed using a sigmoid layer having the h^t, s_t and x_t vectors as inputs (see figure 1). Which word should actually be copied is determined by the attention weights a_i^t that the decoder assigns at time t to each word w_i in the source. Putting it all together, the full probability for the model to produce the word w is thus given by the following mixture:

p^t(w) = p_copy * sum_{i : w_i = w} a_i^t + (1 - p_copy) * p^t_vocab(w)
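The mixture above translates almost line by line into code. Here is a small PyTorch sketch, assuming the vocabulary distribution p^t_vocab, the attention weights a_i^t and p_copy have already been computed by the decoder; the function name and the use of an extended vocabulary for out-of-vocabulary source words are our own illustration choices.

```python
import torch

def final_distribution(p_vocab, attn, src_ids, p_copy, extended_vocab_size):
    """Mix generation and copy distributions as in a pointer-generator network.

    p_vocab : (V,) softmax output of the decoder over the fixed vocabulary
    attn    : (n,) attention weights a_i^t over the n source tokens
    src_ids : (n,) ids of the source tokens in the extended vocabulary
    p_copy  : scalar in (0, 1), probability of copying from the source
    """
    p_gen = torch.zeros(extended_vocab_size)
    p_gen[: p_vocab.size(0)] = (1 - p_copy) * p_vocab    # generation part
    p_ptr = torch.zeros(extended_vocab_size)
    p_ptr.scatter_add_(0, src_ids, p_copy * attn)        # copy part: sum of a_i^t over w_i = w
    return p_gen + p_ptr

# Toy usage: vocabulary of 10 words, 4 source tokens, one of them out-of-vocabulary (id 10)
p_vocab = torch.softmax(torch.randn(10), dim=0)
attn = torch.softmax(torch.randn(4), dim=0)
src_ids = torch.tensor([2, 5, 10, 2])
dist = final_distribution(p_vocab, attn, src_ids, p_copy=0.3, extended_vocab_size=11)
print(dist.sum())  # ~1.0
```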

Second, to avoid repeating the same segments, the authors define a coverage vector c^t at each time step t which accumulates the amount of attention that each word w_i in the source has received from the decoder up to time t:

c^t_i = sum_{t' < t} a_i^{t'}

This coverage vector is then used in two different places within the network. First, it is used to inform the attention mechanism in charge of computing the attention weights a^t_i (in addition to the usual dependence on the encoder context vector h_i for the word w_i and the decoder state s_t). The decoder is thus aware of the words it has already been paying attention to. Second, it is used to correct the loss function. Remember that at time step t the weight a^t_i is the attention put on word w_i while c^t_i is the attention this same word has received in the past. If the word w_i receives more attention at time t than it has already received in the past, that is if a^t_i > c^t_i, then the cost function should penalize large values of c^t_i, and the other way around. To penalize attention to repeated words one therefore defines an additional term in the loss function at time step t as a sum over input tokens:

covloss_t = sum_i min(a^t_i, c^t_i)

This is then added (weighted by an additional hyperparameter λ) to the usual negative log likelihood of the target word w^*_t in the train set:

loss_t = -log p^t(w^*_t) + λ * sum_i min(a^t_i, c^t_i)

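As a quick illustration, here is a small sketch of the coverage vector and the coverage penalty, computed from the attention weights produced at each decoding step; it mirrors the two formulas above rather than the authors' actual implementation.

```python
import torch

def coverage_loss(attentions):
    """Compute the coverage penalty sum_t sum_i min(a_i^t, c_i^t).

    attentions : (T, n) tensor of attention weights over the n source tokens
                 at each of the T decoding steps.
    """
    coverage = torch.zeros(attentions.size(1))    # c^0 = 0
    loss = 0.0
    for a_t in attentions:                        # loop over decoding steps
        loss = loss + torch.minimum(a_t, coverage).sum()
        coverage = coverage + a_t                 # c^{t+1} = c^t + a^t
    return loss

attn = torch.softmax(torch.randn(6, 12), dim=-1)  # 6 decoding steps, 12 source tokens
print(coverage_loss(attn))
```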
Results with and without these additional tricks are shown in figure 3.

Documents as sequences of contextualized sentences

Our next example illustrates recent ideas that defined a new SOTA for the extractive summarization task. It builds directly upon a key idea that led to the BERT model in 2018, namely that of transfer learning based on a clever pretraining task for a Transformer encoder. Let’s go into a little more detail and summarize the HIBERT architecture for document summarization [4].

The basic observation is that extractive summarization can be cast as a sentence tagging problem: simply train a model to identify which sentences in a document should be kept to make up the summary! For this purpose the HIBERT architecture uses two nested Transformer encoders, as illustrated in figure 4.

Figure 4: The HIBERT architecture involves a hierarchy of two Transformer encoders used to classify each sentence within a document as being part of the summary or not (source : author).

The first Transformer encoder at the bottom is a classic sentence encoder that transforms the sequence of words (w_0^k, w_1^k,…, w_j^k) that make up the kth sentence of the document to be summarized into a sentence embedding h_k. This vector is conventionally identified with the context vector above the end-of-sentence token <EOS>.

The second Transformer encoder which sits on the top is a document encoder that transforms the sequence of sentence embeddings (h_1, h_2,…, h_D) into a sequence of document aware sentence embeddings (d_1, d_2,…, d_D). These embeddings are in turn converted into a sequence of probabilities (p_1, p_2,…, p_D) where p_j is the probability that the jth sentence should be part of the summary.
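To make the hierarchy of figure 4 concrete, here is a toy PyTorch sketch of the two nested encoders, not the authors' implementation: a sentence encoder whose <EOS> position provides the sentence embeddings h_k, a document encoder producing the document-aware embeddings d_k, and a linear layer mapping them to the probabilities p_k. Dimensions and layer counts are arbitrary.

```python
import torch
import torch.nn as nn

class HierarchicalExtractor(nn.Module):
    """Toy HIBERT-style extractor: sentence encoder -> document encoder -> per-sentence probability."""

    def __init__(self, vocab_size=30000, d_model=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers=2)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, doc_tokens):
        # doc_tokens: (num_sentences, sent_len) token ids, with <EOS> assumed at the last position
        h = self.sent_encoder(self.tok_emb(doc_tokens))       # context-aware word embeddings
        sent_emb = h[:, -1, :]                                 # h_k: vector above the <EOS> token
        d = self.doc_encoder(sent_emb.unsqueeze(0))            # d_k: document-aware sentence embeddings
        return torch.sigmoid(self.classifier(d)).squeeze(-1)  # p_k: probability sentence k is kept

model = HierarchicalExtractor()
probs = model(torch.randint(0, 30000, (7, 20)))  # a 7-sentence document with 20 tokens per sentence
print(probs.shape)  # torch.Size([1, 7])
```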

Training such a complex hierarchical network from scratch is impractical because it would require an unrealistic amount of document-summary pairs. As is well known, the best strategy to train such a complex network with a limited amount of data is to use transfer learning. For this purpose the HIBERT architecture is first pretrained on an auxiliary task which consists in predicting sentences that are randomly masked (15% of them) within a large corpus of documents:

An example of the masked sentence prediction task.

Figure 5 shows the architecture used for this masked sentence prediction task. It adds a Transformer decoder on top of the HIBERT architecture in order to convert the document-aware sentence embedding d_k into the sequence of words (w_0^k, w_1^k,…, w_j^k) of the kth sentence which was masked. To generate the word at step i the decoder uses both its context h_i and the document-aware sentence embedding d_k from the document encoder.

Figure 5: The architecture used for the masked sentence prediction task. A sentence Transformer decoder is added on top of the HIBERT architecture to recover the words of a masked sentence using the information encapsulated in its document aware embedding d_k (source : author).
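Here is a small sketch of how pretraining examples for this task could be built: roughly 15% of the sentences of each document are replaced by mask tokens, and the pairs (masked position, original sentence) become the targets the decoder must reconstruct. The helper below is purely illustrative; the paper's actual sampling procedure may differ in its details.

```python
import random

MASK = "[MASK]"

def mask_sentences(document, ratio=0.15, seed=0):
    """Randomly mask ~15% of the sentences of a document (a list of token lists).

    Returns the corrupted document and the (index, original sentence) pairs
    that the decoder must reconstruct during pretraining.
    """
    rng = random.Random(seed)
    n_masked = max(1, int(ratio * len(document)))
    masked_idx = set(rng.sample(range(len(document)), n_masked))
    corrupted = [[MASK] * len(s) if i in masked_idx else s for i, s in enumerate(document)]
    targets = [(i, document[i]) for i in sorted(masked_idx)]
    return corrupted, targets

doc = [f"this is sentence number {k} .".split() for k in range(10)]
corrupted, targets = mask_sentences(doc)
print(targets[0])
```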

Trained this way, the network gathers a large amount of semantic knowledge without requiring any expensive labeling procedure. In a second stage, leveraging what it learned during the pretraining task, the network is fine-tuned on the actual target task, namely summarization cast as a binary sentence tagging task, as figure 4 describes.

This masked sentence prediction task is obviously reminiscent, on a sentence level, of the masked language model (MLM) used for pretraining the original BERT model. Remember that the MLM task consisted in recovering randomly masked words within sentences.

Reinforcement Learning comes to the rescue

As we explained earlier, one central issue with the summarization task is the lack of a unique best summary. The ROUGE score takes this into account, to some extent, because it ignores the order of the words (or n-grams) within the generated summary. Therefore the cost function we would actually like to minimize should be something like this ROUGE score, or at least the final loss function should include such a term. This is the strategy followed in the last work [5] we present here which, again, concerns abstractive summarization.

The problem with a score like ROUGE is that, for any sequence of words (w_1,…, w_j) generated by the decoder, it is constant with respect to the parameters theta of the network, thus making backpropagation impossible. The situation is not hopeless though, because the expectation of the ROUGE score for sentences (w_1,…, w_j) sampled from the joint probability distribution p_theta(w_1,…, w_j) defined by the generator is indeed a differentiable function of those parameters! The way to go is clear then. Just minimize the loss L_RL defined by (minus) that expectation:

L_RL = - E_{(w_1,…, w_j) ~ p_theta} [ ROUGE(w_1,…, w_j) ]

Actually we can view the generator of a Seq2Seq model as a Reinforcement Learning (RL) agent whose action at time step t is to generate a word w_t depending on an inner state s_t which encapsulates the history of previous actions. From here on we just need to open a book on RL [13] to learn how to minimize L_RL. A basic result in RL, known as the Policy Gradient Theorem, states that the gradient of L_RL is given by:

grad_theta L_RL = - E_{(w_1,…, w_j) ~ p_theta} [ ROUGE(w_1,…, w_j) grad_theta log p_theta(w_1,…, w_j) ]

where

p_theta(w_1,…, w_j) = prod_{t=1}^{j} p_theta(w_t | w_1,…, w_{t-1})

and the last index j is that of the <EOS> token.
The REINFORCE algorithm approximates the above expectation with a single sample (w_1,…, w_j) from the distribution p_theta(w_1,…, w_j) computed by the generator:

grad_theta L_RL ≈ - ROUGE(w_1,…, w_j) grad_theta log p_theta(w_1,…, w_j)

In practice scores like ROUGE can have a large variance, which hinders the convergence of gradient descent. Fortunately, we can enhance the speed of convergence by comparing ROUGE(w_1,…, w_j) to a baseline b which is independent of (w_1,…, w_j). This does not change the gradient of L_RL, as can readily be verified, but it can considerably reduce the variance [13] and thus dramatically improve convergence:

grad_theta L_RL ≈ - (ROUGE(w_1,…, w_j) - b) grad_theta log p_theta(w_1,…, w_j)

The main point thus is to find an appropriate baseline. The idea in the work we are discussing [5] is to take the baseline b equal to the ROUGE score of the sequence of words (ŵ_1,…, ŵ_j) that the generator actually produces at inference time. Remember that this is the sequence of words that successively maximize the conditional probabilities as computed by the softmax of the decoder at each step t:

ŵ_t = argmax_w p_theta(w | ŵ_1,…, ŵ_{t-1})

This choice for the baseline b is called self-critical sequence training (SCST) [7]. Altogether the reinforcement loss term thus reads:

L_RL = - (ROUGE(w_1,…, w_j) - ROUGE(ŵ_1,…, ŵ_j)) log p_theta(w_1,…, w_j)

where

b = ROUGE(ŵ_1,…, ŵ_j)

and (w_1,…, w_j) is a sequence sampled from p_theta.

This loss term, as we can see, prompts p_theta to generate word sequences (w_1,…, w_j) whose ROUGE score is larger than that of the sequence currently produced by greedy decoding at inference time.
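The following schematic PyTorch sketch shows how such a self-critical loss term could be assembled once the decoder has produced a sampled sequence and a greedy baseline sequence; the numbers and the way the log-probability is obtained are made up for illustration, not taken from [5].

```python
import torch

def scst_loss(sample_logprob, sample_reward, greedy_reward):
    """Self-critical sequence training loss term (sketch).

    sample_logprob : log p_theta(w_1,...,w_j) of a sequence sampled from the decoder
                     (a differentiable scalar tensor)
    sample_reward  : ROUGE(w_1,...,w_j) of the sampled sequence
    greedy_reward  : ROUGE of the greedily decoded sequence, used as baseline b
    Minimizing it pushes up the likelihood of samples that beat the greedy baseline.
    """
    return -(sample_reward - greedy_reward) * sample_logprob

# Toy usage with made-up numbers: the sampled summary scored 0.42 ROUGE,
# the greedy one 0.37, and the log-probability came out of the decoder.
log_prob = torch.tensor(-12.3, requires_grad=True)
loss_rl = scst_loss(log_prob, sample_reward=0.42, greedy_reward=0.37)
loss_rl.backward()
print(log_prob.grad)  # -(0.42 - 0.37) = -0.05

# The final objective discussed below mixes this with the usual maximum-likelihood loss,
# e.g. loss = gamma * loss_rl + (1 - gamma) * loss_ml for some weight gamma.
```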

There are two benefits to including such an SCST reinforcement learning term L_RL in the loss function. The first, which motivated the construction of L_RL in the first place, is that it makes it possible to use a non-differentiable score like ROUGE within a stochastic gradient descent training procedure. The second benefit is that it also cures the so-called exposure bias. Exposure bias results from the classic teacher forcing procedure that is typically used to train a Seq2Seq model. This procedure trains the decoder RNN using the ground truth words (w*_1,…, w*_j) from the train set, while at inference time the decoder must of course use its own generated tokens, which could therefore result in an accumulation of errors. The SCST choice for the baseline b amounts to training the decoder using the distribution it will actually see at inference time.

The final loss function used is a weighted sum of the reinforcement learning loss L_RL and a standard maximum likelihood objective L_ML. The former takes into account the non-uniqueness of summaries, at least up to a point, but by itself it is certainly not an incentive for the model to produce readable messages. The latter on the other hand favors readable sentences as it basically defines a language model.

In order to avoid repetitions, the authors also use an enhanced attention mechanism that involves a pointer network similar to the one we described in the first example [3].

What’s next?

The three models we described in the previous section all use Deep Learning and therefore implement a purely statistical approach to the summarization task. Recent research also tries to find better loss functions. Researchers at ReciTAL for instance explore the interesting idea that a good summary should answer questions as well as the original text allows [14]. On the whole, these models work indeed surprisingly well for short documents. But can we reasonably expect to build a system that could summarize a 300-page novel in a page using techniques that only rely on crunching huge amounts of textual data? This is far from obvious. Abstractive summarization should in principle be able to leverage real world knowledge to make sense of a document or a book to be summarized. It is unlikely though that language models alone, even when initialized with clever pretraining tasks, can ever capture such common sense, which is more likely acquired through sensory experience. One short term possibility for building useful summarization tools could be to narrow their scope down to specific areas of expertise where knowledge bases or ontologies are already available. A more radical step towards building systems with better “real world understanding” could arise from multimodal learners designed to aggregate audio, video and text modalities, from movies for instance. Promising results have already been obtained along this path [15].

Acknowledgments

I would like to thank Thomas Scialom, researcher at ReciTAL, who kindly shared his knowledge with me by pointing my attention to his Summarizing Summarization page on GitHub [16]. This helped me kick-start my exploration of Deep Learning summarization models.

References

  1. E. Sharma, C. Li, L. Wang, BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization (2019), arXiv:1906.03741.
  2. G. Lev, M. Shmueli-Scheuer, J. Herzig, A. Jerbi, D. Konopnicki, TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks (2019), arXiv:1906.01351.
  3. A. See, Peter J. Liu, Ch. D. Manning, Get To The Point: Summarization with Pointer-Generator Networks (2017), arXiv:1704.04368.
  4. X. Zhang, F. Wei, M. Zhou, Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization (2019), arXiv:1905.06566.
  5. R. Paulus, C. Xiong, R. Socher, A Deep Reinforced Model for Abstractive Summarization (2017), arXiv:1705.04304.
  6. C. Lin, ROUGE: A Package for Automatic Evaluation of Summaries (2004).
  7. S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, V. Goel, Self-critical Sequence Training for Image Captioning (2016), arXiv:1612.00563.
  8. G. Genthial, Seq2Seq with Attention and Beam Search (2017), blog.
  9. C. Olah, Understanding LSTM Networks (2015), blog.
  10. J. Alammar, The Illustrated BERT, ELMo and Co., blog.
  11. J. Alammar, The Illustrated Transformer, blog.
  12. D. Bahdanau, K. Cho, Y. Bengio, Neural Machine Translation by Jointly Learning to Align and Translate (2016), arXiv:1409.0473.
  13. R. Sutton and A. Barto, Reinforcement Learning: An Introduction (2018), MIT Press, Cambridge, MA.
  14. T. Scialom, S. Lamprier, B. Piwowarski, J. Staiano, Answers Unite! Unsupervised Metrics for Reinforced Summarization Models (2019), arXiv:1909.01610.
  15. S. Palaskar, J. Libovický, S. Gella, F. Metze, Multimodal Abstractive Summarization for How2 Videos (2019), arXiv:1906.07901.
  16. T. Scialom, Summarizing Summarization (2019), GitHub.

--

I am currently scientific director at onepoint. My main interests are in Deep Learning, NLP and general Data Science. I have a PhD in theoretical physics.