Meaning Representation and SRL: assuming there is some meaning

A detailed description of meaning representation and why we need it

Vivek
Towards Data Science


What is a meaning representation?

A meaning representation can be understood as a bridge between subtle linguistic nuances and our common-sense, non-linguistic knowledge about the world. It is a formal structure that captures the meaning of linguistic input. In building one, we assume that any given linguistic structure carries information that can be used to express the state of the world.

How do we realize that someone has praised or insulted us? That is where the need for meaning representation arises. We understand this by breaking the linguistic input into a meaningful structure and linking it to our knowledge of the real world: our knowledge about a specific person, our previous experiences and relationship with that person, our knowledge of that particular moment, and much more.

How to represent the meaning of a sentence

There are four commonly used meaning representation languages:

  • First Order Logic
  • Abstract Meaning Representation using a directed graph
  • Abstract Meaning Representation using the textual form
  • Frame-Based or slot-filler representation

In the image below, all four approaches are shown for the sentence, “I have a phone”.

There aren’t many differences between these approaches. They all share the same idea: a meaning representation consists of structures corresponding to objects, the properties of objects, and the relations between them.
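In case the image does not render, here is a rough sketch of two of these forms for “I have a phone” (my own approximation, not the original figure):

First Order Logic:
    ∃p Phone(p) ∧ Have(Speaker, p)

Abstract Meaning Representation (textual form):
    (h / have-01
       :ARG0 (i / i)
       :ARG1 (p / phone))

Both express the same content: an object (the phone), a participant (the speaker), and the having relation between them.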

SRL — Why one more representation to learn and understand?

Similar events can be expressed in a variety of structures using a medley of sentences. Representations like semantic role labeling help us capture the common structure shared across varied sentences expressing the same thought: they let us identify and extract the event and its participants across the many ways we describe it. We also have other representations, such as deep roles and thematic roles, but they have their limitations. A deep role is specific to a single event, whereas a thematic role captures the semantic commonality between the actors of deep roles. And although the thematic role is one of the oldest semantic concepts, agreed-upon role sets are either very abstract (very few roles, representing only a few high-level ideas) or very low-level (a large number of roles, each describing a specific event in detail). Semantic roles can be seen as a way to represent any linguistic structure in a structured form at both the high and the low level.

SRL, or as we know it, Semantic Role Labeling, helps us understand the semantic relationship between a predicate and its arguments. It helps us answer questions such as Who did What to Whom, and also Where and When. Semantic Role Labeling is the task of automatically finding the semantic role of each argument of each predicate in a sentence.
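As a quick illustration (my own example, using PropBank-style labels rather than one from the article):

    [ARG0 The teacher] [V gave] [ARG1 a book] [ARG2 to the student] [ARGM-TMP yesterday]

Here ARG0 is the giver (Who), ARG1 the thing given (What), ARG2 the recipient (to Whom), and ARGM-TMP answers When.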

Semantic Role Labeling — advancements over time

The oldest work on semantic relationships can be traced back to sometime around the 8th century BCE: the Aṣṭādhyāyī, a collection of 8 books describing the linguistic structure of the Sanskrit language in 3,959 sutras, a rule system similar in mechanism to modern formal language theory. The Aṣṭādhyāyī also has a set of rules describing the semantic relationship between a verb and its noun arguments, answering questions like Who did What to Whom, and also Where and When. This is the oldest known work representing the relation between an event and its participants.

Getting the features to train SRL models would not have been possible without the development of various linguistic resources: the modern formulation of thematic roles by Fillmore (1968) and Gruber (1965); Levin’s (1993) list of 3,100 English verbs and their corresponding semantic classes; the linking of Levin’s classes to both WordNet and FrameNet by Kipper et al. (2000); the Penn Treebank (1993 and later), a large corpus of syntactically annotated English; PropBank, a corpus of sentences annotated with semantic roles (a semantically annotated Penn Treebank, in English); and FrameNet, which defines sets of frame-specific semantic roles called frame elements, along with the predicates that use them.
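Several of these resources ship with NLTK; here is a minimal sketch of how one might browse them (assuming the propbank and framenet_v15 corpora have already been fetched with nltk.download):

    from nltk.corpus import propbank
    from nltk.corpus import framenet as fn

    # PropBank: inspect the roleset of a predicate, e.g. "give.01"
    roleset = propbank.roleset("give.01")
    for role in roleset.findall("roles/role"):
        print(role.attrib["n"], role.attrib["descr"])

    # FrameNet: look up frames whose names match "Giving"
    for frame in fn.frames("Giving"):
        print(frame.name, list(frame.FE.keys())[:5])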

Most current approaches to SRL are based on supervised, feature-based machine learning algorithms. The following pseudocode gives more insight into a feature-based SRL model:

def SemanticRoleLabel(words):
    semantic_role_dict = {}
    # syntactically parse the input words
    parse = Parse(words)
    # for every predicate, classify every constituent node of the parse
    for predicate in Predicates(parse):
        for node in Nodes(parse):
            # a feature extraction function to extract the needed features
            node_features = ExtractFeatures(node, predicate, parse)
            # a 1-of-N classifier assigns a semantic role to the node
            semantic_role = ClassifyNode(node, parse, node_features)
            semantic_role_dict[(predicate, node)] = semantic_role
    return semantic_role_dict

Most SRL systems are built on features suggested in Automatic Labeling of Semantic Roles (Gildea and Jurafsky, 2000), including the governing predicate, the phrase type, the headword of the constituent, the path in the parse tree from the constituent to the predicate, and so on. Researchers have also spent a lot of time finding the best classification methods and optimization techniques.
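To make these features concrete, here is an illustrative feature vector (my own example, not one from the paper) for the constituent “the ball” with the predicate “kicked” in “He kicked the ball”:

    features = {
        "predicate": "kick",     # governing predicate (lemma)
        "phrase_type": "NP",     # syntactic category of the constituent
        "headword": "ball",      # head of the constituent
        "path": "NP↑VP↓VBD",     # parse-tree path: up to VP, down to the verb
        "position": "after",     # constituent follows the predicate
        "voice": "active",       # voice of the clause
    }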

SRL after the third wave of neural networks

Most of the current state-of-the-art deep models for semantic role labeling are based on a bi-directional LSTM network processing BIO-tagged input data for the arguments, with a pretrained embedding layer. The depth of the network is generally around 6 to 8 LSTM layers. The image below shows the network architecture of the SRL model proposed by He et al. (2017) in their work, “Deep Semantic Role Labeling: What Works and What’s Next”.
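In the BIO scheme, each word gets a tag that either Begins a role span, is Inside one, or is Outside any role; for example (my own annotation):

    He       B-ARG0
    kicked   B-V
    the      B-ARG1
    red      I-ARG1
    ball     I-ARG1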

The goal is to maximize the probability of the tag sequence y, given the input sequence of words w:

ŷ = argmax_{y ∈ T} P(y | w)

where T is the set of valid BIO tag sequences.
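A minimal PyTorch sketch of such a tagger (loosely following He et al. 2017; the predicate-indicator input is theirs, but the hyperparameters and the use of a vanilla bidirectional LSTM in place of their alternating-direction highway LSTM are simplifications of mine):

    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, num_tags,
                     emb_dim=100, hidden_dim=300, num_layers=8):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # each word embedding is concatenated with a 0/1 predicate indicator
            self.lstm = nn.LSTM(emb_dim + 1, hidden_dim, num_layers=num_layers,
                                bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * hidden_dim, num_tags)

        def forward(self, word_ids, predicate_mask):
            # word_ids: (batch, seq_len); predicate_mask: (batch, seq_len) of 0/1
            x = self.embed(word_ids)
            x = torch.cat([x, predicate_mask.unsqueeze(-1).float()], dim=-1)
            h, _ = self.lstm(x)
            return self.proj(h)  # per-token scores over the BIO tag set

    # Greedy decoding picks the highest-scoring tag per token; He et al. instead
    # use constrained decoding so that, e.g., an I- tag never follows tag O.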

Other recent works have introduced further great ideas and improved model accuracy, and it looks like there will be many more iterations of these networks in the near future. Tan et al. introduced self-attention into the LSTM-based network to obtain better accuracy and better global optimization; He et al. added syntactic information as another input to the network to get a more detailed representation of the input data. The most recent work, from Strubell et al., incorporates both multi-head self-attention and syntax information to achieve state-of-the-art results. Their model, LISA, is a neural network that combines multi-head self-attention with multi-task learning and incorporates syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection, and role labeling for all predicates. Syntax is incorporated by training one attention head to attend to the syntactic parent of each token.
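The “one attention head attends to syntactic parents” idea can be sketched as an auxiliary loss (a rough paraphrase of LISA’s idea, not the authors’ code; attn_head and parent_ids are hypothetical names):

    import torch
    import torch.nn.functional as F

    def syntactic_attention_loss(attn_head, parent_ids):
        # attn_head: (batch, seq_len, seq_len) attention weights of one head,
        #            where row t is token t's distribution over all positions
        # parent_ids: (batch, seq_len) index of each token's syntactic parent
        log_attn = torch.log(attn_head + 1e-9)
        # cross-entropy pushing each token's attention toward its parent
        return F.nll_loss(log_attn.flatten(0, 1), parent_ids.flatten())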

Finally, Semantic Role Labeling is a powerful method for bridging the gap between human language and a computational understanding of it. With the fast-paced advancement of neural networks and the rapidly growing adoption of voice-activated assistants, SRL is only going to become a more important area of study.
