The Alexa Meaning Representation Language

    Thomas Kollar, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung
                 {kollart, danijean, lsstuar, karowc, tagyoung}@amazon.com

            Lambert Mathias, Michael Kayser, Bradford Snow, Spyros Matsoukas
                    {mathiasl, mikayser, brsnow, matsouka}@amazon.com

Abstract

This paper introduces a meaning representation for spoken language understanding. The Alexa meaning representation language (AMRL), unlike previous approaches, which factor spoken utterances into domains, provides a common representation for how people communicate in spoken language. AMRL is a rooted graph, links to a large-scale ontology, and supports cross-domain queries, fine-grained types, complex utterances and composition. A spoken language dataset has been collected for Alexa, which contains ∼20k examples across eight domains. A version of this meaning representation was released to developers at a trade show in 2016.

1 Introduction

Amazon has developed Alexa, a voice assistant that has been deployed across millions of devices and processes voice requests in multiple languages. This paper addresses improvements to the Alexa voice service, whose core capabilities (as measured by the number of supported intents and slots) have expanded more than four-fold over the last two years. In addition, more than ten thousand voice skills have been created by third-party developers using the Alexa Skills Kit (ASK). In order to continue this expansion, new voice experiences must be both accurate and capable of supporting complex interactions.

However, as the number of features has expanded, adding new features has become increasingly difficult for four primary reasons. First, requests with a similar surface form may belong to different domains, which makes it challenging to add features without degrading the accuracy of existing domains. For example, linguistic phrases such as "order me an echo dot" (e.g., for Shopping) have a similar form to phrases used for a ride-hailing feature such as "Alexa, order me a taxi". The second challenge is that a fixed flat structure is unable to easily support certain features (Gupta et al., 2006b), such as cross-domain queries or complex utterances, which cannot be clearly categorized into a given domain. For example, "Find me a restaurant near the sharks game" involves both local businesses and sporting events, and "Play hunger games and turn the lights down to 3" requires a representation that supports assigning an utterance to two intents. The third challenge is that there is no mechanism to represent ambiguity, forcing the choice of a fixed interpretation for ambiguous utterances. For example, "Play Hunger Games" could refer to an audiobook, a movie, or a soundtrack. Finally, representations are not reused between skills, leading to the need for each developer to create custom data and representations for their voice experiences.

In order to address these challenges and make Alexa more capable and accurate, we have developed two key components. The first is the Alexa ontology, a large hierarchical ontology that contains fine-grained types, properties, actions and roles. Actions represent a predicate that determines what the agent should do, roles express the arguments to an action, types categorize textual mentions, and properties are relations between type mentions. The second component is the Alexa Meaning Representation Language (AMRL), a graph-based, domain- and language-independent meaning representation that can capture the meaning of spoken language utterances to intelligent assistants. AMRL is a rooted graph where actions, operators, relations and classes are labeled vertices, and properties and roles are labeled edges. Unlike typical representations for spoken language understanding (SLU), which factor language understanding into the prediction of intents (non-overlapping actions) and slots (e.g., named entities) (Gupta et al., 2006a), our representation is

                                   Proceedings of NAACL-HLT 2018, pages 177–184
New Orleans, Louisiana, June 1 - 6, 2018. © 2017 Association for Computational Linguistics
grounded in the Alexa ontology, which provides a common semantic representation for spoken language understanding and can directly represent ambiguity, complex nested utterances and cross-domain queries. Unlike similar meaning representations such as AMR (Banarescu et al., 2013), AMRL is designed to be cross-lingual, to explicitly represent fine-grained entity types, logical statements, and spatial prepositions and relationships, and to support type mentions. Examples of AMRL and the SLU representations can be seen in Figure 1.

The AMRL was released via Alexa Skills Kit (ASK) built-in intents and slots in 2016 at a developers conference, offering coverage for eight of the ∼20 SLU domains.1 In addition to these domains, we have demonstrated that the AMRL can cover a wide range of additional utterances by annotating a sample from all first- and third-party applications. We have manually annotated data for 20k examples using the Alexa ontology. This data includes the annotation of ∼100 actions, ∼500 types, ∼20 roles and ∼172 properties.

2 Approach

This paper describes a common representation for SLU, consisting of two primary components:

• The Alexa ontology - A large-scale hierarchical ontology developed to cover all spoken language usage.
• The Alexa meaning representation language (AMRL) - A rooted graph that provides a common semantic representation, is compositional and can support complex user requests.

These two components are described in the following sections.

2.1 The Alexa ontology

The Alexa ontology provides a common semantics for SLU. The Alexa ontology is developed in RDF and consists of five primary components:

• Classes A hierarchy of classes, also referred to as types, is defined in the ontology. This hierarchy is a rooted tree, with finer-grained types at deeper levels. Coarse types that are children of Thing include Person, Place, Intangible, Action, Product, CreativeWork, Event and Organization. Fine-grained types include MusicRecording and Restaurant.
• Properties A given class contains a list of properties, which relate that class to other classes. Properties are defined in a hierarchy, with finer-grained classes inheriting the properties of their parents. There are restrictions on the available types for both the domain and range of each property.
• Actions A hierarchy of actions is defined as classes within the ontology. Actions cover the core functionality of Alexa.
• Roles Actions operate on entities via roles. The most common role for an action is the .object role, which is defined to be the entity on which the action operates.
• Operators and Relations A hierarchy of operators and relations represents complex relationships that cannot be expressed easily as properties. Represented as classes, these include ComparativeOperator, Equals and Coordinator (Figure 2).

The Alexa ontology uses schema.org as its base and has been updated to include support for spoken language. In addition, using schema.org as the base of the Alexa ontology means that it shares a vocabulary used by more than 10 million websites, which can be linked to the Alexa ontology.

2.2 Alexa meaning representation language

AMRL leverages classes, properties, actions, roles and operators in the main ontology to create a compositional, graph-based representation of the meaning of an utterance. The graph-based representation conceptualizes each arc as a property and each node as an instance of a type; each type can have multiple parents. Conventions have been developed to annotate the AMRL for an utterance accurately and consistently. These conventions focus primarily on linguistic annotation, and only consider filled pauses, edits, and repairs in limited contexts. The conventions include:

• Fine-grained type mentions When an entity type appears in an utterance, the most fine-grained type will be annotated. For "turn on the light", the mention 'light' could be annotated as a Device. However, there is a more appropriate finer-grained type, Lighting, which will be selected instead.
• Ambiguous type mentions When more than one fine-grained type is possible, then the annotator will utilize a more coarse-grained

1 https://amzn.to/2qDjNcJ

"turn on the song thriller by michael jackson"
  D  MusicApp
  I  ListenMediaIntent
  S  turn on the [song]SongType [thriller]Song by [michael jackson]Singer

"turn on the living room lights"
  D  HomeAutomation
  I  ActivateIntent
  S  turn on the [living room]Location [lights]Device

(a) SLU Example 1    (b) AMRL Example 2

Figure 1: This figure shows the SLU representation on the left and the AMRL representation on the right. The three components of the SLU representation, domain (D), intent (I) and slots (S), are shown. The intent is different (e.g., "ListenMediaIntent" vs. "ActivateIntent"), despite the presence of "turn on". On the right are the same utterances represented in the AMRL. The nodes represent the instances of classes defined in an ontology, while the directed arcs connecting the class instances are properties. The root node of both graphs is the action; ActivateAction is shared across these two utterances, providing the domain-less annotation with a uniform representation for the same carrier phrase. "-0" indicates the first mention of a type in the utterance, and can be used to denote co-reference across multiple dialog turns.
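The rooted-graph structure the caption describes can be sketched as a small data model. This is a minimal illustration, not the actual AMRL serialization; the class and property names (ActivateAction, MusicRecording, .object, .performer) follow the paper's examples, but any identifier not shown in Figure 1 is an assumption.

```python
# Minimal sketch of an AMRL-style rooted graph: nodes are instances of
# ontology classes, directed edges are properties, and the action is the
# root. Identifiers are illustrative, not the real Alexa ontology names.
class Node:
    def __init__(self, cls, mention_id=0):
        self.cls = cls                   # ontology class, e.g. "MusicRecording"
        self.id = f"{cls}-{mention_id}"  # "-0" marks the first mention of a type
        self.edges = {}                  # property name -> Node or text literal

    def link(self, prop, target):
        """Attach an outgoing property edge and return self for chaining."""
        self.edges[prop] = target
        return self

# "turn on the song thriller by michael jackson"
song = Node("MusicRecording").link(".name", "thriller").link(".type", "song")
song.link(".performer", Node("Musician").link(".name", "michael jackson"))
root = Node("ActivateAction").link(".object", song)  # root node is the action

assert root.edges[".object"].edges[".performer"].edges[".name"] == "michael jackson"
```

Because the root is the ActivateAction node rather than a domain-specific intent, "turn on the living room lights" would reuse the same root class with a different .object subgraph, which is the uniformity the caption points out.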

type in the hierarchy. This type should be the finest-grained type that still captures the ambiguity. For example, in the utterance "play thriller", "thriller" can either be a MusicAlbum or a MusicRecording. Instead of selecting one of these, a more coarse-grained type of MusicCreativeWork will be chosen. When the ambiguity would force fallback to the root class of the ontology, Thing, AMRL annotation chooses a sub-class and marks its usage as uncertain.
• Properties Properties are annotated when they are unambiguous. For example, in "find books by truman capote", the use of the .author property on the Book class is unambiguous. Similarly, for "find books about truman capote" the use of the .about property on the Book class is unambiguous.
• Ambiguous property usage When there is uncertainty in the property that should be selected for the representation, the annotator may fall back to a more generic property.
• Property inverses When a property can be annotated in two different directions, a canonical property is defined in the ontology and used for all annotations. For example, .parentOrganization has an inverse of .subOrganization. The former is selected as canonical for annotation flexibility and to eliminate cycles in the graph.

A few of these properties have special meaning at annotation time. Specifically, for the annotation of textual mentions there exist three primary properties: .name, .value and .type. The conventions for these properties are as follows:

• .name This is a nominal mention in the utterance; the .name property links the text to an instance of a class. .name is only used for mentions that are not a numeric quantity or enumeration. An example of .name for a Musician class would be "madonna".
• .value This is defined in the same way as .name but is used for mentions that are numeric quantities or enumerations. For instance, "two" would be a .value of an Integer class.
• .type This is a generic mention of an entity type. For example, "musician" is a .type mention of the Musician class.

One action (NullAction) has a special meaning. It is annotated whenever an SLU query does not have an associated action or the action is unclear. This happens, for example, when someone says, "temperature". In contrast, "show me the temperature" is annotated with the more specific DisplayAction.
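The choice among .name, .value and .type can be roughly sketched as below. The helper function and its numeric/enumeration test are hypothetical simplifications for illustration, not part of the annotation conventions or tooling described here.

```python
def mention_property(text, target_class):
    """Sketch of the AMRL mention conventions: pick which property links a
    text span to a class instance. Numeric quantities and enumerations use
    .value, a generic mention of the class itself uses .type, and any other
    nominal mention uses .name. The checks below (a digit test, a tiny
    number-word list, a lowercased class-name match) are simplifications."""
    number_words = {"one", "two", "three", "four", "five"}  # illustrative only
    if text.replace(".", "", 1).isdigit() or text in number_words:
        return ".value"  # numeric quantity or enumeration
    if text.lower() == target_class.lower():
        return ".type"   # generic mention of the entity type
    return ".name"       # ordinary nominal mention

assert mention_property("madonna", "Musician") == ".name"
assert mention_property("two", "Integer") == ".value"
assert mention_property("musician", "Musician") == ".type"
```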

2.3 Expanded Language Support

AMRL has been used to represent utterances that are either not supported or challenging to support using standard SLU representations. The following section describes support for anaphora, complex and cross-domain utterances, referring expressions for locations, and composition.

2.3.1 Anaphora

AMRL can natively support pronominal anaphora resolution both within the same utterance and across utterances. For example:
• Within utterance: "Find the highest-rated toaster and show me its reviews"
• Across utterances: "What is Madonna's latest album" "Play it."

Terminal nodes refer back to the same (unique) entity. An example annotation across multiple utterances can be seen in Figures 3a and 3b. Similar to the above, it can handle bridging within discourse, such as "find me an italian restaurant" followed by "what's on its menu."

2.3.2 Inferred nodes

AMRL contains nodes that are not grounded in the text. For example, for the utterance in Figure 2a there are two inferred nodes, one for the address of the restaurant and another for the address of the sports event. Not explicitly representing types has two primary benefits. First, certain linguistic phenomena such as anaphora are easier to support. Second, the representation is aligned to the ontology, which enables direct queries against the knowledge base. Inferred nodes are the AMRL way to perform reification.

2.3.3 Cross-domain utterances

Using the common semantics of AMRL means that parses do not need to obey domain boundaries. For example, these utterances would belong to two domains (e.g., sports and local search): "Where is the nearest restaurant" and "What is happening at the Sharks game". AMRL can handle utterances that span multiple domains, such as the one shown in Figure 2a.

2.3.4 Conjunctions, disjunctions and negations

AMRL can cover logical expressions, where there can be an arbitrary combination of conjunctions, disjunctions, or conditional statements. Some examples of object-level or clause-level conjunctions include:
• Object-level conjunction: "Add milk, bread, and eggs to my shopping list"
• Clause-level conjunction: "Restart this song and turn the volume up to seven"

Conjunctions and disjunctions are represented using a Coordinator class. The ".value" property defines which logical operation is to be performed. Examples of the AMRL representation for these are shown in Figures 2b and 2c.

2.3.5 Conditional statements

Conditional statements are not usually represented in other formalisms. An example of a conditional statement is "when it's raining, turn off the sprinklers". Time-based conditional statements are special-cased due to their frequency in spoken language. For time-based expressions (e.g., "when it is three p.m., turn on the lights"), a startTime (or endTime) property is used on the action to denote the condition of when the action should start (or stop). For all other expressions, we use the ConditionalOperator, which has a "condition" property as well as a "result" property. When the condition is true, the result applies. The constrained properties define the arguments of the Equals operator. An example can be seen in Figure 4. A deterministic transformation converts the simplified time-based scheme into ConditionalOperator form when greater consistency is desired.

2.3.6 Referring expressions for locations

AMRL can represent locations and their relationships. For simpler expressions that are common, such as "on" or "in," properties are used to represent the relationship between two entity mentions. For other spatial relations, such as "between" or "around," an operator is introduced. Two examples of spatial relationships can be seen in Figure 2d. In this example "beside" grounds to the relation being used (e.g., "beside") and uses two properties (e.g., constrained and target), which are the first and second arguments to the spatial preposition.

2.3.7 Composition

AMRL supports composition, which enables reuse of types and subgraphs to represent utterances with similar meanings. For example, Figures 2e and 2f show the ability to create significantly different actions only by changing the type of the object of the utterance. Such substitution can occur

(a) Cross-domain    (b) Conjunction    (c) Conjunction (multiple actions)

(d) Spatial relations    (e) Composition (1)    (f) Composition (2)

Figure 2: Examples of complex queries. In (a) is the utterance "find restaurants near the sharks game". In (b) is the utterance "find red and silver toasters". In (c) is "play charlie brown and turn the volume up to 10". In (d) is "find the wendy's on 5th avenue beside the park." In (e) and (f) is an illustration of composition for "play girl from the north country" and "play blue velvet".
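The conjunction in Figure 2b can be sketched as a Coordinator node whose ".value" names the logical operation, per Section 2.3.4. The dictionary encoding and the class/property names other than Coordinator and ".value" (SearchAction, Color, "coordinates") are illustrative assumptions, not the paper's serialization.

```python
# Sketch of "find red and silver toasters": the two color mentions are
# coordinated by a Coordinator node, and ".value" records the logical
# operation ("and"). Class and edge names are illustrative stand-ins.
coordinator = {
    "class": "Coordinator",
    ".value": "and",  # which logical operation the coordinator performs
    "coordinates": [
        {"class": "Color", ".name": "red"},
        {"class": "Color", ".name": "silver"},
    ],
}
query = {
    "class": "SearchAction",  # the root of the graph is the action
    ".object": {"class": "Product", ".type": "toasters", "color": coordinator},
}

assert query[".object"]["color"][".value"] == "and"
assert len(query[".object"]["color"]["coordinates"]) == 2
```

A disjunction ("red or silver") would reuse the same structure with ".value" set to "or", which is what makes the Coordinator class cover both cases.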

anywhere in the annotation graph. PlaybackAction is used to denote playing of the entity referred to by the object role.

2.3.8 Unsupported features

Although many linguistic phenomena can be supported in AMRL, there are a few that have not been explicitly supported and are left for future work. These include existential and universal quantification and scoping, and conventions for agency (most requests are imperative). In addition, there is currently no easy way to convert to first-order logic (e.g., lambda calculus), due to conventions that simplify annotation but lose information about operators such as spatial relationships.

(a) Turn 1    (b) Turn 2

Figure 3: (a) shows the first turn of a conversation, "play songs by madonna"; (b) shows the second turn of a conversation, "what's her address". Because the node Singer-0 has the same "-0" ID in both turns, the previous turn can be directly used to infer that the address should be for the person whose name is "Madonna."

3 Dataset

Data has been collected for the AMRL across many spoken language use-cases. The currently supported domains include music, books, video, local search, weather and calendar. We have prototyped mechanisms to speed up annotation via paraphrasing (Berant and Liang, 2014) and conversion from our current SLU representation, in order to leverage the much larger data available. The primary mechanism we have for data acquisition is manual annotation. Tools have been developed in order to acquire the full graph annotated with all the properties, classes, actions and operators.

AMRL manual annotation is performed by data annotators in four stages. In the first stage an action is selected, for example ActivateAction in Figure 1b. The second stage defines the text spans in an utterance that link to a class in the ontology (e.g., "michael jackson" is a Musician type and "thriller" and "song" are MusicRecording types; the first is a .name mention, while the latter is a .type mention). The third stage creates connections between the classes and defines any missing nodes in the graph. In the final stage a skilled annotator reviews the graph for mistakes and re-annotates it if necessary. A visualization of the semantic annotation is available, enabling an annotator to verify that they have built the graph in a semantically accurate manner. Manual annotation happens at a rate of 40 utterances per hour. The manually annotated dataset contains ∼20k annotated utterances and contains 93 unique actions,

(a) AMRL for "when it is raining, turn off the sprinklers"    (b) AMRL for "when it is three p.m., turn on the lights."

Figure 4: Two examples of conditional statements. In (b) is the annotation for a time-based condition, while in (a) is a non-time-based trigger.

448 types, 172 properties and 23 roles.                       Baldridge, 2011) (Hockenmaier and Steedman,
                                                              2007), universal dependencies (Nivre et al., 2016)
4    Parsing                                                  are all related representations. A comparison of
Any graph parsing method can be used to predict               semantic representations for natural language se-
AMRL given a natural language utterance. One                  mantics is described in Abend and Rappoport. Un-
approach is to use hyperedge replacement gram-                like these meaning representations for written lan-
mars (Chiang et al., 2013) (Peng et al., 2015),               guage, AMRL covers question answering, imper-
though these require large datasets in order to train         ative actions, and a wide range of new types and
accurate parsers. Alternatively, the graph can be             properties (e.g., smart home, timers, etc.).
linearized, as in (Gildea et al., 2017) and sequence             AMR and AMRL are both rooted, directed,
to sequence or sequential models can be used to               leaf-labeled and edge-labeled graphs. AMRL
predict AMRL (Perera and Strubell, 2018). We                  does not reuse PropBank frame arguments, cov-
have shown that AMRL full-parse accuracy is at                ers predicate-argument relations, including a wide
78%, though the serialization, use of embeddings              variety of semantic roles, modifiers, co-reference,
from related tasks can improve parser accuracy.               named entities and time expressions (Banarescu
More details can be found in (Perera and Strubell,            et al., 2013). There are more than 1000 named-
2018).                                                        entity types in AMRL (AMR has around 80). Re-
                                                              entrancy is not used in AMRL notation. In ad-
5    Related Work                                             dition to the AMR “name” property, AMRL con-
                                                              tains a “type” property for mentions of a type
FreeBase (Bollacker et al., 2008) (now WikiData)
                                                              (or class) and a “value” property for the men-
and schema.org (Guha et al., 2016) are two com-
                                                              tion of numeric values. Anaphora is handled
mon ontologies. Schema.org is widely used on
                                                              in AMRL for spoken dialog Poesio and Art-
the web and contains actions, types and proper-
                                                              stein (Gross et al., 1993). Unlike representations
ties. The Alexa ontology expands schema.org to
                                                              used for spoken language understanding (SLU)
cover types, properties and roles used in spoken
                                                              (Gupta et al., 2006b), AMRL represents both en-
language.
                                                              tity spans, complex natural language expressions,
   Semantic parsing has been investigated in the
                                                              and fine-grained named-entity types.
content of small domain-specific datasets such as
GeoQuery (Wong and Mooney, 2006) and in the                   6       Conclusions and Future Work
context of larger broad-coverage representations
such as the Groningen Meaning Bank (GMB) (Bos                 This paper develops AMRL, a meaning represen-
et al., 2017), the Abstract Meaning Representation            tation for spoken language. We have shown how it
(AMR) (Banarescu et al., 2013), UCCA (Abend                   can be used to expand the set of supported use-
and Rappoport, 2013), PropBank (Kingsbury                     cases to complex and cross-domain utterances,
and Palmer, 2002), Raiment (Baker et al.,                     while leveraging a single compositional seman-
1998) and lambda-DCS (Kingsbury and Palmer,                   tics. The representation has been released at AWS
2002). OntoNotes (Hovy et al., 2006), lambda-                 Re:Invent 2016 2 . It is also being used as a rep-
DCS s (Liang, 2013) (Baker et al., 1998),                     resentation for expanded support for complex ut-
FrameNet (Baker et al., 1998), combinatory                    terances, such as those with sequential composi-
categorial grammars (CCG) (Steedman and                           2
                                                                      https://amzn.to/2qDjNcJ

                                                        182
tion. Continued development of a common mean-                   Derek Gross, James Allen, and David Traum. 1993.
ing representation for spoken language will enable                The trains 91 dialogues .
Alexa to become capable and accurate, expanding
                                                                R. V. Guha, Dan Brickley, and Steve Macbeth. 2016.
the set of functionality for all Alexa users.                      Schema.org: Evolution of structured data on the
                                                                   web. Commun. ACM 59(2):44–51. https://
                                                                   doi.org/10.1145/2844544.
References

Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In ACL (1), pages 228–238.

Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proceedings of the Association for Computational Linguistics. Vancouver, Canada.

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL '98, pages 86–90. https://doi.org/10.3115/980845.980860.

Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics, Sofia, Bulgaria, pages 178–186. http://www.aclweb.org/anthology/W13-2322.

Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. ACM, New York, NY, USA, SIGMOD '08, pages 1247–1250. https://doi.org/10.1145/1376616.1376746.

Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, Springer, volume 2, pages 463–496.

David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In ACL (1), pages 924–932.

Daniel Gildea, Nianwen Xue, Xiaochang Peng, and Chuan Wang. 2017. Addressing the data sparsity issue in neural AMR parsing. In EACL.

Derek Gross, James Allen, and David Traum. 1993. The TRAINS 91 dialogues.

R. V. Guha, Dan Brickley, and Steve Macbeth. 2016. Schema.org: Evolution of structured data on the web. Commun. ACM 59(2):44–51. https://doi.org/10.1145/2844544.

N. Gupta, G. Tur, D. Hakkani-Tur, S. Bangalore, G. Riccardi, and M. Gilbert. 2006a. The AT&T spoken language understanding system. IEEE Transactions on Audio, Speech, and Language Processing 14(1):213–222. https://doi.org/10.1109/TSA.2005.854085.

Narendra Gupta, Gokhan Tur, Dilek Hakkani-Tur, Srinivas Bangalore, Giuseppe Riccardi, and Mazin Gilbert. 2006b. The AT&T spoken language understanding system. Audio, Speech, and Language Processing, IEEE Transactions on 14(1):213–222.

Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Comput. Linguist. 33(3):355–396. https://doi.org/10.1162/coli.2007.33.3.355.

Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL-Short '06, pages 57–60. http://dl.acm.org/citation.cfm?id=1614049.1614064.

Paul Kingsbury and Martha Palmer. 2002. From treebank to PropBank. In LREC, pages 1989–1993.

Percy Liang. 2013. Lambda dependency-based compositional semantics. CoRR abs/1309.4408. http://arxiv.org/abs/1309.4408.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Paris, France.

Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In CoNLL, pages 32–41.
Vittorio Perera, Tagyoung Chung, Thomas Kollar, and Emma Strubell. 2018. Multi-task learning for parsing the Alexa meaning representation language. In American Association for Artificial Intelligence (AAAI).

Massimo Poesio and Ron Artstein. 2008. Anaphoric annotation in the ARRAU corpus. In LREC.

Mark Steedman and Jason Baldridge. 2011. Combinatory Categorial Grammar. Wiley-Blackwell, pages 181–224. https://doi.org/10.1002/9781444395037.ch5.

Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics, pages 439–446.