Language in a (Search) Box: Grounding Language Learning in Real-World Human-Machine Interaction
Federico Bianchi∗ (Bocconi University, Milano, Italy), f.bianchi@unibocconi.it
Ciro Greco (Coveo Labs, New York, USA), cgreco@coveo.com
Jacopo Tagliabue (Coveo Labs, New York, USA), jtagliabue@coveo.com

∗ Corresponding author. Authors contributed equally and are listed alphabetically.

Abstract

We investigate grounded language learning through real-world data, by modelling teacher-learner dynamics through the natural interactions occurring between users and search engines; in particular, we explore the emergence of semantic generalization from unsupervised dense representations outside of synthetic environments. A grounding domain, a denotation function and a composition function are learned from user data only. We show how the resulting semantics for noun phrases exhibits compositional properties while being fully learnable without any explicit labelling. We benchmark our grounded semantics on compositionality and zero-shot inference tasks, and we show that it provides better results and better generalizations than SOTA non-grounded models, such as word2vec and BERT.

1 Introduction

Most SOTA models in NLP are only intra-textual. Models based on distributional semantics – such as standard and contextual word embeddings (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019) – learn representations of word meaning from patterns of co-occurrence in big corpora, with no reference to extra-linguistic entities.

While successful in a range of cases, this approach does not take into consideration two fundamental facts about language. The first is that language is a referential device used to refer to extra-linguistic objects. Scholarly work in psycholinguistics (Xu and Tenenbaum, 2000), formal semantics (Chierchia and McConnell-Ginet, 2000) and philosophy of language (Quine, 1960) shows that (at least some aspects of) linguistic meaning can be represented as a sort of mapping between linguistic and extra-linguistic entities. The second is that language may be learned based on its usage and that learners draw part of their generalizations from the observation of teachers’ behaviour (Tomasello, 2003). These ideas have been recently explored by work in grounded language learning, showing that allowing artificial agents to access human actions providing information on language meaning has several practical and scientific advantages (Yu et al., 2018; Chevalier-Boisvert et al., 2019).

While most of the work in this area uses toy worlds and synthetic linguistic data, we explore grounded language learning by offering an example in which unsupervised learning is combined with a language-independent grounding domain in a real-world scenario. In particular, we propose to use the interaction of users with a search engine as a setting for grounded language learning. In our setting, users produce search queries to find products on the web: queries and clicks on search results are used as a model for the teacher-learner dynamics.

We summarize the contributions of our work as follows:
1. We provide a grounding domain composed of dense representations of extra-linguistic entities constructed in an unsupervised fashion from user data collected in the real world. In particular, we learn neural representations for our domain of objects leveraging prod2vec (Grbovic et al., 2015): crucially, building the grounding domain does not require any linguistic input and it is independently justified in the target domain (Tagliabue et al., 2020a). In this setting, lexical denotation can also be learned without explicit labelling, as we use the natural interactions between the users and the search engine to learn a noisy denotation for the lexicon (Bianchi et al., 2021). More specifically, we use DeepSets (Cotter et al., 2018) constructed from user behavioural signals as the extra-linguistic reference of words. For instance, the denotation of the word “shoes” is constructed from the clicks produced by real users on products that are in fact shoes, after having performed the query “shoes” in the search bar. Albeit domain specific, the resulting language is significantly richer than languages from agent-based models of language acquisition (Słowik et al., 2020; Fitzgerald and Tagliabue, 2020), as it is based on 26k entities from the inventory of a real website.

2. We show that a dense domain built through unsupervised representations can support compositionality. By replacing a discrete formal semantics of noun phrases (Heim and Kratzer, 1998) with functions learned over DeepSets, we test the generalization capability of the model on zero-shot inference: once we have learned the meaning of “Nike shoes”, we can reliably predict the meaning of “Adidas shorts”. In this respect, this work represents a major departure from previous work on the topic, where compositional behavior is achieved through either discrete structures built manually (Lu et al., 2018; Krishna et al., 2016) or embeddings of such structures (Hamilton et al., 2018).

3. To the best of our knowledge, no dataset of this kind (product embeddings from shopping sessions and query-level data) is publicly available. As part of this project, we release our code and a curated dataset, to broaden the scope of what researchers can do on the topic [1].

[1] Please refer to the project repository for additional information: https://github.com/coveooss/naacl-2021-grounded-semantics.

Methodologically, our work draws inspiration from research at the intersection between Artificial Intelligence and Cognitive Sciences: as pointed out in recent papers (Bisk et al., 2020; Bender and Koller, 2020), extra-textual elements are crucial in advancing our comprehension of language acquisition and the notion of “meaning”. While synthetic environments are popular ways to replicate child-like abilities (Kosoy et al., 2020; Hill et al., 2020), our work calls attention to real-world Information Retrieval systems as experimental settings: cooperative systems such as search engines offer new ways to study language grounding, in between the oversimplification of toy models and the daunting task of providing a general account of the semantics of a natural language. The chosen IR domain is rich enough to provide a wealth of data and possibly to see practical applications, whereas at the same time it is sufficiently self-contained to be realistically mastered without human supervision.

2 Methods

Following our informal exposition in Section 1, we distinguish three components, which are learned separately in a sequence: learning a language-independent grounding domain, learning noisy denotation from search logs and finally learning functional composition. While only the first model (prod2vec) is completely unsupervised, it is important to remember that the other learning procedures are only weakly supervised, as the labelling is obtained by exploiting an existing user-machine dynamics to provide noisy labels (i.e. no human labeling was necessary at any stage of the training process).

Learning a representation space. We train product representations to provide a “dense ontology” for the (small) world we want our language to describe. Those representations are known in product search as product embeddings (Grbovic et al., 2015): prod2vec models are word2vec models in which words in a sentence are replaced by products in a shopping session. For this study, we pick CBOW (Mu et al., 2018) as our training algorithm and select d = 24 as vector size, optimizing hyperparameters as recommended by Bianchi et al. (2020); similar to what happens with word2vec, related products (e.g. two pairs of sneakers) end up closer in the embedding space. In the overall picture, the product space just constitutes a grounding domain, and re-using tried and tested (Tagliabue et al., 2020b) neural representations is an advantage of the proposed semantics.

Learning lexical denotation. We interpret clicks on products in the search result page, after a query is issued, as a noisy “pointing” signal (Tagliabue and Cohn-Gordon, 2019), i.e., a map between text (“shoes”) and the target domain (a portion of the product space). In other words, our approach can be seen as a neural generalization of model-theoretic semantics, where the extension of “shoes” is not a discrete set of objects, but a region in the grounding space. Given a list of products clicked by shoppers after queries, we represent meaning through an order-invariant operation over product embeddings (average pooling weighted by empirical frequencies, similar to Yu et al. (2020)); following Cotter et al. (2018), we refer to this representation as a DeepSet. Since words are now grounded in a dense domain, set-theoretic functions for NPs (Chierchia and McConnell-Ginet, 2000) need to be replaced with matrix composition, as we explain in the ensuing section.
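To make the first two learning steps concrete, here is a minimal sketch (not the released project code) that trains a prod2vec space with an off-the-shelf word2vec implementation over shopping sessions, and then pools clicked products into a DeepSet; the sessions, SKU identifiers and click counts are invented for illustration.

```python
# Hedged sketch of the grounding domain and the noisy denotation step.
# Sessions, SKUs and click counts below are made up for illustration.
import numpy as np
from gensim.models import Word2Vec

# 1) Grounding domain: prod2vec = word2vec over shopping sessions (CBOW, d = 24).
sessions = [
    ["sku_101", "sku_882", "sku_101", "sku_307"],
    ["sku_882", "sku_433", "sku_210"],
    # ... in the paper's setting, >700,000 anonymous sessions.
]
prod2vec = Word2Vec(
    sentences=sessions,
    vector_size=24,   # d = 24, as in the paper
    sg=0,             # CBOW training algorithm
    window=5,         # assumed context window
    min_count=1,
)

# 2) Denotation: the DeepSet of a query is the click-frequency-weighted
#    average of the embeddings of the products clicked after that query.
def deepset(click_counts: dict) -> np.ndarray:
    ids = list(click_counts)
    weights = np.array([click_counts[i] for i in ids], dtype=float)
    weights /= weights.sum()                      # empirical click frequencies
    vectors = np.stack([prod2vec.wv[i] for i in ids])
    return weights @ vectors                      # order-invariant weighted pooling

# e.g. a (noisy) denotation of "shoes", from clicks observed after that query:
shoes_vec = deepset({"sku_101": 40, "sku_307": 12})
```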
Learning functional composition. Our functional composition will come from the composition of DeepSet representations, where we want to learn a function f : DeepSet × DeepSet → DeepSet. We address functional composition by means of two models from the relevant literature (Hartung et al., 2017): the first, an Additive Compositional Model (ADM), sums vectors together to build the final DeepSet representation. The second model is instead a Matrix Compositional Model (MDM): given two DeepSets as input (for example, one for “Nike” and one for “shoes”), the function we learn has the form Mv + Nu, where the interaction between the two vectors is mediated through the learning of two matrices, M and N. Since the output of these processes is always a DeepSet, both models can be recursively composed, given the form of the function f.
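The two composition functions can be written down directly from the description above; the sketch below (PyTorch) is a reconstruction for illustration under stated assumptions, not the authors' implementation.

```python
# Hedged sketch of the two compositional models over 24-dimensional DeepSets.
import torch
import torch.nn as nn

class ADM(nn.Module):
    """Additive Compositional Model: f(u, v) = u + v (no parameters)."""
    def forward(self, u, v):
        return u + v

class MDM(nn.Module):
    """Matrix Compositional Model: f(u, v) = M v + N u, with learned M and N."""
    def __init__(self, d: int = 24):
        super().__init__()
        self.M = nn.Linear(d, d, bias=False)
        self.N = nn.Linear(d, d, bias=False)

    def forward(self, u, v):
        return self.M(v) + self.N(u)

# Both models map two DeepSets to a DeepSet, so they can be applied recursively,
# e.g. f(deepset("nike"), f(deepset("running"), deepset("shoes"))).
```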
3 Experiments

Data. We obtained catalog data, search logs and detailed behavioral data (anonymized product interactions) from a partnering online shop, Shop X. Shop X is a mid-size Italian website in the sport apparel vertical [2]. Browsing and search data are sampled from one season (to keep the underlying catalog consistent), resulting in a total of 26,057 distinct product embeddings, trained on more than 700,000 anonymous shopping sessions. To prepare the final dataset, we start from comparable literature (Baroni and Zamparelli, 2010) and the analysis of linguistic and browsing behavior in Shop X, and finally distill a set of NP queries for our compositional setting.
In particular, we build a rich, but tractable set by excluding queries that are too rare or otherwise not relevant for the domain (e.g. color queries, since users in this vertical search for sport, not colors), and matching logs and NPs to produce the final set. Based on our experience with dozens of successful deployments in the space, NPs constitute the vast majority of queries in product search: thus, even if our intent is mainly theoretical, we highlight that the chosen types overlap significantly with real-world frequencies in the relevant domain. Due to the power-law distribution of queries, one-word queries are the majority of the dataset (60%); to compensate for sparsity, we perform data augmentation for rare compositional queries (e.g. “Nike running shoes”): after we send a query to the existing search engine to get a result set, we simulate n = 500 clicks by drawing products from the set with probability proportional to their overall popularity (Bianchi et al., 2021) [3].

[3] Since the only objects users can click on are those returned by the search box, query representation may in theory be biased by the idiosyncrasies of the engine. In practice, we confirmed that the embedding quality is stable even when a sophisticated engine is replaced by simple Boolean queries over TF-IDF vectors, suggesting that any bias of this sort is negligible.
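A small sketch of the augmentation step described above, simulating n = 500 clicks over the engine's result set with popularity-proportional sampling; the function names and popularity counts are illustrative assumptions, not part of the released code.

```python
# Hedged sketch: simulate clicks for a rare compositional query by sampling
# products from its search result set proportionally to their popularity.
import numpy as np

def simulate_clicks(result_set, popularity, n=500, seed=0):
    """result_set: product ids returned by the engine for the query;
    popularity: {product_id: overall interaction count}."""
    rng = np.random.default_rng(seed)
    probs = np.array([popularity[p] for p in result_set], dtype=float)
    probs /= probs.sum()
    return list(rng.choice(result_set, size=n, replace=True, p=probs))

# The simulated clicks are then pooled into a DeepSet exactly like real clicks,
# e.g. simulated = simulate_clicks(results_for("nike running shoes"), popularity)
```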
The final dataset consists of 104 “activity + sortal” [4] queries (“running shoes”); 818 “brand + sortal” queries (“Nike shoes”); and 47 “gender + sortal” queries (“women shoes”). Our testing data consists of 521 “brand + activity + sortal” (BAS) triples, 157 “gender + activity + sortal” (GAS) triples, and 406 “brand + gender + activity + sortal” (BGAS) quadruples [5].

Tasks and Metrics. Our evaluation metrics are meant to compare the real semantic representation of composed queries (“Nike shoes”) with the one predicted by the tested models: in the case of the proposed semantics, that means evaluating how it predicts the DeepSet representation of “Nike shoes”, given the representation of “shoes” and “Nike”. Comparing target vs predicted representations is achieved by looking at the nearest neighbors of the predicted DeepSet, as intuitively complex queries behave as expected only if the two representations share many neighbors. For this reason, quantitative evaluation is performed using two well-known ranking metrics: nDCG and Jaccard (Vasile et al., 2016; Jaccard, 1912).
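To make the comparison concrete, the sketch below scores a predicted DeepSet against the target one by comparing their top-k nearest neighbours in the product space with Jaccard overlap and a binary-relevance nDCG; the value of k and the exact relevance scheme are assumptions, since they are not spelled out in this section.

```python
# Hedged sketch of the nearest-neighbour evaluation of predicted vs target DeepSets.
import numpy as np

def top_k(vec, product_matrix, product_ids, k=10):
    """Return the ids of the k products closest (by cosine) to a DeepSet."""
    sims = product_matrix @ vec / (
        np.linalg.norm(product_matrix, axis=1) * np.linalg.norm(vec) + 1e-9)
    return [product_ids[i] for i in np.argsort(-sims)[:k]]

def jaccard(pred_ids, target_ids):
    a, b = set(pred_ids), set(target_ids)
    return len(a & b) / len(a | b)

def ndcg(pred_ids, target_ids):
    """Binary relevance: a predicted neighbour counts if it is also a target neighbour."""
    relevant = set(target_ids)
    dcg = sum(1.0 / np.log2(i + 2) for i, p in enumerate(pred_ids) if p in relevant)
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), len(pred_ids))))
    return dcg / ideal if ideal > 0 else 0.0
```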
LOBO        ADMp      MDMp      ADMv      MDMv      UM        W2V
nDCG        0.1821    0.2993    0.1635    0.0240    0.0024    0.0098
Jaccard     0.0713    0.1175    0.0450    0.0085    0.0009    0.0052

Table 1: Results on LOBO (bold are best, underline second best).

ZT                                           ADMp      MDMp      ADMv      MDMv      UM        W2V
BAS (brand + activity + sortal)
  nDCG                                       0.0810    0.0988    0.0600    0.0603    0.0312    0.0064
  Jaccard                                    0.0348    0.0383    0.0203    0.0214    0.0113    0.0023
GAS (gender + activity + sortal)
  nDCG                                       0.0221    0.0078    0.0097    0.0160    0.0190    0.0005
  Jaccard                                    0.0083    0.0022    0.0029    0.0056    0.0052    0.0001
BGAS (brand + gender + activity + sortal)
  nDCG                                       0.0332    0.0375    0.0118    0.0177    0.0124    0.0059
  Jaccard                                    0.0162    0.0163    0.0042    0.0061    0.0044    0.0019

Table 2: Results on ZT (bold are best, underline second best).

We focus on two tasks: leave-one-brand-out (LOBO) and zero-shot (ZT). In LOBO, we train models over the “brand + sortal” queries but we exclude from training a specific brand (e.g., “Nike”); in the test phase, we ask the models to predict the DeepSet for a seen sortal and an unseen brand. For ZT, we train models over queries with two terms (“brand + sortal”, “activity + sortal” and “gender + sortal”) and see how well our semantics generalizes to compositions like “brand + activity + sortal”; the complex queries that we used at test time are new and unseen.
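The two protocols can be expressed as simple filters over the query dataset; the record fields below ("type", "terms") are hypothetical and only meant to make the splits explicit.

```python
# Hedged sketch of the LOBO and ZT splits over the query dataset.

def lobo_split(queries, held_out_brand="nike"):
    """Train on 'brand + sortal' queries of all other brands; test on the held-out brand."""
    pool = [q for q in queries if q["type"] == "brand+sortal"]
    train = [q for q in pool if q["terms"]["brand"] != held_out_brand]
    test = [q for q in pool if q["terms"]["brand"] == held_out_brand]
    return train, test

def zt_split(queries):
    """Train on two-term queries; test on longer, unseen compositions (BAS, GAS, BGAS)."""
    two_term = {"brand+sortal", "activity+sortal", "gender+sortal"}
    train = [q for q in queries if q["type"] in two_term]
    test = [q for q in queries if q["type"] not in two_term]
    return train, test
```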

Models. We benchmark our semantics (tagged as p in the results tables), based on ADM and MDM, against three baselines. One is another grounded model, where prod2vec embeddings are replaced by image embeddings (tagged as v in the results tables), to test the representational capabilities of the chosen domain against a well-understood modality: image vectors are extracted with ResNet-18, taking the average pooling of the last layer to obtain 512-dimensional vectors. Two are intra-textual models, where word embeddings are obtained from state-of-the-art distributional models, BERT (UM, the Umberto model [6]) and Word2Vec (W2V), trained on textual metadata from the Shop X catalog. For UM, we extract the 768-dimensional representation from the [CLS] embedding of the 12th layer of the query and learn a linear projection to the product space (essentially, training to predict the DeepSet representation from text). The generalization to different and longer queries for UM comes from the embeddings of the queries themselves. Instead, for W2V, we learn a compositional function that concatenates the two input DeepSets, projects them to 24 dimensions, passes them through a Rectified Linear Unit, and finally projects them to the product space [7]. We run every model 15 times and report average results; RMSProp is the chosen optimizer, with a batch size of 200, 20% of the training set as validation set and early stopping with patience = 10.

[6] https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1

[7] First results with the same structure as ADM and MDM showed very low performances, thus we made the architecture more complex and non-linear.
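For concreteness, the two textual baselines' heads can be sketched as below; the layer sizes follow the description in the text (768-dimensional [CLS] input for UM, a 24-dimensional hidden projection for the W2V-style head), while the module names and the omitted training loop are assumptions, not the authors' code.

```python
# Hedged sketch of the baseline heads that map textual representations
# into the 24-dimensional product space.
import torch
import torch.nn as nn

D_PROD = 24  # dimensionality of the prod2vec / DeepSet space

class UMProjection(nn.Module):
    """Linear projection from the 768-dim [CLS] embedding to the product space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(768, D_PROD)

    def forward(self, cls_embedding):
        return self.proj(cls_embedding)

class W2VComposition(nn.Module):
    """Concatenate the two inputs, project to 24 dims, apply a ReLU,
    then project to the product space (the non-linear variant of footnote [7])."""
    def __init__(self, d_in: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d_in, 24),
            nn.ReLU(),
            nn.Linear(24, D_PROD),
        )

    def forward(self, u, v):
        return self.net(torch.cat([u, v], dim=-1))

# Training, as described above: RMSProp, batch size 200, 20% validation split,
# early stopping with patience 10, e.g.
# optimizer = torch.optim.RMSprop(model.parameters())
```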
Results. Table 1 shows the results on LOBO, with grounded models outperforming intra-textual ones, and prod2vec semantics (tagged as p) beating all baselines. Table 2 reports performance for different complex query types in the zero-shot inference task: grounded models are superior, with the proposed model outperforming baselines across all types of queries.

MDM typically outperforms ADM as a composition method, except for GAS, where all models suffer from gender sparsity; in that case, the best model is ADM, i.e. the one without an implicit bias from the training. In general, grounded models outperform intra-textual models, often by a wide margin, and prod2vec-based semantics outperforms image-based semantics, proving that the chosen latent grounding domain supports rich representational capabilities.
Figure 1: Examples of qualitative predictions made by MDM on the LOBO task. The queries “nike shoes” and “adidas shoes” are composed into good DeepSets, while “nike shirt” returns a Nike jacket, which is not completely wrong.

The quantitative evaluations were confirmed by manually inspecting nearest neighbors for predicted DeepSets in the LOBO setting: as an example, MDM predicts for “Nike shoes” a DeepSet that has (correctly) all shoes as neighbors in the space, while, for the same query, UM suggests shorts as the answer. Figure 1 shows some examples of compositions obtained by the MDM model on the LOBO task; the last example shows that the model, given the query “Nike shirt” as input, does not reply with a shirt, but with a Nike jacket: even if the correct meaning of “shirt” was not exactly captured in this context, the model’s ability to identify a similar item is remarkable.

4 Conclusions and Future Work

In the spirit of Bisk et al. (2020), we argued for grounding linguistic meaning in artificial systems through experience. In our implementation, all the important pieces – domain, denotation, composition – are learned from behavioral data. By grounding meaning in (a representation of) objects and their properties, the proposed noun phrase semantics can be learned “bottom-up” like distributional models, but can generalize to unseen examples, like traditional symbolic models: the implicit, dense structure of the domain (e.g. the relative position in the space of Nike products and shoes) underpins the explicit, discrete structure of queries picking objects in that domain (e.g. “Nike shoes”) – in other words, compositionality is an emergent phenomenon. While encouraging, our results are still preliminary: first, we plan on extending our semantics, starting with Boolean operators (e.g. “shoes NOT Nike”); second, we plan to improve our representational capabilities, either through symbolic knowledge or more discerning embedding strategies; third, we wish to explore transformer-based architectures (Lee et al., 2019) as an alternative way to produce set-like representations.

We conceived our work as a testable application of a broader methodological stance, loosely following the agenda of the child-as-hacker (Rule et al., 2020) and child-as-scientist (Gopnik, 2012) programs. Our “search-engine-as-a-child” metaphor may encourage the use of abundant real-world search logs to test computational hypotheses about language learning inspired by cognitive sciences (Carey and Bartlett, 1978).

Acknowledgments

We wish to thank Christine Yu, Patrick John Chia and the anonymous reviewers for useful comments on a previous draft. Federico Bianchi is a member of the Bocconi Institute for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit.

Ethical Considerations

User data has been collected in the process of providing business services to the clients of Coveo: user data is collected and processed in an anonymized fashion, in full compliance with existing legislation (GDPR). In particular, the target dataset uses only anonymous uuids to label sessions and, as such, it does not contain any information that can be linked to individuals.
References

Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183–1193, Cambridge, MA. Association for Computational Linguistics.

Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics.

Federico Bianchi, Jacopo Tagliabue, and Bingqing Yu. 2021. Query2Prod2Vec: Grounded word embeddings for eCommerce. In NAACL-HLT. Association for Computational Linguistics.

Federico Bianchi, Jacopo Tagliabue, Bingqing Yu, Luca Bigon, and Ciro Greco. 2020. Fantastic embeddings and how to align them: Zero-shot inference in a multi-shop scenario. In Proceedings of the SIGIR 2020 eCom workshop, July 2020, Virtual Event, published at http://ceur-ws.org (to appear).

Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718–8735, Online. Association for Computational Linguistics.

S. Carey and E. Bartlett. 1978. Acquiring a single new word.

Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, L. Willems, Chitwan Saharia, T. Nguyen, and Yoshua Bengio. 2019. BabyAI: A platform to study the sample efficiency of grounded language learning. In ICLR.

Gennaro Chierchia and Sally McConnell-Ginet. 2000. Meaning and Grammar (2nd Ed.): An Introduction to Semantics. MIT Press, Cambridge, MA, USA.

Andrew Cotter, Maya R. Gupta, Heinrich Jiang, James Muller, Taman Narayan, Serena Wang, and Tao Zhu. 2018. Interpretable set functions. ArXiv, abs/1806.00050.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Nicole J M Fitzgerald and Jacopo Tagliabue. 2020. On the plurality of graphs. In NETREASON Workshop at 24th European Conference on Artificial Intelligence.

Alison Gopnik. 2012. Scientific thinking in young children: Theoretical advances, empirical research, and policy implications. Science (New York, N.Y.), 337:1623–7.

Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. 2015. E-commerce in your inbox: Product recommendations at scale. In Proceedings of KDD '15.

William L. Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. 2018. Embedding logical queries on knowledge graphs. In NeurIPS.

Matthias Hartung, Fabian Kaupmann, Soufian Jebbara, and Philipp Cimiano. 2017. Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 54–64.

Irene Heim and Angelika Kratzer. 1998. Semantics in Generative Grammar. Wiley-Blackwell.

Felix Hill, O. Tieleman, Tamara von Glehn, N. Wong, Hamza Merzic, and Stephen Clark. 2020. Grounded language learning fast and slow. ArXiv, abs/2009.01719.

Paul Jaccard. 1912. The distribution of the flora in the alpine zone. 1. New Phytologist, 11(2):37–50.

Eliza Kosoy, Jasmine Collins, David M. Chan, Jessica B. Hamrick, Sandy H. Huang, A. Gopnik, and J. Canny. 2020. Exploring exploration: Comparing children with RL agents in unified environments. In ICLR Workshop on "Bridging AI and Cognitive Science".

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32–73.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 2019. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3744–3753. PMLR.

Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, and Shih-Fu Chang. 2018. Entity-aware image caption generation. In EMNLP.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111–3119, Red Hook, NY, USA. Curran Associates Inc.

Cun Mu, Guang Yang, and Zheng Yan. 2018. Revisiting skip-gram negative sampling model with regularization. In Proceedings of the 2019 Computing Conference.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.

W. V. O. Quine. 1960. Word & Object. MIT Press.

Dana Rubinstein, Effi Levi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 726–730, Beijing, China. Association for Computational Linguistics.

Joshua S. Rule, Joshua B. Tenenbaum, and Steven T. Piantadosi. 2020. The child as hacker. Trends in Cognitive Sciences, 24(11):900–915.

Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden, and Christopher Joseph Pal. 2020. Exploring structural inductive biases in emergent communication. ArXiv, abs/2002.01335.

Jacopo Tagliabue and Reuben Cohn-Gordon. 2019. Lexical learning as an online optimal experiment: Building efficient search engines through human-machine collaboration. ArXiv, abs/1910.14164.

Jacopo Tagliabue, Bingqing Yu, and Marie Beaulieu. 2020a. How to grow a (product) tree: Personalized category suggestions for eCommerce type-ahead. In Proceedings of The 3rd Workshop on e-Commerce and NLP, pages 7–18, Seattle, WA, USA. Association for Computational Linguistics.

Jacopo Tagliabue, Bingqing Yu, and Federico Bianchi. 2020b. The embeddings that came in from the cold: Improving vectors for new and rare products with content-based inference. In Fourteenth ACM Conference on Recommender Systems, RecSys '20, pages 577–578, New York, NY, USA. Association for Computing Machinery.

Michael Tomasello. 2003. Constructing a Language: A Usage-Based Theory of Language Acquisition. Harvard University Press, Cambridge, MA.

Flavian Vasile, Elena Smirnova, and Alexis Conneau. 2016. Meta-prod2vec: Product embeddings using side-information for recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems.

Fei Xu and Joshua B. Tenenbaum. 2000. Word learning as Bayesian inference. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, pages 517–522. Erlbaum.

Bingqing Yu, Jacopo Tagliabue, Ciro Greco, and Federico Bianchi. 2020. "An Image is Worth a Thousand Features": Scalable product representations for in-session type-ahead personalization. In Companion Proceedings of the Web Conference 2020, WWW '20, pages 461–470, New York, NY, USA. Association for Computing Machinery.

Haonan Yu, H. Zhang, and W. Xu. 2018. Interactive grounded language acquisition and generalization in a 2D world. In ICLR.