Cross-Cultural Similarity Features for Cross-Lingual Transfer Learning of Pragmatically Motivated Tasks

Jimin Sun[1*], Hwijeen Ahn[2*], Chan Young Park[3*], Yulia Tsvetkov[3], David R. Mortensen[3]
[1] Seoul National University, Republic of Korea
[2] Sogang University, Republic of Korea
[3] Language Technologies Institute, Carnegie Mellon University, USA
jiminsun@dm.snu.ac.kr, hwijeen@sogang.ac.kr, {chanyoun, ytsvetko, dmortens}@cs.cmu.edu
[*] The first three authors contributed equally.

Abstract

Much work in cross-lingual transfer learning has explored how to select better transfer languages for multilingual tasks, primarily focusing on typological and genealogical similarities between languages. We hypothesize that these measures of linguistic proximity are not enough when working with pragmatically-motivated tasks, such as sentiment analysis. As an alternative, we introduce three linguistic features that capture cross-cultural similarities manifested in linguistic patterns and quantify distinct aspects of language pragmatics: language context-level, figurative language, and the lexification of emotion concepts. Our analyses show that the proposed pragmatic features do capture cross-cultural similarities and align well with existing work in sociolinguistics and linguistic anthropology. We further corroborate the effectiveness of pragmatically-driven transfer in the downstream task of choosing transfer languages for cross-lingual sentiment analysis.

1 Introduction

Hofstede et al. (2005) defined culture as the collective mind which "distinguishes the members of one group of people from another." Cultural idiosyncrasies affect and shape people's beliefs and behaviors. Linguists have particularly focused on the relationship between culture and language, revealing in qualitative case studies how cultural differences are manifested as linguistic variations (Siegel, 1977).

Quantifying cross-cultural similarities from linguistic patterns has largely been unexplored in NLP, with the exception of studies that focused on cross-cultural differences in word usage (Garimella et al., 2016; Lin et al., 2018). In this work, we aim to quantify cross-cultural similarity, focusing on semantic and pragmatic differences across languages.[1] We devise a new distance measure between languages based on linguistic proxies of culture. We hypothesize that it can be used to select transfer languages and improve cross-lingual transfer learning, specifically for pragmatically-motivated tasks such as sentiment analysis, since expressions of subtle sentiment or emotion, such as subjective well-being (Smith et al., 2016), anger (Oster, 2019), or irony (Karoui et al., 2017), have been shown to vary significantly by culture.

[1] In linguistics, pragmatics has both a broad and a narrow sense. Narrowly, the term refers to formal pragmatics. In the broad sense, which we employ in this paper, pragmatics refers to contextual factors in language use. We are particularly concerned with cross-cultural pragmatics and with finding quantifiable linguistic measures that correspond to aspects of cultural context. These measures are not the cultural characteristics that would be identified by anthropological linguists themselves, but are rather intended to be measurable correlates of these characteristics.

We focus on three distinct aspects of the intersection of language and culture, and propose features to operationalize them. First, every language and culture rely on different levels of context in communication: Western European languages are generally considered low-context languages, whereas Korean and Japanese are considered high-context languages (Hall, 1989). Second, similar cultures construct and construe figurative language similarly (Casas and Campoy, 1995; Vulanović, 2014). Finally, emotion semantics is similar between languages that are culturally related (Jackson et al., 2019). For example, in Persian, 'grief' and 'regret' are expressed with the same word, whereas 'grief' is co-lexified with 'anxiety' in Dargwa. Therefore, Persian speakers may perceive 'grief' as more similar to 'regret,' while Dargwa speakers may associate the concept with 'anxiety.'

We validate the proposed features qualitatively, and also quantitatively through an extrinsic evaluation method.

We first analyze each linguistic feature to confirm that it captures the intended cultural patterns, and find that the results corroborate existing work in sociolinguistics and linguistic anthropology. Next, as a practical application of our features, we use them to rank transfer languages for cross-lingual transfer learning. Lin et al. (2019) have shown that selecting the right set of transfer languages with syntactic and semantic language-level features can significantly boost the performance of cross-lingual models. We incorporate our features into Lin et al. (2019)'s ranking model to evaluate the new cultural features' utility in selecting better transfer languages. Experimental results show that incorporating the features improves performance for cross-lingual sentiment analysis, but not for dependency parsing. These results support our hypothesis that cultural features are more helpful when the cross-lingual task is driven by pragmatic knowledge.[2]

[2] Code and data are publicly available at https://github.com/hwijeen/langrank.

2 Pragmatically-motivated Features

We propose three language-level features that quantify cultural similarities across languages.

Language Context-level Ratio A language's context-level reflects the extent to which the language leaves the identity of entities and predicates to context. For example, the English sentence Did you eat lunch? explicitly indicates the pronoun you, whereas the equivalent Korean sentence 점심 먹었니? (= Did eat lunch?) omits the pronoun. Context-level is considered one of the distinctive attributes of a language's pragmatics in linguistics and communication studies, and if two languages have similar levels of context, their speakers are more likely to be from similar cultures (Nada et al., 2001).

The language context-level ratio (LCR) feature approximates this linguistic quality. We compute the pronoun- and verb-token ratios, ptr(lk) and vtr(lk), for each language lk using part-of-speech tagging. We first run language-specific POS taggers over a large monolingual corpus for each language. Next, we compute ptr as the ratio of the count of pronouns in the corpus to the count of all tokens; vtr is obtained likewise with verb tokens. Low ptr and vtr values may indicate that a language leaves the identity of entities and predicates, respectively, to context. We then compare these values between the target language ltg and the transfer language ltf, which leads to the following definition of LCR:

    LCR-pron(ltf, ltg) = ptr(ltg) / ptr(ltf)
    LCR-verb(ltf, ltg) = vtr(ltg) / vtr(ltf)
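To make the feature concrete, the following is a minimal sketch of how ptr, vtr, and the LCR features could be computed; the tagged-corpus format, tagset, and language codes are illustrative assumptions rather than the released implementation.

```python
from collections import Counter

def context_level_ratios(tagged_corpus):
    """Pronoun- and verb-token ratios from a POS-tagged monolingual corpus.

    `tagged_corpus` is assumed to be an iterable of (token, pos) pairs
    using Universal POS tags such as PRON and VERB.
    """
    counts = Counter(pos for _, pos in tagged_corpus)
    total = sum(counts.values())
    return counts["PRON"] / total, counts["VERB"] / total  # (ptr, vtr)

def lcr_features(ratios, l_tf, l_tg):
    """LCR features for a (transfer, target) pair, following the
    definitions above: target ratio divided by transfer ratio."""
    ptr_tf, vtr_tf = ratios[l_tf]
    ptr_tg, vtr_tg = ratios[l_tg]
    return {"LCR-pron": ptr_tg / ptr_tf, "LCR-verb": vtr_tg / vtr_tf}

# Usage: precompute ratios per language from its tagged corpus, e.g.
#   ratios = {lang: context_level_ratios(tagged[lang]) for lang in tagged}
#   lcr_features(ratios, l_tf="en", l_tg="ko")
```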
Literal Translation Quality Similar cultures tend to share similar figurative expressions, including idiomatic multiword expressions (MWEs) and metaphors (Kövecses, 2003, 2010). For example, like father like son in English can be translated word-by-word into a similar idiom, tel père tel fils, in French. However, the similar Japanese idiom 蛙の子は蛙 (Kaeru no ko wa kaeru, "A frog's child is a frog.") cannot be literally translated.

The literal translation quality (LTQ) feature quantifies how well a given language pair's MWEs are preserved in literal (word-by-word) translation, using a bilingual dictionary. Since a well-curated list of MWEs is not available for the majority of languages, we follow an automatic MWE extraction approach (Tsvetkov and Wintner, 2010). First, a variant of pointwise mutual information, PMI3 (Daille, 1994), is used to extract noisy lists of top-scoring n-grams from two large monolingual corpora from different domains; intersecting the lists filters out domain-specific n-grams and retains the language-specific top-k MWEs. Then, a bilingual dictionary between ltf and ltg and a parallel corpus for the pair are used.[3] For each n-gram in ltg's MWEs, we search the parallel sentences containing the n-gram for its literal translations, extracted with the dictionary. For each word in the n-gram, if a translation appears in the parallel sentence, we count it as a hit, and otherwise as a miss, and calculate the hit ratio hit/(hit+miss) for each n-gram found in the parallel corpus. Finally, we average the hit ratios of all n-grams and z-normalize over the transfer languages to obtain LTQ(ltf, ltg).

[3] While dictionaries and parallel corpora are not available for many languages, they are easier to obtain than task-specific annotations of MWEs.
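The following sketch illustrates the two stages as we read them: PMI3 scoring of n-grams (following Daille (1994) by cubing the joint probability to favor frequent collocations, which is our rendering of the formula) and the hit-ratio computation over a parallel corpus. The data structures (token list, word-to-translations dictionary) are simplifying assumptions.

```python
import math
from collections import Counter

def top_pmi3_ngrams(tokens, n=2, k=500):
    """Rank n-grams by PMI^3, a PMI variant that cubes the joint
    probability, and return the top-k as candidate MWEs."""
    total = len(tokens)
    unigram = Counter(tokens)
    ngram = Counter(tuple(tokens[i:i + n]) for i in range(total - n + 1))
    n_total = sum(ngram.values())
    def pmi3(g):
        joint = ngram[g] / n_total
        indep = math.prod(unigram[w] / total for w in g)
        return math.log(joint ** 3 / indep)
    return sorted(ngram, key=pmi3, reverse=True)[:k]

def ltq_score(mwes, parallel, dictionary):
    """Average hit ratio of literal word-by-word translations of `mwes`.

    `parallel` is a list of (src_tokens, tgt_tokens) sentence pairs and
    `dictionary` maps a source word to a set of candidate translations.
    """
    ratios = []
    for mwe in mwes:
        for src, tgt in parallel:
            if all(w in src for w in mwe):  # sentence contains the n-gram
                hits = sum(any(t in tgt for t in dictionary.get(w, ()))
                           for w in mwe)
                ratios.append(hits / len(mwe))  # hit / (hit + miss)
    return sum(ratios) / len(ratios) if ratios else 0.0

# The full feature additionally intersects top-k lists from two domains
# and z-normalizes the averaged scores across transfer languages.
```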
Emotion Semantics Distance Emotion semantic distance (ESD) measures how similarly emotions are lexicalized across languages. This is inspired by Jackson et al. (2019), who used colexification patterns (i.e., when different concepts are expressed using the same lexical item) to capture the semantic similarity of languages.
However, colexification patterns require human annotation, and existing annotations may not be comprehensive. We extend Jackson et al. (2019)'s method by using cross-lingual word embeddings.

We define ESD as the average distance between the emotion word vectors of the transfer and target languages, after aligning their word embeddings into the same space. More specifically, we use the 24 emotion concepts defined in Jackson et al. (2019) and use bilingual dictionaries to expand each concept into every other language (e.g., love and proud to Liebe and stolz in German). We then remove the emotion word pairs from the bilingual dictionaries, and use the remaining pairs to align the word embeddings of source languages into the space of target languages. We hypothesize that if words corresponding to the same emotion concept in different languages (e.g., proud and stolz) have similar meanings, they should be aligned to the same point despite the lack of supervision. However, because each language possesses different emotion semantics, emotions are scattered into different positions. We thus define ESD as the average cosine distance between languages:

    ESD(ltf, ltg) = (1/|E|) Σ_{e∈E} cos(v_{tf,e}, v_{tg,e})

where E is the set of emotion concepts and v_{tf,e} is the aligned emotion word vector of language ltf.
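A sketch of the ESD computation, assuming the two languages' emotion word vectors have already been projected into a shared space (e.g., with the supervised alignment of Lample et al. (2018)); here cosine distance is taken as one minus cosine similarity, and the concept-to-vector lookup tables are assumed inputs.

```python
import numpy as np

def esd(vecs_tf, vecs_tg, emotion_concepts):
    """Average cosine distance between aligned emotion word vectors.

    `vecs_tf` and `vecs_tg` map an emotion concept (e.g., "grief") to the
    corresponding word vector of each language in the shared space.
    """
    dists = []
    for e in emotion_concepts:
        u, v = vecs_tf[e], vecs_tg[e]
        cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        dists.append(1.0 - cos_sim)
    return float(np.mean(dists))
```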
3 Feature Analysis

In this section, we evaluate the proposed pragmatically-motivated features intrinsically. Throughout the analyses, we use the 16 languages listed in Figure 4, which are later used for the extrinsic evaluation (§5).

3.1 Implementation Details

We used multilingual word tokenizers from NLTK and the RDR POS Tagger (Nguyen et al., 2014) for most of the languages, except for Arabic, Chinese, Japanese, and Korean, where we used PyArabic, Jieba, KyTea, and MeCab, respectively. For monolingual corpora, we used the news-crawl 1M corpora from Leipzig (Goldhahn et al., 2012) for both LCR and LTQ. We used bilingual dictionaries from Choe et al. (2020) and the TED talks corpora (Qi et al., 2018), the latter serving as both parallel corpora and an additional monolingual corpus for LTQ. We focused on bigrams and trigrams and set k, the number of extracted MWEs, to 500. We followed Lample et al. (2018) to generate the supervised cross-lingual word embeddings for ESD.

3.2 LCR and Language Context-level

ptr approximates how often discourse entities are indexed with pronouns rather than left conjecturable from context. Similarly, vtr estimates the rate at which predicates appear explicitly as verbs. To examine to what extent these features reflect context-levels, we plot languages on a two-dimensional plane where the x-axis indicates ptr and the y-axis indicates vtr (Figure 1).

[Figure 1: Plot of languages in the ptr and vtr plane. Languages are color-coded according to the cultural areas defined in Siegel (1977).]

The plot reveals a clear pattern of context-levels across languages. Low-context languages such as German and English (Hall, 1989) have the largest values of ptr. On the other extreme are Korean and Japanese with low ptr, which are representative of high-context languages. One thing to notice is the isolated location of Turkish, with a high vtr. This is morphosyntactically plausible, as much information in Turkish is expressed by affixation to verbs.

3.3 LTQ and MWEs

LTQ uses n-grams with high PMI scores as proxies for figurative-language MWEs (PMI MWEs). We evaluate the quality of the selected MWEs and of the resulting LTQ by comparing them with human-curated lists of figurative-language MWEs (gold MWEs) that are available for some languages. We collected gold MWEs in multiple languages from Wiktionary.[4] We discarded languages with fewer than 2,000 phrases on the list, resulting in four languages (English, French, German, Spanish) for analysis.

[4] For example, https://en.wiktionary.org/wiki/Category:English_idioms

[Figure 2: Networks of languages color-coded by their cultural areas. An edge is added between two languages if one is ranked among the top-2 closest languages of the other in terms of feature value. (a) Network based on emotion semantic distance. (b) Network based on syntactic distance.]

First, we check how many PMI MWEs actually appear in the gold MWEs. Out of the top-500 PMI bigrams and trigrams, 19.0% of bigrams and 3.8% of trigrams are included in the gold MWE list (averaged over the four languages). For example, the trigrams keep an eye and take into account from the PMI MWEs are considered to be in the gold MWEs, as keep an eye peeled and take into account are on the list. The seemingly low percentages are reasonable, given that PMI scores are designed to extract collocation patterns rather than figurative language itself.

Second, to validate using PMI MWEs as proxies, we compare the LTQ computed from PMI MWEs with the LTQ computed from gold MWEs. Specifically, we obtained the LTQ scores of each language pair with the target languages limited to the four European languages mentioned above. Then, for each target language, we measured the Pearson correlation coefficient between the two LTQ scores based on the two MWE lists. The average coefficient was 0.92, which indicates a strong correlation between the two resulting LTQ scores and thus justifies using PMI MWEs for all other languages.

3.4 ESD and Cultural Grouping

We investigate what is captured by ESD by visualizing and inspecting the nearest neighbors of emotion vectors.[5] Jackson et al. (2019) used word colexification patterns to reveal that the same emotion concepts cluster with different emotions depending on the language family they belong to. For instance, in Tai-Kadai languages, hope appears in the same cluster as want and pity, while hope associates with good and love in the Nakh-Daghestanian language family. Our results derived from ESD do not rely on colexification patterns, yet they support this finding: the nearest neighbors of the Chinese word for hope were want and pity, while for hope in Arabic they were love and joy.

[5] A visualization demo of emotion vectors can be found at https://bit.ly/emotion_vecs.

In Figure 2, we compare ESD to the syntactic distance between languages by constructing two networks of languages, one per feature. Figure 2a uses ESD as reference, while Figure 2b uses the syntactic distance from the URIEL database (Littell et al., 2017). Each node represents a language, color-coded by its cultural area. For each language, we sort the other languages according to the distance value; when a language is in the list of the top-k closest languages, we draw an edge between the two. We set k = 2.

We see that languages in the same cultural areas tend to form more cohesive clusters in Figure 2a compared to Figure 2b. The portion of edges within the cultural areas is 76% for ESD, while it is 59% for syntactic distance. These results indicate that ESD effectively extracts linguistic information that aligns well with the commonly shared perception of cultural areas.
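The network construction behind Figure 2 can be reproduced in a few lines. This sketch uses networkx and assumes a pairwise distance table and a language-to-cultural-area mapping; it draws an edge whenever one language is among the other's k=2 nearest neighbors and reports the fraction of within-area edges.

```python
import networkx as nx

def language_network(dist, langs, k=2):
    """Build a graph with an edge (a, b) whenever b is among a's k
    closest languages under the distance table `dist[a][b]`."""
    graph = nx.Graph()
    graph.add_nodes_from(langs)
    for a in langs:
        nearest = sorted((b for b in langs if b != a),
                         key=lambda b: dist[a][b])[:k]
        graph.add_edges_from((a, b) for b in nearest)
    return graph

def within_area_edge_ratio(graph, area):
    """Fraction of edges whose endpoints share a cultural area,
    e.g., 0.76 for the ESD network versus 0.59 for syntactic distance."""
    edges = list(graph.edges())
    return sum(area[a] == area[b] for a, b in edges) / len(edges)
```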

3.5 Correlation with Geographical Distance

Regarding the language clusters in Figure 2a, one may suspect that geographic distance could substitute for the pragmatically-inspired features. For Chinese, Korean and Japanese are the closest languages by ESD, which can also be explained by their geographical proximity. Do our features add pragmatic information, or can they simply be replaced by geographical distance?

To verify this, we evaluate the Pearson correlation coefficient of each pragmatic feature value with the geographical distance from URIEL. The feature with the strongest correlation was ESD (r=0.4). The least correlated was LCR-verb (r=0.03), followed by LCR-pron (r=0.17) and LTQ (r=−0.31).[6] The results suggest that the pragmatic features contain extra information that cannot be subsumed by geographic distance.

[6] When two languages are more similar, LTQ is higher whereas geographic distance is smaller.
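This check is a direct correlation over all language pairs; a minimal sketch follows, where the parallel lists of feature values and URIEL geographic distances are assumed inputs.

```python
from scipy.stats import pearsonr

def feature_geo_correlation(feature_values, geo_distances):
    """Pearson correlation between a pragmatic feature and geographic
    distance, computed over the same ordered list of language pairs."""
    r, p_value = pearsonr(feature_values, geo_distances)
    return r, p_value
```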
4 Extrinsic Evaluation: Ranking Transfer Languages

To demonstrate the utility of our features, we apply them to a transfer language ranking task for cross-lingual transfer learning. We first present the overall task setting, including the datasets and models used for the two cross-lingual tasks. Next, we describe the transfer language ranking model and its evaluation metrics.

4.1 Task Setting

We define our task as a language ranking problem: given the target language ltg, we want to rank a set of n candidate transfer languages Ltf = {ltf^(1), ..., ltf^(n)} by their usefulness when transferred to ltg, which we refer to as transferability (illustrated in Figure 3). The effectiveness of cross-lingual transfer is often measured by evaluating the joint training or zero-shot transfer performance (Wu and Dredze, 2019; Schuster et al., 2019). In this work, we quantify the effectiveness as the zero-shot transfer performance, following Lin et al. (2019). Our goal is to train a model that ranks the available transfer languages in Ltf by their transferability for a target language ltg.

[Figure 3: Illustration of the transfer language ranking problem when the target language is French (fr) and there are three available transfer languages: Arabic (ar), Russian (ru), and Chinese (zh). The output ranking r̂_fr is compared to the ground-truth ranking r_fr, which is determined by the zero-shot performance z of cross-lingual models.]

To train the ranking model, we first need the ground-truth transferability rankings, which serve as the model's training data. We evaluate the zero-shot performance z_{tf,tg} by training a task-specific cross-lingual model solely on transfer language ltf and testing it on ltg. After evaluating z_{tf,tg} for each candidate transfer language in Ltf, we obtain the optimal ranking of languages r_tg by sorting the languages according to the measured z_{tf,tg}. Note that r_tg also depends on the downstream task.

Next, we train the language ranking model, which predicts the transfer ranking of candidate languages. Each (source, target) pair (ltf, ltg) is represented as a vector of language features f_{tf,tg}, which may include phonological similarity, typological similarity, and word overlap, to name a few. The ranking model takes f_{tf,tg} for every ltf as input and predicts the transferability ranking r̂_tg. Using r_tg from the previous step as training data, the model learns to find optimal transfer languages based on f_{tf,tg}. The trained model can either be used to select the optimal set of transfer languages, or to decide which language to additionally annotate during the data creation process.
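Deriving the ground-truth ranking is a loop over candidate transfer languages; the sketch below assumes a `zero_shot_score(l_tf, l_tg)` callable that trains the task model on the transfer language and evaluates it on the target (macro F1 for SA, LAS for DEP).

```python
def ground_truth_ranking(l_tg, transfer_langs, zero_shot_score):
    """Rank candidate transfer languages for target l_tg by the zero-shot
    performance z_{tf,tg} of a model trained only on each candidate."""
    scores = {l_tf: zero_shot_score(l_tf, l_tg) for l_tf in transfer_langs}
    return sorted(scores, key=scores.get, reverse=True)  # best first
```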
[Figure 4: Languages used throughout the experiments, grouped by their cultural areas (Siegel, 1977); the numbers indicate the size of each dataset. West Europe: Dutch (1089), English (1472), French (20771), German (56333), Spanish (1396). East Europe: Czech (54540), Polish (26284), Russian (2289). East Asia: Chinese (2333), Japanese (21095), Korean (18000). Middle East: Arabic (4111), Persian (3904), Turkish (907). South Asia: Hindi (2707), Tamil (417).]

4.2 Task & Dataset

We apply the proposed features to train a ranking model for two distinct tasks: multilingual sentiment analysis (SA) and multilingual dependency parsing (DEP). The tasks are chosen based on our hypothesis that high-order information such as pragmatics would assist sentiment analysis, while it may be less significant for dependency parsing, where lower-order information such as syntax is relatively stressed.

SA As there is no single sentiment analysis dataset covering a wide variety of languages, we collected various review datasets from different sources.[7] All samples are labeled as either positive or negative. For datasets rated on a five-point Likert scale, we mapped 1–2 to negative and 4–5 to positive. We settled on a dataset consisting of 16 languages categorized into five distinct cultural groups: West Europe, East Europe, East Asia, South Asia, and Middle East (Figure 4).

[7] Details are provided in Appendix A. Note that differences in the domain and label distribution of the data can also affect transferability; a related discussion is in §5.4.

DEP To compare the effectiveness of the proposed features on syntax-focused tasks, we chose datasets for the same set of 16 languages from Universal Dependencies v2.2 (Nivre et al., 2018).

4.3 Task-Specific Cross-Lingual Models

SA Multilingual BERT (mBERT) (Devlin et al., 2019), a multilingual extension of BERT pretrained on 104 languages, has shown strong results on various text classification tasks in cross-lingual settings (Sun et al., 2019; Xu et al., 2019; Li et al., 2019). We use mBERT to conduct zero-shot cross-lingual transfer and to extract the optimal transfer language rankings: we fine-tune mBERT on transfer language data and test it on target language data. Performance is measured by the macro F1 score on the test set.

DEP We adopt the setting from Ahmad et al. (2018) to perform cross-lingual zero-shot transfer. We train deep biaffine attentional graph-based models (Dozat and Manning, 2016), which achieved state-of-the-art performance in dependency parsing for many languages. Performance is evaluated using labeled attachment scores (LAS).

4.4 Ranking Model & Evaluation

Ranking Model For the language ranking model, we employ gradient boosted decision trees, LightGBM (Ke et al., 2017), one of the state-of-the-art models for ranking tasks.[8]

[8] More details on the cross-lingual models, the ranking model, and their training can be found in Appendix B.
Ranking Evaluation Metric We evaluate the ranking models' performance with two standard metrics for ranking tasks: Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain at position p (NDCG@p) (Järvelin and Kekäläinen, 2002). While MAP assumes a binary notion of relevance, NDCG is a more fine-grained measure that reflects ranking positions. The relevant languages for computing MAP are defined as the top-k languages in terms of zero-shot performance on the downstream task. In our experiments, we set k to 3 for MAP; similarly, we use NDCG@3.

We train and evaluate the model using leave-one-out cross-validation, where one language is set aside as the test language while the other languages are used to train the ranking model. Among the training languages, each language is posited in turn as the target language while the others act as transfer languages.
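For concreteness, here is one way these metrics could be computed for a single target language, with the top-3 ground-truth languages as the relevant set for MAP and graded gains for NDCG@3; this is our reading of the setup rather than the authors' exact evaluation code.

```python
import math

def average_precision(predicted, relevant):
    """Precision averaged at the ranks where a relevant (ground-truth
    top-k) language is retrieved; MAP is the mean over target languages."""
    hits, precisions = 0, []
    for rank, lang in enumerate(predicted, start=1):
        if lang in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant)

def ndcg_at_p(predicted, gain, p=3):
    """NDCG@p with graded relevance `gain[lang]`, e.g., derived from the
    ground-truth ranking positions."""
    dcg = sum(gain.get(lang, 0) / math.log2(rank + 1)
              for rank, lang in enumerate(predicted[:p], start=1))
    ideal = sum(g / math.log2(rank + 1)
                for rank, g in enumerate(
                    sorted(gain.values(), reverse=True)[:p], start=1))
    return dcg / ideal
```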

5 Experiments

5.1 Baselines

LANGRANK LANGRANK (Lin et al., 2019) uses 13 features to train the ranking model: the dataset size of the transfer language (tf size) and of the target language (tg size), and the ratio between the two (ratio size); the type-token ratio (ttr), which measures lexical diversity; word overlap, which measures lexical similarity between a pair of languages; and various distances between a language pair from the URIEL database (geographic geo, genetic gen, inventory inv, syntactic syn, phonological phon, and featural feat).

MTVEC Malaviya et al. (2017) proposed to learn a language representation while training a neural machine translation (NMT) system, in a similar fashion to Johnson et al. (2017). During training, a language token is prepended to the source sentence, and the learned token's embedding becomes the language vector. Bjerva et al. (2019) have shown that such language representations contain various types of linguistic information, ranging from word order to typological information. We used the vectors released by Malaviya et al. (2017), which have a dimension of 512.

5.2 Individual Feature Contribution

We first look into whether the proposed features are helpful in ranking transfer languages for sentiment analysis and dependency parsing (Table 1). We add all three features (PRAG) to the two baseline feature sets (LANGRANK, MTVEC) and compare the performance on the two tasks. The results show that our features improve both baselines in SA, implying that the pragmatic information captured by our features is helpful for discerning the subtle differences in sentiment among languages.

                       SA             DEP
                   MAP    NDCG    MAP    NDCG
LANGRANK           71.3   86.5    63.0   82.2
LANGRANK+PRAG      76.0   90.9    61.7   80.5
  - LCR            75.0   88.3    60.3   79.6
  - LTQ            72.4   89.3    63.1*  81.3*
  - ESD            77.7*  92.1*   58.2   78.5
MTVEC              71.1   89.5    43.0   69.7
MTVEC+PRAG         74.3   90.8    49.7   74.8
  - LCR            72.9   90.1    54.1*  76.3*
  - LTQ            71.2   89.0    53.0*  78.6*
  - ESD            73.1   90.7    45.3   73.9

Table 1: Evaluation results of our features (PRAG) added to each baseline. Rows prefixed with "-" are ablations that remove one feature from the +PRAG model; * marks improvements over LANGRANK+PRAG and MTVEC+PRAG, respectively.

In the case of DEP, including our features yields inconsistent results: the features help the performance of MTVEC while they deteriorate the performance of LANGRANK. Although some performance increase was observed when they were applied to MTVEC, the performance of MTVEC in DEP remains extremely poor. These conflicting trends suggest that pragmatic information is not crucial for less pragmatically-driven tasks, represented by dependency parsing in our case.

The low performance of MTVEC in DEP is notable, as MTVEC is generally believed to contain a significant amount of syntactic information, with much higher dimensionality than LANGRANK. It also suggests a limitation of using distributional representations as language features: their lack of interpretability makes it difficult to control the kinds of information used in a model.

We additionally conduct ablation studies by removing each feature from the +PRAG models to examine each feature's contribution. The SA results show that LCR and LTQ contribute significantly to the overall improvements achieved by adding our features, while ESD turns out to be less helpful; sometimes, removing ESD resulted in better performance. In contrast, the DEP results show that ESD consistently made a significant contribution, while LCR and LTQ were not useful. The results imply that the emotion semantics information of languages is, surprisingly, not useful in sentiment analysis, but more so in dependency parsing.

5.3 Group-wise Contribution

The previous experiment suggests that the same pragmatic information can be helpful to different extents depending on the downstream task. We further investigate to what extent each kind of information is useful for each task by conducting group-wise comparisons. To this end, we group the features into six categories: Pretrain-specific, Data-specific, Typology, Geography, Orthography, and Pragmatic. Pretrain-specific features cover factors that may be related to the performance of the pretrained language models used in our task-specific cross-lingual models; specifically, we used the size of each language's Wikipedia corpus used in training mBERT.[9] Note that we do not measure this feature group's performance on DEP, as no pretrained language model was used for DEP. Data-specific features include tf size, tg size, and ratio size. Typological features include the geo, syn, feat, phon, and inv distances. Geography includes the geo distance in isolation. The orthographic feature is the word overlap between languages. Finally, the Pragmatic group consists of ttr and the three proposed features: LCR, LTQ, and ESD. ttr is included in Pragmatic because Richards (1987) has suggested that it encodes a significant amount of cultural information.

[9] https://meta.wikimedia.org/wiki/List_of_Wikipedias

                       SA             DEP
                   MAP    NDCG    MAP    NDCG
Pretrain-specific  39.0   55.5    -      -
Data-specific      68.0   85.4    37.2   55.0
Typology           44.9   60.7    58.0   79.8
Geography          24.9   55.0    32.3   65.1
Orthography        34.2   56.6    35.5   60.5
Pragmatic          73.0   88.0    46.5   71.8

Table 2: Ranking performance using each feature group as input to the ranking model.

Table 2 reports the performance of ranking models trained with each feature category. Interestingly, the two tasks showed significantly different results: the Pragmatic group performed best in SA, while the Typology group outperformed all other groups in DEP. This again confirms that the features indicating cross-lingual transferability differ depending on the target task. Although the Pretrain-specific features were more predictive than the Geography and Orthography features, they were not as helpful as the Pragmatic features.

5.4 Controlling for Dataset Size

The performance of cross-lingual transfer depends not only on the cultural similarity between transfer and target languages but also on other factors, including dataset size and label distribution. Although our model already accounts for dataset size to some extent by including tf size as an input, we conduct a more rigorous experiment to better understand the importance of cultural similarity in language selection. Specifically, we control for data size by down-sampling all SA data to match both the size and label distribution of the second-smallest dataset, Turkish.[10] We then trained two ranking models equipped with different sets of features: LANGRANK and LANGRANK+PRAG.

[10] The dataset of the smallest language (Tamil; 417 samples) was too small to train an effective model.

In terms of languages, we focus on a setting where Turkish is the target and Arabic, Japanese, and Korean are the transfer languages. This is a particularly interesting set of languages because the source languages are similar or dissimilar to Turkish in different respects: Korean and Japanese are typologically similar to Turkish, yet in cultural terms, Arabic is more similar to Turkish.

In this controlled setting, the ground-truth ranking reveals that the optimal transfer language among the three is Arabic, followed by Korean and Japanese. This indicates the important role in sentiment analysis of cultural resemblance, which encapsulates the rich historical relationship shared between Arabic- and Turkish-speaking communities. LANGRANK+PRAG chose Arabic as the best transfer language, suggesting that the cultural similarity information contributed by our features helped the ranking model learn the cultural tie between the two languages. On the other hand, LANGRANK ranked Japanese above Arabic, possibly because the features it uses mainly capture typological rather than cultural similarity.

6 Related Work

Quantifying Cross-cultural Similarity A few recent works in psycholinguistics and NLP have aimed to measure cultural differences, mainly from word-level semantics. Lin et al. (2018) suggested a cross-lingual word alignment method that preserves the cultural and social context of words; they derive cross-cultural similarity from the embeddings of a bilingual lexicon in the shared representation space. Thompson et al. (2018) computed similarity by comparing the nearest neighborhoods of words in different languages, showing that words in some domains (e.g., time, quantity) exhibit higher cross-lingual alignment than others (e.g., politics, food, emotions). Jackson et al. (2019) represented each language as a network of emotion concepts derived from their colexification patterns and measured the similarity between networks.

Auxiliary Language Selection in Cross-lingual Tasks There has been active work on leveraging multiple languages to improve cross-lingual systems (Neubig and Hu, 2018; Ammar et al., 2016). Adapting auxiliary language datasets to the target language task can proceed through either language selection or data selection. Previous work on language selection mostly relied on syntactic or semantic resemblance between languages (e.g., n-gram overlap) to choose the best transfer languages (Zoph et al., 2016; Wang and Neubig, 2019). Our approach extends this line of work by leveraging cross-cultural pragmatics, an aspect that has been unexplored in prior work.

7 Future Directions

Typology of Cross-cultural Pragmatics The features proposed here provide three dimensions of a provisional quantitative cross-linguistic typology of pragmatics in language. Having been validated both intrinsically and extrinsically, they can be used in future studies as a stand-in for cross-cultural similarity. They also open a new avenue of research, raising questions about what other quantitative features of language are correlates of cultural and pragmatic difference.

Model Probing Fine-tuning pretrained models for downstream tasks has become the de facto standard in NLP, and the success of these models has promoted the development of their multilingual extensions (Devlin et al., 2019; Lample and Conneau, 2019). While the performance gains from these models are undeniable, their learning dynamics remain obscure. This issue has prompted various probing methods designed to test what kinds of linguistic information the models retain, including syntactic and semantic knowledge (Conneau et al., 2018; Liu et al., 2019; Ravishankar et al., 2019; Tenney et al., 2019). Similarly, our features can be employed as a touchstone to evaluate a model's knowledge of cross-cultural pragmatics.

Investigating how different pretraining tasks affect the learning of pragmatic knowledge will also be an interesting direction for research.

8 Conclusion

In this work, we propose three pragmatically-inspired features that capture cross-cultural similarities that arise as linguistic patterns: language context-level ratio, literal translation quality, and emotion semantic distance. Through feature analyses, we examine whether our features can operate as valid proxies of cross-cultural similarity. From a practical standpoint, the experimental results show that our features can help select the best transfer language for cross-lingual transfer in pragmatically-driven tasks, such as sentiment analysis.

Acknowledgements

The authors are grateful to the anonymous reviewers for their invaluable feedback. This material is based upon work supported by the National Science Foundation under Grant No. IIS2007960. We would also like to thank Amazon for providing AWS credits.

References

Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard H. Hovy, Kai-Wei Chang, and Nanyun Peng. 2018. Near or far, wide range zero-shot cross-lingual dependency parsing. CoRR, abs/1811.00570.

Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444.

Johannes Bjerva, Robert Östling, Maria Han Veiga, Jörg Tiedemann, and Isabelle Augenstein. 2019. What do language representations really represent? Computational Linguistics, 45(2):381–389.

Christopher J. Burges, Robert Ragno, and Quoc V. Le. 2007. Learning to rank with nonsmooth cost functions. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 193–200. MIT Press.

Rafael Monroy Casas and J. M. Hernández Campoy. 1995. A sociolinguistic approach to the study of idioms: Some anthropolinguistic sketches. Cuadernos de Filología Inglesa, 4.

Yo Joong Choe, Kyubyong Park, and Dongwoo Kim. 2020. word2word: A collection of bilingual lexicons for 3,564 language pairs. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020).

Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.

Béatrice Daille. 1994. Approche mixte pour l'extraction automatique de terminologie: statistiques lexicales et filtres linguistiques [A hybrid approach to automatic terminology extraction: lexical statistics and linguistic filters]. Ph.D. thesis, Université Paris 7.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734.

Aparna Garimella, Rada Mihalcea, and James Pennebaker. 2016. Identifying cross-cultural differences in word usage. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 674–683, Osaka, Japan. The COLING 2016 Organizing Committee.

Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 759–765, Istanbul, Turkey. European Language Resources Association (ELRA).

Edward Twitchell Hall. 1989. Beyond Culture. Anchor.

Geert H. Hofstede, Gert Jan Hofstede, and Michael Minkov. 2005. Cultures and Organizations: Software of the Mind, volume 2. McGraw-Hill, New York.

Joshua Conrad Jackson, Joseph Watts, Teague R. Henry, Johann-Mattis List, Robert Forkel, Peter J. Mucha, Simon J. Greenhill, Russell D. Gray, and Kristen A. Lindquist. 2019. Emotion semantics show both cultural variation and universal structure. Science, 366(6472):1517–1522.

Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351.

Jihen Karoui, Farah Benamara, Véronique Moriceau, Viviana Patti, Cristina Bosco, and Nathalie Aussenac-Gilles. 2017. Exploring the impact of pragmatic phenomena on irony detection in tweets: A multilingual corpus study. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 262–272, Valencia, Spain. Association for Computational Linguistics.

Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3146–3154. Curran Associates, Inc.

Zoltán Kövecses. 2003. Language, figurative thought, and cross-cultural comparison. Metaphor and Symbol, 18(4):311–320.

Zoltán Kövecses. 2010. Metaphor: A Practical Introduction. Oxford University Press.

Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).

Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In International Conference on Learning Representations.

Xin Li, Lidong Bing, Wenxuan Zhang, and Wai Lam. 2019. Exploiting BERT for end-to-end aspect-based sentiment analysis. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 34–41, Hong Kong, China. Association for Computational Linguistics.

Bill Yuchen Lin, Frank F. Xu, Kenny Zhu, and Seung-won Hwang. 2018. Mining cross-cultural differences and similarities in social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 709–719, Melbourne, Australia. Association for Computational Linguistics.

Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics.

Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14, Valencia, Spain. Association for Computational Linguistics.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics.

Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2529–2535, Copenhagen, Denmark. Association for Computational Linguistics.

Korac-Kakabadse Nada, Kouzmin Alexander, Korac-Kakabadse Andrew, and Savery Lawson. 2001. Low- and high-context communication patterns: towards mapping cross-cultural encounters. Cross Cultural Management: An International Journal, 8(2):3–24.

Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875–880, Brussels, Belgium. Association for Computational Linguistics.

Dat Quoc Nguyen, Dai Quoc Nguyen, Dang Duc Pham, and Son Bao Pham. 2014. RDRPOSTagger: A ripple down rules-based part-of-speech tagger. In Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 17–20, Gothenburg, Sweden. Association for Computational Linguistics.

Joakim Nivre, Mitchell Abrams, Željko Agić, et al. 2018. Universal Dependencies 2.2. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Ulrike Oster. 2019. Cross-cultural semantic and pragmatic profiling of emotion words: Regulation and expression of anger in Spanish and German. Current Approaches to Metaphor Analysis in Discourse, 39:35.
Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535, New Orleans, Louisiana. Association for Computational Linguistics.

Vinit Ravishankar, Memduh Gökırmak, Lilja Øvrelid, and Erik Velldal. 2019. Multilingual probing of deep pre-trained contextual encoders. In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 37–47, Turku, Finland. Linköping University Electronic Press.

Brian Richards. 1987. Type/token ratios: What do they really tell us? Journal of Child Language, 14(2):201–209.

Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota. Association for Computational Linguistics.

Bernard J. Siegel. 1977. Encyclopedia of Anthropology. David E. Hunter and Phillip Whitten, eds. New York. American Anthropologist, 79(2):452–454.

Laura Smith, Salvatore Giorgi, Rishi Solanki, Johannes Eichstaedt, H. Andrew Schwartz, Muhammad Abdul-Mageed, Anneke Buffone, and Lyle Ungar. 2016. Does 'well-being' translate on Twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas. Association for Computational Linguistics.

Yulia Tsvetkov and Shuly Wintner. 2010. Extraction of multi-word expressions from small parallel corpora. In Coling 2010: Posters, pages 1256–1264, Beijing, China. Coling 2010 Organizing Committee.

Jelena Vulanović. 2014. Cultural markedness and strategies for translating idiomatic expressions in the epic poem "The Mountain Wreath" into English. Mediterranean Journal of Social Sciences, 5(13):210.

Xinyi Wang and Graham Neubig. 2019. Target conditioned sampling: Optimizing data selection for multilingual neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5823–5828, Florence, Italy. Association for Computational Linguistics.

Shijie Wu and Mark Dredze. 2019. Beto, Bentz, Becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics.

Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324–2335, Minneapolis, Minnesota. Association for Computational Linguistics.

Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas. Association for Computational Linguistics.
  Does ‘well-being’translate on twitter? In Proceed-         2016 Conference on Empirical Methods in Natu-
  ings of the 2016 Conference on Empirical Methods           ral Language Processing, pages 1568–1575, Austin,
  in Natural Language Processing, pages 2042–2047.           Texas. Association for Computational Linguistics.

Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang.
  2019. How to fine-tune bert for text classification?
  In Chinese Computational Linguistics, pages 194–
  206, Cham. Springer International Publishing.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang,
   Adam Poliak, R. Thomas McCoy, Najoung Kim,
   Benjamin Van Durme, Samuel R. Bowman, Dipan-
   jan Das, and Ellie Pavlick. 2019. What do you
   learn from context? probing for sentence structure
   in contextualized word representations. In 7th Inter-
   national Conference on Learning Representations,
   ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
   OpenReview.net.

B. Thompson, S. Roberts, and G. Lupyan. 2018. Quan-
   tifying semantic similarity across languages. Pro-
   ceedings of the 40th Annual Conference of the Cog-
   nitive Science Society (CogSci 2018).

Yulia Tsvetkov and Shuly Wintner. 2010. Extraction of
  multi-word expressions from small parallel corpora.

                                                      2413
A    Datasets for Sentiment Analysis

Dataset                                   Language   Domain        Size    POS/NEG

SemEval-2016 Aspect Based                 Chinese    electronics    2333    1.53
Sentiment Analysis                        Arabic     hotel          4111    1.54
                                          English    restaurant     1472    2.14
                                          Dutch      restaurant     1089    1.43
                                          Spanish    restaurant     1396    2.82
                                          Russian    restaurant     2289    3.81
                                          Turkish    restaurant      907    1.32
SentiPers                                 Persian    product        3904    1.80
Amazon Customer Reviews                   French     product       20771    8.00
                                          German     product       56333    6.56
                                          Japanese   product       21095    8.05
CSFD CZ                                   Czech      movie         54540    1.04
Naver Sentiment Movie Corpus              Korean     movie         18000    1.00
Tamil Movie Review Dataset                Tamil      movie           417    0.48
PolEval 2017                              Polish     product       26284    1.38
Aspect based Sentiment Analysis           Hindi      product        2707    3.22

Table 3: Datasets for sentiment analysis. POS/NEG denotes the ratio of positive to negative examples.

B    Task-Specific Model Details

SA Cross-lingual Model We performed supervised fine-tuning of multilingual BERT (mBERT) (Devlin et al., 2019) for the sentiment analysis task, as the model has shown strong results on various text classification tasks in cross-lingual settings (Sun et al., 2019; Xu et al., 2019; Li et al., 2019). mBERT is pretrained on 104 languages, including the 16 languages used throughout our experiments. We used a concatenation of mean- and max-pooled representations from mBERT's penultimate layer, as it outperformed the standard practice of using the last layer's [CLS] token. This representation was passed to a fully connected layer for prediction. To extract optimal transfer rankings, we conducted zero-shot transfer with mBERT: we fine-tuned mBERT on transfer-language data and tested it on target-language data.
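
The following is a minimal sketch of this classification head, assuming the PyTorch and HuggingFace Transformers APIs; the class name, label count, and checkpoint name are illustrative rather than the exact training code used in our experiments:

import torch
import torch.nn as nn
from transformers import AutoModel

class SentimentHead(nn.Module):
    """Classifier over mBERT's penultimate layer (mean ++ max pooling)."""

    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(
            model_name, output_hidden_states=True)
        hidden = self.encoder.config.hidden_size
        # Concatenating mean- and max-pooled vectors doubles the width.
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.hidden_states[-2]                    # penultimate layer: (B, T, H)
        mask = attention_mask.unsqueeze(-1).float()
        # Mask out padding so it does not skew either pooling operation.
        mean_pool = (h * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        max_pool = h.masked_fill(mask == 0, float("-inf")).max(dim=1).values
        return self.classifier(torch.cat([mean_pool, max_pool], dim=-1))

Fine-tuning then proceeds with a standard cross-entropy loss over the sentiment labels; zero-shot transfer amounts to evaluating a model trained on the transfer language directly on the target-language test set.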
Ranking Model We used LightGBM (Ke et al., 2017) with the LambdaRank (Burges et al., 2007) algorithm. The model consists of 100 decision trees with 16 leaves each, and it was trained with a learning rate of 0.1, optimizing NDCG (Järvelin and Kekäläinen, 2002).
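
For concreteness, this configuration can be reproduced with LightGBM's scikit-learn interface as sketched below; the feature matrix and relevance labels are random placeholders standing in for the language-similarity features and gold transfer rankings described above:

import lightgbm as lgb
import numpy as np

# Placeholder data: one row per (target language, candidate transfer
# language) pair. With 16 languages, each target has 15 candidates.
rng = np.random.default_rng(0)
X = rng.random((16 * 15, 10))          # hypothetical similarity features
y = rng.integers(0, 15, size=16 * 15)  # relevance derived from gold rankings
groups = [15] * 16                     # candidates are grouped by target

ranker = lgb.LGBMRanker(
    objective="lambdarank",  # LambdaRank objective, optimizing NDCG
    n_estimators=100,        # 100 decision trees
    num_leaves=16,           # 16 leaves per tree
    learning_rate=0.1,
)
ranker.fit(X, y, group=groups)

# Higher scores indicate more promising transfer languages; sorting the
# scores for one target yields its predicted transfer-language ranking.
predicted_order = np.argsort(-ranker.predict(X[:15]))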
