ZusammenQA: Data Augmentation with Specialized Models for Cross-lingual Open-retrieval Question Answering System

Chia-Chien Hung¹, Tommaso Green¹, Robert Litschko¹, Tornike Tsereteli¹, Sotaro Takeshita¹, Marco Bombieri², Goran Glavaš³ and Simone Paolo Ponzetto¹

¹Data and Web Science Group, University of Mannheim, Germany
²ALTAIR Robotics Lab, University of Verona, Italy
³CAIDAS, University of Würzburg, Germany

{chia-chien.hung, tommaso.green, robert.litschko, tornike.tsereteli, sotaro.takeshita, ponzetto}@uni-mannheim.de
marco.bombieri_01@univr.it, goran.glavas@uni-wuerzburg.de

arXiv:2205.14981v1 [cs.CL] 30 May 2022

Abstract

This paper introduces our proposed system for the MIA Shared Task on Cross-lingual Open-retrieval Question Answering (COQA). In this challenging scenario, given an input question the system has to gather evidence documents from a multilingual pool and generate from them an answer in the language of the question. We devised several approaches combining different model variants for three main components: Data Augmentation, Passage Retrieval, and Answer Generation. For passage retrieval, we evaluated the monolingual BM25 ranker against the ensemble of re-rankers based on multilingual pretrained language models (PLMs) and also variants of the shared task baseline, re-training it from scratch using a recently introduced contrastive loss that maintains a strong gradient signal throughout training by means of mixed negative samples. For answer generation, we focused on language- and domain-specialization by means of continued language model (LM) pretraining of existing multilingual encoders. Additionally, for both passage retrieval and answer generation, we augmented the training data provided by the task organizers with automatically generated question-answer pairs created from Wikipedia passages to mitigate the issue of data scarcity, particularly for the low-resource languages for which no training data were provided. Our results show that language- and domain-specialization as well as data augmentation help, especially for low-resource languages.

1 Introduction

Open-retrieval Question Answering (OQA), where the agent helps users to retrieve answers from large-scale document collections with given open questions, has arguably been one of the most challenging natural language processing (NLP) applications in recent years (e.g., Lewis et al., 2020; Karpukhin et al., 2020; Izacard and Grave, 2021). As is the case with the vast majority of NLP tasks, much of the OQA work has focused on English, relying on a pipeline that crucially depends on a neural passage retriever, i.e., a (re-)ranking model – trained on large-scale English QA datasets – to find evidence passages in English (Lewis et al., 2020) for answer generation. Unlike in many other retrieval-based tasks, such as ad-hoc document retrieval (Craswell et al., 2021), parallel sentence mining (Zweigenbaum et al., 2018), or Entity Linking (Wu et al., 2020), the progress toward Cross-lingual Open-retrieval Question Answering (COQA) has been hindered by the lack of efficient integration and consolidation of knowledge expressed in different languages (Loginova et al., 2021). COQA is especially relevant for opinionated information, such as news, blogs, and social media. In the era of fake news and deliberate misinformation, training on only (or predominantly) English texts is more likely to lead to more biased and less reliable NLP models. Further, an Anglo- and Indo-European-centric NLP (Joshi et al., 2020) is unrepresentative of the needs of the majority of the world's population (e.g., Mandarin and Spanish have more native speakers than English, and Hindi and Arabic come close) and contributes to the widening of the digital language divide.¹ Developing solutions for cross-lingual open QA (COQA) thus contributes towards the goal of global equity of information access.

¹http://labs.theguardian.com/digital-language-divide/
[Figure 1 graphic: a Multilingual Passage Retriever (supervised models: mDPR, mDPR with MixCSE; unsupervised models: an ensemble of DISTIL, LaBSE, MiniLM, and MPNet; Oracle Monolingual BM25) passes retrieved data to a Multilingual Answer Generator (mGEN (mT5), trained with MLM on Wiki(14) + CCNet(2)), followed by evaluation.]

Figure 1: The proposed pipeline for the cross-lingual QA problem. The pipeline is composed of two stages: (i) the retrieval of documents containing a possible answer (red box) and (ii) the generation of an answer (blue box). For the retrieval part, we exploited different methods, based both on training mDPR variants and on the ensembling of blackbox models. For training the mDPR variants, we enlarge the original training dataset with samples from our data augmentation pipeline. For the generation part, we enrich the existing baseline method with data augmentation and masked language modeling.

COQA is the task of automatic question answering where the answer is to be found in a large multilingual document collection. It is a challenging NLP task, with questions written in a user's preferred language, where the system needs to find evidence in a large-scale document collection written in different languages; the answer then needs to be returned to the user in their preferred language (i.e., the language of the question).

More formally, the goal of a COQA system is to find an answer a to the query q in the collection of documents $\{D\}_i^N$. In the cross-lingual setting, q and $\{D\}_i^N$ are, in general, in different languages. For example, a user can ask a question in Japanese, and the system can search an English document for an answer that then needs to be returned to the user in Japanese.

In this work, we propose data augmentation for specialized models that correspond to the two main components of a standard COQA system – passage retrieval and answer generation: (1) we first extract the passages from all documents of all languages, exploiting both unsupervised and supervised (e.g., mDPR variants) passage retrieval methods; (2) we then use the retrieved passages from the previous step and further conduct intermediate training of a pretrained language model (e.g., mT5 (Xue et al., 2021)) on the extracted augmented data in order to inject language-specific knowledge into the model, and then generate the answers for each question from the different language portions. As a result, we obtain a specialized model trained on the augmented data for the COQA system. The overall process is illustrated in Figure 1.

2 Data Augmentation

We use a language model to generate question-answer (QA) pairs from English texts, which we then filter according to a number of heuristics and translate into the other 15 languages.² An example can be seen in Figure 2 in the Appendix.

²Arabic (AR), Bengali (BN), Finnish (FI), Japanese (JA), Korean (KO), Russian (RU), Telugu (TE), Spanish (ES), Khmer (KM), Malay (MS), Swedish (SV), Turkish (TR), Chinese (ZH-CN), Tagalog (TL) and Tamil (TA).

2.1 Question-Answer Generation

For generating the question-answer pairs, we use the provided Wikipedia passages as the input to a language model, which then generates questions and answers based on the input text. We based our choice of the model on the findings of Dong et al. (2019) and Bao et al. (2020), who showed that language models fine-tuned jointly on Question Answering and Question Generation outperform individual models fine-tuned independently on those tasks. More specifically, we use the model by Dugan et al. (2022) and make slight modifications. Dugan et al. (2022) used a T5 model fine-tuned on SQuAD and further fine-tuned it on three tasks simultaneously: Question Generation (QG), Question Answering (QA), and Answer Extraction (AE).
They also included a summarization module to create lexically diverse question-answer pairs. We found that using this module sometimes leads to factually incorrect passages, so we leave its use to future work. Similar to Dugan et al. (2022), we split the original passages into whole sentences so that each sub-passage is shorter than 512 tokens.³ We then generate the pairs using the first three sub-passages.

³We use a different sentence splitting method, namely pySBD.
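To make this concrete, the following is a minimal, hedged sketch of the splitting and generation steps. The pySBD segmenter is the one named in footnote 3; the checkpoint name ("t5-base") and the prompt strings are illustrative placeholders, not the actual configuration of Dugan et al. (2022).

```python
# Illustrative sketch of the passage splitting + QA-pair generation step (Sec. 2.1).
# The checkpoint and prompt formats below are assumptions for illustration only.
import pysbd
from transformers import pipeline

segmenter = pysbd.Segmenter(language="en", clean=False)
generator = pipeline("text2text-generation", model="t5-base")  # placeholder checkpoint

def split_passage(passage, max_tokens=512, n_chunks=3):
    """Pack whole sentences into sub-passages shorter than max_tokens (whitespace tokens)."""
    chunks, current = [], []
    for sentence in segmenter.segment(passage):
        if current and len(" ".join(current + [sentence]).split()) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
        current.append(sentence)
    if current:
        chunks.append(" ".join(current))
    return chunks[:n_chunks]  # we only keep the first three sub-passages

def generate_qa_pairs(passage):
    pairs = []
    for chunk in split_passage(passage):
        # Answer Extraction (AE) first, then Question Generation (QG) conditioned on it.
        answer = generator(f"extract answer: {chunk}")[0]["generated_text"]
        question = generator(f"answer: {answer} context: {chunk}")[0]["generated_text"]
        pairs.append((question, answer))
    return pairs
```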
2.2 Filtering

Before translating question-answer pairs, to ensure better translations, we enforce each pair to satisfy at least one of a number of heuristics, which we determined through manual evaluation of the generated pairs. Each pair is evaluated on whether one of the following is true (in the respective order): the answer is a number, the question starts with the word who, the question starts with the words how many, or the answer contains a number or a date. After filtering, we are left with roughly 339,000 question-answer pairs.
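A minimal sketch of these checks is shown below; reducing the "contains a date" test to a digit test is a simplification of this sketch, not necessarily of our actual filter.

```python
# Sketch of the Sec. 2.2 filtering heuristics, checked in the stated order:
# a pair is kept if any single test passes.
import re

def keep_pair(question: str, answer: str) -> bool:
    q = question.strip().lower()
    a = answer.strip()
    return (
        a.isdigit()                    # the answer is a number
        or q.startswith("who")         # the question starts with "who"
        or q.startswith("how many")    # the question starts with "how many"
        or bool(re.search(r"\d", a))   # the answer contains a number or a date
    )
```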
2.3 Translation

We use the Google Translate API provided by translatepy⁴ to translate the filtered question-answer pairs from English into the relevant 15 languages. Each language has an equal number of question-answer pairs. In total, we generate about 5.4 million pairs for all languages combined.

⁴https://github.com/Animenosekai/translate
3 Methodology

Following the approach described in Asai et al. (2021b), we consider the COQA problem as two sub-components: the retrieval of documents containing a possible answer and the generation of an answer. Figure 1 summarizes the proposed methods. This section is organized as follows: we present the proposed retrieval methods in §3.1 and demonstrate the language-specialized methods for answer generation in §3.2.

3.1 Passage Retrieval

For the passage retrieval phase, we explored the approaches described in the following sections, which fall into three main categories: the enhancement of the training procedure of the mDPR baseline, the ensembling of blackbox models (i.e., retrieval using multilingual sentence encoders trained for semantic similarity), and lexical retrieval using BM25. While the first category is a supervised approach, which uses QA datasets to inject task knowledge into pretrained models, the other two rely on general linguistic knowledge for retrieval (i.e., they are unsupervised).

Baseline: mDPR We take as a baseline the method proposed in Asai et al. (2021b). They propose mDPR (Multilingual Dense Passage Retriever), a model that extends the Dense Passage Retriever (DPR) (Qu et al., 2021) to a multilingual setting. It is made of two mBERT-based encoders (Devlin et al., 2019), one for the question and one for the passages. The training approach proceeds over two subsequent stages: (i) parameter updates and (ii) cross-lingual data mining.

In the first phase, both mDPR and mGEN (§3.2) are trained one after the other. For mDPR, the model processes a dataset $D = \{(q_i, p_i^+, p_{i,1}^-, p_{i,2}^-, \ldots, p_{i,n}^-)\}_{i=1}^{m}$ made of tuples containing a question $q_i$, the passage $p_i^+$ containing an answer (called positive or gold), and a set $\{p_{i,j}^-\}_{j=1}^{n}$ of negative passages. For every question, negatives are made up of the positive passages of the other questions, or of passages either extracted at random or produced by the subsequent data mining phase. To do this, they use a contrastive loss ($\mathcal{L}_{\mathrm{mdpr}}$) that moves the embedding of the question close to its positive passage, while at the same time repelling the representations of negative passages:

$$\mathcal{L}_{\mathrm{mdpr}} = -\log \frac{\langle e_{q_i}, e_{p_i^+} \rangle}{\langle e_{q_i}, e_{p_i^+} \rangle + \sum_{j=1}^{n} \langle e_{q_i}, e_{p_{i,j}^-} \rangle}$$
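A minimal PyTorch sketch of this loss is given below. We write it in the standard softmax (cross-entropy) form over dot-product scores of one positive and n negatives; batching and in-batch negative sharing are omitted, and the softmax form is an assumption of this sketch.

```python
# Sketch of L_mdpr as softmax cross-entropy over dot-product scores.
import torch
import torch.nn.functional as F

def mdpr_loss(q_emb, pos_emb, neg_embs):
    """q_emb: (d,); pos_emb: (d,); neg_embs: (n, d)."""
    pos_score = torch.dot(q_emb, pos_emb)        # <e_q, e_p+>
    neg_scores = neg_embs @ q_emb                # <e_q, e_p-_j>, j = 1..n
    scores = torch.cat([pos_score.unsqueeze(0), neg_scores]).unsqueeze(0)
    target = torch.zeros(1, dtype=torch.long)    # the positive sits at index 0
    return F.cross_entropy(scores, target)
```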
In the second stage, the training set is expanded by finding new positive and negative passages using Wikipedia language links and mGEN (§3.2) to automatically label passages. This two-staged training pipeline is repeated T times.

mDPR variants One of our approaches is to simply substitute the loss function presented above with a contrastive loss described in Wang et al. (2022), named MixCSE. In this work, the authors tackle a common problem of out-of-the-box BERT sentence embeddings, called anisotropy (Li et al., 2020), which makes all sentence representations be distributed in a narrow cone. Contrastive learning has already proven effective in alleviating this issue by distributing embeddings in a larger space (Gao et al., 2021). Wang et al. (2022) prove that hard negatives, i.e., data points hard to distinguish from the selected anchor, are key for keeping a strong gradient signal; however, as learning proceeds, they become orthogonal to the anchor and make the gradient signal close to zero. For this reason, the key idea of MixCSE is to continually generate hard negatives by mixing positive and negative examples, which maintains a strong gradient signal throughout training. Adapting this concept to our retrieval scenario, we construct mixed negative passages as follows:

$$\tilde{e}_i = \frac{\lambda\, e_{p_i^+} + (1-\lambda)\, e_{p_{i,j}^-}}{\left\lVert \lambda\, e_{p_i^+} + (1-\lambda)\, e_{p_{i,j}^-} \right\rVert_2},$$

where $p_{i,j}^-$ is a negative passage chosen at random. We provide the equation of the loss in Appendix B. The main difference with $\mathcal{L}_{\mathrm{mdpr}}$ is the addition of a mixed negative in the denominator and the similarity used (exponential of cosine similarity instead of dot product).
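The mixed-negative construction itself is a one-liner; the sketch below assumes, following our reading of Wang et al. (2022), that no gradient flows through the mixed negatives.

```python
# Sketch of the MixCSE-style mixed negative from the equation above.
import torch.nn.functional as F

def mixed_negative(pos_emb, neg_emb, lam=0.2):
    """Convex mix of the positive and a randomly chosen negative passage
    embedding, L2-normalized; lam = 0.2 follows our setup (Sec. 4)."""
    mix = lam * pos_emb + (1.0 - lam) * neg_emb
    return F.normalize(mix, dim=-1).detach()  # stop-gradient through the mix
```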
We train mDPR with the original loss and with the MixCSE loss on the concatenation of the provided training set for mDPR and the augmented data obtained via the methods described in §2. We refer to these two variants as mDPR(AUG) and mDPR(AUG) with MixCSE, respectively.

Ensembling “blackbox” models Following the approaches presented in Litschko et al. (2022), we also ensemble the rankings of several blackbox models that directly produce a semantic embedding of the input text. We provide a brief overview of the models included in our ensemble below.

• DISTIL (Reimers and Gurevych, 2020) is a teacher-student framework for injecting the knowledge obtained through specialization for semantic similarity from a specialized monolingual transformer (e.g., BERT) into a non-specialized multilingual transformer (e.g., mBERT). For semantic similarity, it first specializes a monolingual (English) teacher encoder using the available semantic sentence-matching datasets for supervision. In the second, knowledge distillation step, a pretrained multilingual student encoder is trained to mimic the output of the teacher model. We benchmark different DISTIL models:

  – DISTILuse: instantiates the student as the pretrained m-USE (Yang et al., 2020) instance;
  – DISTILxlmr: initializes the student model with the pretrained XLM-R (Conneau et al., 2020) transformer;
  – DISTILdmbert: distills the knowledge from the Sentence-BERT (Reimers and Gurevych, 2019) teacher into a multilingual version of DistilBERT (Sanh et al., 2019), a 6-layer transformer pre-distilled from mBERT.

• LaBSE (Language-agnostic BERT Sentence Embeddings; Feng et al., 2020) is a neural dual-encoder framework, trained with parallel data. LaBSE training starts from a pretrained mBERT instance and additionally uses the standard self-supervised objectives used in the pretraining of mBERT and XLM (Conneau and Lample, 2019): masked and translation language modelling (MLM and TLM).

• MiniLM (Wang et al., 2020) is a student model trained by deeply mimicking the self-attention behavior of the last Transformer layer of the teacher, which allows a flexible number of layers for the students and alleviates the effort of finding the best layer mapping.

• MPNet (Song et al., 2020) is based on a pretraining method that leverages the dependency among the predicted tokens through permuted language modeling and makes the model see auxiliary position information to reduce the discrepancy between pre-training and fine-tuning.

We produce an ensemble of the blackbox models by simply taking the average of the ranks of each retrieved document, which is denoted as EnsembleRank.
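This rank averaging can be sketched as follows, assuming every model ranks the same candidate pool:

```python
# Sketch of EnsembleRank: average each document's rank over all blackbox
# models and re-sort by the averaged rank (lower is better).
from collections import defaultdict

def ensemble_rank(rankings):
    """rankings: list of ranked doc-id lists, one per model."""
    rank_sums = defaultdict(float)
    for ranking in rankings:
        for position, doc_id in enumerate(ranking):
            rank_sums[doc_id] += position
    return sorted(rank_sums, key=lambda d: rank_sums[d] / len(rankings))
```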
Oracle Monolingual BM25 (Sparck Jones et al., 2000) This approach consists of two phases: first, we automatically detect the language of the question, then we query the index in the detected language. As the weighting scheme in the vector space model, we choose BM25. It is based on a probabilistic interpretation of how terms contribute to a document's relevance: it uses exact term matching, and the score is derived from a sum of contributions from each query term that appears in the document. We use an oracle BM25 approach: this naming derives from the fact that we query the index with the answer rather than the question. This was done at training time to increase the probability of the answer being in the passages consumed by mGEN, so that the generation model would hopefully learn to extract the answer from its input, rather than generating it from the question only. At inference time, we query the index using the question.
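A sketch of this two-phase setup is given below, with langdetect and rank_bm25 as stand-ins for our language identifier and per-language indexes (both stand-ins are assumptions of the sketch):

```python
# Sketch of oracle monolingual BM25 retrieval: detect the question language,
# then query that language's index -- with the answer at training time (oracle)
# and with the question at inference time.
from langdetect import detect
from rank_bm25 import BM25Okapi

INDEXES = {}  # lang -> (BM25Okapi over tokenized passages, passages); built offline

def bm25_retrieve(question, answer=None, k=100):
    lang = detect(question)                     # phase 1: language identification
    bm25, passages = INDEXES[lang]              # phase 2: monolingual index lookup
    query = answer if answer is not None else question  # oracle uses the answer
    scores = bm25.get_scores(query.split())
    top_ids = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return [passages[i] for i in top_ids]
```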
3.2 Answer Generation

Our answer generation modules take a concatenation of the question and the related documents retrieved by the retrieval module as an input and generate an answer. In this section, we first explain the baseline system, which is the basis of our proposed approaches, and then present our specialization method.

Baseline: mGEN We use mGEN (Multilingual Answer Generator; Asai et al. (2021b)) as the baseline for the answer generation phase. They propose to take mT5 (Xue et al., 2021), a multilingual version of a pretrained transformer-based encoder-decoder model (Raffel et al., 2020), and fine-tune it for multilingual answer generation. The pretraining process of mT5 is based on a variant of masked language modeling named span-corruption, in which the objective is to reconstruct continuously masked tokens in an input sentence (Xue et al., 2021). For fine-tuning, the model is trained on a sequence-to-sequence (seq2seq) task, generating the answer $a^L$ in the question's language $L$ token by token, conditioned on the question $q^L$ and the retrieved passages $P^N$:

$$P(a^L \mid q^L, P^N) = \prod_{i=1}^{T} p(a_i^L \mid a_{<i}^L, q^L, P^N)$$
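As an illustration of the fine-tuning input, the sketch below concatenates the question with the top retrieved passages and decodes an answer with mT5; the "question: ... context: ..." template is an assumption of this sketch, not the organizers' exact format.

```python
# Sketch of mGEN-style answer generation with mT5.
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

def generate_answer(question, passages, top_k=15):
    source = f"question: {question} context: {' '.join(passages[:top_k])}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```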
For language- and domain-specialization, we further perform continued masked language modeling (MLM) pretraining of mT5 on the released Wikipedia passages, which cover 14 of the 16 evaluation languages; the data for the two remaining surprise languages (Tamil, Tagalog) are from CCNet. We additionally clean all language portions by removing email addresses, URLs, extra emojis and punctuation, and select 7K passages for training and 0.7K for validation for each language. In this way, we inject both the domain-specific (i.e., Wikipedia knowledge) and language-specific (i.e., 16 languages) knowledge into the multilingual pretrained language model via MLMing as an intermediate specialization step.

Augmentation Data Variants To further investigate the model's capability of (1) extracting answers from English passages or (2) extracting answers from translated passages, while keeping the Question-Answer pairs in other, non-English languages, we conduct experiments on two augmentation data variants: AUG-QA and AUG-QAP. AUG-QA keeps the English passage with the translated Question-Answer pairs, while AUG-QAP translates the English passage to the same language as the translated Question-Answer pairs. Detailed examples are shown in Table 1.

4 Experimental Setup

We demonstrate the effectiveness of our proposed COQA systems by comparing them to the baseline models and thoroughly comparing the different specialization methods from §3.

Evaluation Task and Measures Our proposed systems are evaluated with F1 scores computed between the predicted and ground-truth answers. The overall score is calculated by using macro-average scores on the XOR-TyDi QA and MKQA datasets, and then taking the average F1 scores of both datasets.
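In other words, the overall score reduces to the following computation:

```python
# Sketch of the overall evaluation score (Sec. 4): macro-average F1 within
# each dataset, then the mean of the two dataset-level scores.
def overall_score(xor_f1_by_lang, mkqa_f1_by_lang):
    xor_macro = sum(xor_f1_by_lang.values()) / len(xor_f1_by_lang)
    mkqa_macro = sum(mkqa_f1_by_lang.values()) / len(mkqa_f1_by_lang)
    return (xor_macro + mkqa_macro) / 2
```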
QA pair (translated into Japanese, used by both variants):
  Q: レゴグループを設立したのは誰ですか? ("Who founded the Lego Group?")
  A: オレ・カーク・クリスチャンセン ("Ole Kirk Christiansen")

AUG-QA passage (kept in English):
  The Lego Group began manufacturing the interlocking toy bricks in 1949. Movies, games, competitions, and six Legoland amusement parks have been developed under the brand. As of July 2015, 600 billion Lego parts had been produced. In February 2015, Lego replaced Ferrari as Brand Finance's "world's most powerful brand". History. The Lego Group began in the workshop of Ole Kirk Christiansen

AUG-QAP passage (translated into Japanese):
  レゴグループは1949年に連動おもちゃのレンガの製造を開始しました。映画、ゲーム、競技会、および6つのレゴランド遊園地がこのブランドで開発されました。2015年7月現在、6,000億個のレゴパーツが生産されています。2015年2月、レゴはブランドファイナンスの「世界で最も強力なブランド」としてフェラーリに取って代わりました。歴史。レゴグループは、OleKirkChristiansenのワークショップで始まりました

Table 1: Examples of augmented training instances for AUG-QA and AUG-QAP. Top row: translated question-answer pair in Japanese. Below are the two training examples: (1) AUG-QA: the English Wikipedia passage is kept with the translated question-answer pair. (2) AUG-QAP: the English Wikipedia passage is translated to the same language as the question-answer pair.

Dataset            Lang  Train size
Natural Questions  en    76635
XOR-TyDi QA        ar    18402
                   bn     5007
                   fi     9768
                   ja     7815
                   ko     4319
                   ru     9290
                   te     6759

Table 2: The training data size for 8 languages.
Dataset          Lang  Dev size  Test size
MKQA (parallel)  12    1758      5000
XOR-TyDi QA      ar     590      1387
                 bn     203       490
                 fi    1368       974
                 ja    1056       693
                 ko    1048       473
                 ru     910      1018
                 te     873       564
Surprise         ta       -       350
                 tl       -       350

Table 3: The development and test data size for each language. The data size for MKQA is equal for all 12 languages. Two surprise languages are provided without development data.
Data We explicitly state that we did not train on the development data or on the subsets of Natural Questions and TyDi QA that are used to create the MKQA and XOR-TyDi QA datasets. This makes all of our proposed approaches fall into the constrained setup proposed by the organizers.

For training the mDPR variants, we exploit the organizer's dataset that was obtained from DPR Natural Questions (Qu et al., 2021) and XOR-TyDi QA gold paragraph data. More specifically, for training and validation, we always use the version of the dataset containing augmented positive and negative passages obtained from the top 50 retrieval results of the organizer's mDPR. We merge this dataset with the augmented data, filtering the latter to get 100k samples for each of the 16 languages.

We base our training data for the answer generation models on the organizer's datasets with the top 15 retrieved documents from the coupled retriever. To use the automatically generated question-answer pairs for each language from §2 for fine-tuning, we align the format with the retrieved results by randomly sampling passages from English Wikipedia as negative contexts,⁸ while we keep the seed documents as positive ones. We explore two ways of merging the positive and negative passages: in the "shuffle" style, the positive passage appears in one of the top 3 documents; in the "non-shuffle" method, the positive passage always appears on top. However, since these two configurations did not show large differences, we only report the former in this paper. We also investigated whether translating passages into the 16 different languages⁹ may be beneficial with respect to keeping all the passages in English (AUG-QA). Due to computational limitations, in our data-augmented setting for generation model fine-tuning, we use 2K question-answer pairs with positive/negative passages for each language for our final results.

⁸For the negative contexts, we use the passages that were used for generating the question-answer pairs (i.e., the first three sub-passages). These were then trimmed down to 100 tokens. We ensure that the answer is not contained in the negative contexts through lowercase string-matching.

⁹Using the same Google Translate API adopted for the QA translation in §2.3.
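The check described in footnote 8 amounts to the following:

```python
# Sketch of the negative-context validity check (footnote 8): trim the sampled
# passage to 100 tokens and reject it as a negative if it contains the answer
# (lowercase string-matching).
def is_valid_negative(passage: str, answer: str, max_tokens: int = 100) -> bool:
    trimmed = " ".join(passage.split()[:max_tokens])
    return answer.lower() not in trimmed.lower()
```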
Hyperparameters and Optimization For multilingual dense passage retrieval, we mostly follow the setup provided by the organizers: learning rate 1e-5 with AdamW (Loshchilov and Hutter, 2019), linear scheduling with warm-up for 300 steps, and dropout rate 0.1. For mDPR(AUG) with MixCSE (Wang et al., 2022), we use λ = 0.2 and τ = 0.05 for the loss (see Appendix B). We train with a batch size of 16 on 1 GPU for at most 40 epochs, using the average rank on the validation data to pick the checkpoints. The training is done independently of mGEN, in a non-iterative fashion.

For retrieving the passages, we use cosine similarity between question and passage across all proposed retrieval models, returning the top 100 passages for each of the questions.

For language-specialized pretraining via MLM, we use AdaFactor (Shazeer and Stern, 2018) with learning rate 1e-5 and linear scheduling with warm-up for 2000 steps, for up to 20 epochs. For multilingual answer generation fine-tuning, we also mostly keep the setup from the organizers: learning rate 3e-5 with AdamW (Loshchilov and Hutter, 2019), linear scheduling with warm-up for 500 steps, and dropout rate 0.1. We take the top 15 documents from the retrieved results as our input and truncate the input sequence after 16,000 tokens to fit the model into the memory constraints of our available infrastructure.
5 Results and Discussion

Results Overview Results in Table 4 show the comparison between the baseline and our proposed methods on XOR-TyDi QA, while Table 5 shows the results on MKQA. While we can see that the additional pretraining of the answer generation model (mDPR + MLM-14) helps to outperform the baseline on XOR-TyDi QA, the same approach leads to a degradation on MKQA. None of the proposed methods for the retrieval module improved over the baseline mDPR on either dataset, as shown in Table 6.

Unsupervised vs Supervised Retrieval In all evaluation settings, unsupervised retrieval methods underperform supervised methods by a large margin (see Tables 4 and 5). This might be due to the nature of the task, which is to find a document containing an answer, rather than simply finding a document similar to the input question. For this reason, such an objective might not align well with models specialized in semantic similarity (Litschko et al., 2022). Fine-tuning mBERT, however, makes the model learn to focus on retrieving an answer-containing document and not simply retrieving documents similar to the question.

Language Specialization We compare the evaluation results for the fine-tuned answer generation model with and without language specialization (i.e., MLMing): for XORQA-ar and XORQA-te we obtain +2.0 and +1.1 percentage points improvement compared to the baseline model (with mT5 trained on 100+ languages). We further distinguish MLM-14 and MLM-16, where the former is trained on the released Wikipedia passages for 14 languages and the latter is trained on the concatenation of Wikipedia passages and CCNet (Wenzek et al., 2020), to which we resort for the two surprise languages (Tamil and Tagalog), which were missing in the Wikipedia data. Overall, MLM-14 performs better than MLM-16: we hypothesize that this might be due to the domain difference between text coming from Wikipedia and CCNet: the latter is not strictly aligned with the structured (i.e., clean) text of the Wikipedia passages, and causes a slight drop in performance as we train on 2 additional languages.

Data Augmentation Data augmentation is considered a way to improve the performance on low-resource languages, bringing it closer to that on high-resource languages (Kumar et al., 2019; Riabi et al., 2021; Shakeri et al., 2021). Two variations are considered, AUG-QA and AUG-QAP: the former concatenates the XOR-TyDi QA training set with the additional augmented data containing translated Question-Answer pairs, while the latter concatenates the XOR-TyDi QA training set with the translated Question-Answer-Passage triples.¹⁰ We assume that by also translating the passages, the setting should be closer to test time, when the retrieval module can retrieve passages in any of 14 languages (without the two surprise languages). In contrast, in the AUG-QA setting, the input passages to the answer generation are always in English. Models trained with additional AUG-QA data could increase the capacity of seeing more data for unseen languages, while AUG-QAP may further enhance the ability of the model to generate answers from translated passages. As expected, models trained with additional augmented data perform better than the ones without. An encouraging finding is that, especially for the two surprise languages, the language-specialized models fine-tuned with both XOR-TyDi QA and AUG-QAP drastically improve the performance on these unseen, low-resource languages.

¹⁰The XOR-TyDi QA training set covers 8 languages (see Table 2), while the augmented data cover all 16 languages included in the test set.
XOR-TyDi QA
   Models                                   ar              bn               fi             ja       ko              ru              te             Avg.
   mDPR + mGEN (baseline 1)               49.66           33.99           39.54         39.72         25.59         40.98           36.16         37.949
   Unsupervised Retrieval
   OracleBM25 + MLM-14                     0.34             0.49            0.52           2.56        0.19          0.57            5.16          1.404
   EnsembleRank + MLM-14                   0.34             0.49            1.33           2.56        0.38          6.27           16.21          3.161
   Supervised Retrieval
   mDPR(AUG) with MixCSE + MLM-14         20.94            7.18           15.27         23.16         10.25         19.23           10.53         15.223
   mDPR(AUG) + MLM-14                     24.99           15.19           20.33         22.31         10.68         18.82           11.97         17.754
   mDPR + MLM-14                          51.66           31.96           38.68         40.89         25.35         39.87           37.26         37.951
   mDPR + MLM-14(XORQA & AUG-QA)          49.41           32.90           37.95         40.97         24.22         39.29           35.76         37.213
   mDPR + MLM-14(XORQA & AUG-QAP)         48.79           33.73           38.33         39.87         25.26         39.11           37.94         37.577
   mDPR + MLM-16                          49.92           31.16           37.20         39.92         24.63         38.78           34.30         36.558
   mDPR + MLM-16(XORQA & AUG-QA)          49.45           31.59           38.33         40.44         23.83         38.67           35.92         36.889
   mDPR + MLM-16(XORQA & AUG-QAP)         48.21           34.20           38.78         40.76         24.81         39.49           34.37         37.231

            Table 4: Evaluation results on XOR-TyDi QA test data with F1 and macro-average F1 scores.

                                                                                 MKQA                                                  Surprise
 Models                             ar    en       es        fi      ja         km ko        ms      ru       sv     tr     zh-cn     ta     tl       Avg.
 mDPR + mGEN (baseline1)          9.52 36.34 27.23 22.70 15.89 6.00 7.68 25.11 14.60 26.69 21.66 13.78 0.00 12.78 17.141
 Unsupervised Retrieval
 OracleBM25 + MLM-14              2.80 10.81 3.70 3.29 5.89 1.53 1.51 5.49                          1.85    7.42 2.94       1.81     0.00   8.23      4.090
 EnsembleRank + MLM-14            6.43 31.66 20.02 17.38 10.68 6.24 4.38 21.03                      6.27    21.09 17.13     7.22     0.00   8.39     12.709
 Supervised Retrieval
 mDPR(AUG) with MixCSE + MLM-14   4.71   28.06    12.78      8.22    7.92    5.44   2.74    12.90    4.65   13.86    8.38    3.99    0.00    6.72     8.599
 mDPR(AUG) + MLM-14               5.64   29.23    17.27     15.51    7.81    5.83   3.38    16.57    6.80   17.21   13.10    4.53    0.00    8.09    10.785
 mDPR + MLM-14                    8.73   35.32    25.54     20.42   14.27    6.06   6.78    24.10   12.01   25.97   20.27   13.95    0.00   11.14    16.040
 mDPR + MLM-14(XORQA & AUG-QA)    8.46   35.12    24.74     19.50   14.38    5.62   7.22    23.24   11.46   24.49   19.67   15.79    0.86   12.18    15.909
 mDPR + MLM-14(XORQA & AUG-QAP)   8.48   34.73    25.46     20.09   14.61    5.00   7.42    24.16   12.04   25.61   19.62   15.60    0.00   12.41    16.089
 mDPR + MLM-16                    8.15   34.14    24.85     19.38   13.73    5.93   6.51    22.21   11.46   24.91   18.82   13.62    0.00   12.59    15.451
 mDPR + MLM-16(XORQA & AUG-QA)    8.21   34.06    25.65     20.14   14.22    5.80   6.70    24.40   11.82   25.71   19.92   15.42    0.40   12.36    16.057
 mDPR + MLM-16(XORQA & AUG-QAP)   8.08   33.89    24.94     20.50   14.11    5.15   7.15    22.95   12.95   24.93   19.68   15.27    0.14   13.07    15.915

Table 5: Evaluation results on MKQA test dataset and two surprise languages with F1 and macro-average F1
scores.

Models                              Avg.
mDPR + mGEN (baseline 1)            27.55
Unsupervised Retrieval
OracleBM25 + MLM-14                  2.75
EnsembleRank + MLM-14                7.94
Supervised Retrieval
mDPR(AUG) with MixCSE + MLM-14      11.91
mDPR(AUG) + MLM-14                  14.27
mDPR + MLM-14                       27.00
mDPR + MLM-14(XORQA & AUG-QA)       26.56
mDPR + MLM-14(XORQA & AUG-QAP)      26.83
mDPR + MLM-16                       26.00
mDPR + MLM-16(XORQA & AUG-QA)       26.47
mDPR + MLM-16(XORQA & AUG-QAP)      26.57

Table 6: Results of macro-average F1 over the two QA datasets (XOR-TyDi QA and MKQA) and the two surprise languages.
mDPR variants results As shown in Tables 4 and 5, we can see that the mDPR variants we trained are considerably worse than the baseline. We think this is mainly caused by the limited batch size used (16), which is a constraint due to our infrastructure. The number of samples in a batch is critical for contrastive training, as larger batches provide a stronger signal due to a higher number of negatives. For this reason, we think that the mDPR variants have not been thoroughly investigated and might still prove beneficial when trained with larger batches.

6 Related Work

Passage Retrieval and Answer Generation To improve information accessibility, open-retrieval question answering systems are attracting much attention in NLP applications (Chen et al., 2017; Karpukhin et al., 2020). Rajpurkar et al. (2016) were one of the early works to present a benchmark that requires systems to understand a passage to produce an answer to a given question. Kwiatkowski et al. (2019) presented a more challenging and realistic dataset with questions collected from a search engine. To tackle these complex and knowledge-demanding QA tasks, Lewis et al. (2020) proposed to first retrieve documents related to a given question and use them as additional aids to predict an answer. In particular,
they explored a general-purpose fine-tuning recipe for retrieval-augmented generation models, which combine pretrained parametric and non-parametric memory for language generation. Izacard and Grave (2021) solved the problem in two steps, first retrieving support passages before processing them with a seq2seq model, and Sun et al. (2021) further extended this to the cross-lingual conversational domain. Some works explored a translate-then-answer approach, in which texts are translated into English, making the task monolingual (Ture and Boschee, 2016; Asai et al., 2021a). While this approach is conceptually simple, it is known to cause the error propagation problem, in which errors of the translation get amplified in the answer generation stage (Zhu et al., 2019). To mitigate this problem, Asai et al. (2021b) proposed to extend Lewis et al. (2020) by using multilingual models for both the passage retrieval (Devlin et al., 2019) and the answer generation (Xue et al., 2021).

Data Augmentation Data augmentation is a common approach to reduce the data sparsity for deep learning models in NLP (Feng et al., 2021). For Question Answering (QA), data augmentation has been used to generate paraphrases via back-translation (Longpre et al., 2019), to replace parts of the input text with translations (Singh et al., 2019), and to generate novel questions or answers (Riabi et al., 2021; Shakeri et al., 2021; Dugan et al., 2022). In the cross-lingual setting, available data have been translated into different languages (Singh et al., 2019; Kumar et al., 2019; Riabi et al., 2021; Shakeri et al., 2021) and language models have been used to train question and answer generation models (Kumar et al., 2019; Chi et al., 2020; Riabi et al., 2021; Shakeri et al., 2021).
Our approach differs from previous work on the cross-lingual Question Answering task in that it only requires English passages to augment the training data, as answers are generated automatically by the trained model of Dugan et al. (2022). In addition, our filtering heuristics remove incorrectly generated question-answer pairs, which allows us to keep only question-answer pairs with answers that are more likely to be translated correctly, thus limiting the problem of error propagation.

7 Reproducibility

To ensure full reproducibility of our results and to further fuel research on COQA systems, we release the models in the Huggingface repository as publicly available multilingual pretrained language models specialized in 14 and 16 languages.¹¹ We also release our code and data, which make our approach completely transparent and fully reproducible. All resources developed as part of this work are publicly available at: https://github.com/umanlp/ZusammenQA.

¹¹MLM-14: https://huggingface.co/umanlp/mt5-mlm-wiki14; MLM-16: https://huggingface.co/umanlp/mt5-mlm-16

8 Conclusion

We introduced a framework for a cross-lingual open-retrieval question-answering system, using data augmentation with specialized models in a constrained setup. Given a question, we first retrieved the top relevant documents and then generated the answer with the specialized models (i.e., MLM-ing on Wikipedia passages) along with the augmented data variants. We demonstrated the effectiveness of data augmentation techniques with language- and domain-specialized additional training, especially for resource-lean languages. However, challenges remain, especially in training the retrieval model with limited computational resources. Our future efforts will focus on more efficient approaches to both multilingual passage retrieval and multilingual answer generation (Abdaoui et al., 2020), together with the investigation of different data augmentation techniques (Zhu et al., 2019). We hope that our generated QA language resources and the released models can catalyze research on resource-lean languages for COQA systems.

References
Akari Asai, Xinyan Yu, Jungo Kasai, and Hanna Hajishirzi. 2021b. One question answering model for many languages with cross-lingual dense passage retrieval. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 7547–7560.

Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In International Conference on Machine Learning, pages 642–652. PMLR.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.

Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. 2020. Cross-lingual natural language generation via pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7570–7577.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Jimmy Lin. 2021. MS MARCO: Benchmarking ranking models in the large-data regime. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1566–1576.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Liam Dugan, Eleni Miltsakaki, Shriyash Upadhyay, Etan Ginsberg, Hannah Gonzalez, Dayheon Choi, Chuning Yuan, and Chris Callison-Burch. 2022. A feasibility study of answer-agnostic question generation for education. arXiv preprint arXiv:2203.08685.

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding.

Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 968–988, Online. Association for Computational Linguistics.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Goran Glavaš, Mladen Karan, and Ivan Vulić. 2020. XHate-999: Analyzing and detecting abusive language across domains and languages. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6350–6365, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Chia-Chien Hung, Anne Lauscher, Ivan Vulić, Simone Paolo Ponzetto, and Goran Glavaš. 2022. Multi2WOZ: A robust multilingual dataset and conversational pretraining for task-oriented dialog. Accepted for publication in the North American Chapter of the Association for Computational Linguistics: NAACL 2022.

Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics.

K. Sparck Jones, S. Walker, and S.E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments: Part 2. Information Processing & Management, 36(6):809–840.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.

Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, and Preethi Jyothi. 2019. Cross-lingual training for automatic question generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4863–4872, Florence, Italy. Association for Computational Linguistics.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics.

Robert Litschko, Ivan Vulić, Simone Paolo Ponzetto, and Goran Glavaš. 2022. On cross-lingual retrieval with multilingual text encoders. Information Retrieval Journal, 25(2):149–183.

Ekaterina Loginova, Stalin Varanasi, and Günter Neumann. 2021. Towards end-to-end multilingual question answering. Information Systems Frontiers, 23(1):227–241.

Shayne Longpre, Yi Lu, and Joachim Daiber. 2020. MKQA: A linguistically diverse benchmark for multilingual open domain question answering. CoRR, abs/2007.15207.

Shayne Longpre, Yi Lu, Zhucheng Tu, and Chris DuBois. 2019. An exploration of data augmentation and sampling techniques for domain-agnostic question answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 220–227, Hong Kong, China. Association for Computational Linguistics.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5835–5847. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.

Arij Riabi, Thomas Scialom, Rachel Keraron, Benoît Sagot, Djamé Seddah, and Jacopo Staiano. 2021. Synthetic data augmentation for zero-shot cross-lingual question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7016–7030, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.
Siamak Shakeri, Noah Constant, Mihir Kale, and Linting Xue. 2021. Towards zero-shot multilingual synthetic question and answer generation for cross-lingual reading comprehension. In Proceedings of the 14th International Conference on Natural Language Generation, pages 35–45, Aberdeen, Scotland, UK. Association for Computational Linguistics.

Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. PMLR.

Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. XLDA: cross-lingual data augmentation for natural language inference and question answering. CoRR, abs/1905.11471.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and permuted pre-training for language understanding. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

K. Sparck Jones, S. Walker, and S.E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments: Part 1. Information Processing & Management, 36(6):779–808.

Weiwei Sun, Chuan Meng, Qi Meng, Zhaochun Ren, Pengjie Ren, Zhumin Chen, and Maarten de Rijke. 2021. Conversations powered by cross-lingual knowledge. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1442–1451.

Ferhan Ture and Elizabeth Boschee. 2016. Learning to translate for multilingual question answering. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 573–584, Austin, Texas. Association for Computational Linguistics.

Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022. SNCSE: contrastive learning for unsupervised sentence embedding with soft negative samples. CoRR, abs/2201.05979.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.

Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397–6407, Online. Association for Computational Linguistics.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.

Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–94, Online. Association for Computational Linguistics.

Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054–3064, Hong Kong, China. Association for Computational Linguistics.

Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 11th Workshop on Building and Using Comparable Corpora, pages 39–42.
A    Data Augmentation Example
Figure 2 shows an example of a Wikipedia passage about An American in Paris. The passage highlighted in orange is the set of sentences whose total length does not exceed 512 tokens; it is the first of three sub-passages used for generating question-answer pairs. The generated pairs are shown at the bottom of the figure. Questions and answers highlighted in red are those that satisfy the filtering heuristics detailed in §2.2. These are then translated into other languages.
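For concreteness, the following is a minimal sketch of the greedy sentence-packing step that produces such sub-passages. The sentence splitting and the whitespace token counter in the usage example are simplifications; in practice the model's subword tokenizer would supply the counts.

```python
from typing import Callable, List

def split_into_subpassages(sentences: List[str],
                           count_tokens: Callable[[str], int],
                           max_tokens: int = 512) -> List[str]:
    """Greedily pack consecutive sentences into sub-passages whose
    total token count stays within max_tokens."""
    subpassages: List[str] = []
    current: List[str] = []
    current_len = 0
    for sentence in sentences:
        n = count_tokens(sentence)
        # Start a new sub-passage once the budget would be exceeded.
        if current and current_len + n > max_tokens:
            subpassages.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n
    if current:
        subpassages.append(" ".join(current))
    return subpassages

# Toy usage with a naive whitespace token counter:
sentences = [
    "An American in Paris is a jazz-influenced orchestral piece by "
    "American composer George Gershwin written in 1928.",
    "It was inspired by the time that Gershwin had spent in Paris.",
]
chunks = split_into_subpassages(sentences, lambda s: len(s.split()))
```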

Title: An American in Paris

URL: https://en.wikipedia.org/wiki?curid=309

Text: An American in Paris is a jazz-influenced orchestral piece by American composer George Gershwin written
in 1928. It was inspired by the time that Gershwin had spent in Paris and evokes the sights and energy of the French capital in
the 1920s. Gershwin composed “An American in Paris” on commission from conductor Walter Damrosch. He scored the
piece for the standard instruments of the symphony orchestra plus celesta, saxophones, and automobile horns. He brought
back some Parisian taxi horns for the New York premiere of the composition, which took place on December 13, 1928 in
Carnegie Hall, with Damrosch conducting the New York Philharmonic. He completed the orchestration on November 18, less
than four weeks before the work’s premiere. He collaborated on the original program notes with critic and composer Deems
Taylor. Gershwin was attracted by Maurice Ravel’s unusual chords, and Gershwin went on his first trip to Paris in 1926 ready
to study with Ravel. After his initial student audition with Ravel turned into a sharing of musical theories, Ravel said he could
not teach him, saying, "Why be a second-rate Ravel when you can be a first-rate Gershwin?" While the studies were cut short,
that 1926 trip resulted in a piece entitled “Very Parisienne”, the initial version of “An American in Paris”, written as a ‘thank
you note’ to Gershwin’s hosts, Robert and Mabel Shirmer. Gershwin called it “a rhapsodic ballet”; it is written freely and in a
much more modern idiom than his prior works. Gershwin strongly encouraged Ravel to come to the United States for a tour.
To this end, upon his return to New York, Gershwin joined the efforts of Ravel’s friend Robert Schmitz, a pianist Ravel had
met during the war, to urge Ravel to tour the U.S. Schmitz was the head of Pro Musica, promoting Franco-American musical
relations, and was able to offer Ravel a $10,000 fee for the tour, an enticement Gershwin knew would be important to Ravel. ...

Question: When was an American in Paris written?
Answer: 1928
Type: Number

Question: When did George Gershwin write an American in Paris?
Answer: the 1920s
Type: Contains number

Question: Who was the conductor of “An American in Paris”?
Answer: Walter Damrosch
Type: Who

Question: What was the name of the instrument that Gershwin scored for?
Answer: automobile horns

Question: What was the name of the orchestral piece Gershwin composed in 1928?
Answer: New York Philharmonic

Question: When did Gershwin complete the orchestration of “An American in Paris”?
Answer: November 18
Type: Date

Question: Who did Gershwin collaborate on the original program notes with?
Answer: Deems Taylor
Type: Who

Figure 2: Example English question-answer pairs (on the bottom) generated from the highlighted text (in yellow)
in the passage. The highlighted question-answer pairs (in red) are those that were kept after filtering.
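The answer-type labels in Figure 2 (Number, Contains number, Who, Date) suggest a simple rule-based tagging scheme. The snippet below is a purely hypothetical reconstruction for illustration; it reproduces the labels shown above but is not the exact heuristic of §2.2.

```python
import re

MONTHS = (r"\b(january|february|march|april|may|june|july|august|"
          r"september|october|november|december)\b")

def answer_type(answer: str) -> str:
    """Hypothetical tagger reproducing the answer-type labels in Figure 2."""
    if re.fullmatch(r"\d{3,4}", answer):      # bare year, e.g. "1928"
        return "Number"
    if re.search(MONTHS, answer.lower()):     # e.g. "November 18"
        return "Date"
    if re.search(r"\d", answer):              # e.g. "the 1920s"
        return "Contains number"
    if answer[:1].isupper():                  # proper name, e.g. "Deems Taylor"
        return "Who"
    return "Other"

assert answer_type("1928") == "Number"
assert answer_type("November 18") == "Date"
assert answer_type("the 1920s") == "Contains number"
assert answer_type("Walter Damrosch") == "Who"
```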
B    MixCSE Loss
The MixCSE loss described in §3.1 is given by:

$$\mathcal{L}_{\mathrm{mixcse}} = -\log \frac{\exp(\cos(e_{q_i}, e_{p_i^+})/\tau)}{\exp(\cos(e_{q_i}, e_{p_i^+})/\tau) + \sum_{j=1}^{n} \exp(\cos(e_{q_i}, e_{p_{i,j}^-})/\tau) + \exp(\cos(e_{q_i}, \mathrm{SG}(\tilde{e}_i))/\tau)}$$

where $\tau$ is a fixed temperature and $\mathrm{SG}$ is the stop-gradient operator, which prevents backpropagation from flowing into the mixed negative ($\tilde{e}_i$).
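For illustration, here is a minimal PyTorch sketch of this loss for a single example. The embedding computation is omitted, the stop-gradient is realized with .detach(), and the temperature default is a placeholder, not the value used in the paper.

```python
import torch
import torch.nn.functional as F

def mixcse_loss(q: torch.Tensor,      # question embedding e_{q_i}, shape (d,)
                pos: torch.Tensor,    # positive passage e_{p_i^+}, shape (d,)
                negs: torch.Tensor,   # negative passages e_{p_{i,j}^-}, shape (n, d)
                mixed: torch.Tensor,  # mixed negative \tilde{e}_i, shape (d,)
                tau: float = 0.05) -> torch.Tensor:
    """Single-example MixCSE loss; SG(.) is implemented via .detach()."""
    sim_pos = F.cosine_similarity(q, pos, dim=0) / tau                 # scalar
    sim_negs = F.cosine_similarity(q.unsqueeze(0), negs, dim=1) / tau  # (n,)
    sim_mixed = F.cosine_similarity(q, mixed.detach(), dim=0) / tau    # stop-gradient
    logits = torch.cat([sim_pos.view(1), sim_negs, sim_mixed.view(1)])
    # Negative log of the softmax probability of the positive,
    # i.e. exactly the fraction inside the -log above.
    return -F.log_softmax(logits, dim=0)[0]
```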