Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 3698–3703,
Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.

Linguistically Informed Hindi-English Neural Machine Translation

Vikrant Goyal, Pruthwik Mishra, Dipti Misra Sharma
Language Technologies Research Center (LTRC), IIIT Hyderabad, India
{vikrant.goyal, pruthwik.mishra}@research.iiit.ac.in, dipti@iiit.ac.in

Abstract
Hindi-English Machine Translation is a challenging problem, owing to multiple factors including the morphological complexity and relatively free word order of Hindi, in addition to the lack of sufficient parallel training data. Neural Machine Translation (NMT) is a rapidly advancing MT paradigm and has shown promising results for many language pairs, especially in large training data scenarios. To overcome the data sparsity caused by the lack of large parallel corpora for Hindi-English, we propose a method to employ additional linguistic knowledge that captures the different phenomena exhibited by Hindi. We generalize the embedding layer of the state-of-the-art Transformer model to incorporate linguistic features such as POS tags, lemmas and morphological features in order to improve translation performance. We compare the results obtained by incorporating this knowledge with baseline systems and demonstrate significant performance improvements. Although Transformer NMT models have a strong capacity to learn language constructs, we show that using specific features further helps to improve translation performance.

Keywords: Neural Machine Translation, Transformer, Linguistic Features

1. Introduction
In recent years, Neural Machine Translation (NMT) (Luong et al., 2015; Bahdanau et al., 2014; Johnson et al., 2017; Wu et al., 2017; Vaswani et al., 2017) has become the most prominent approach to Machine Translation (MT) due to its simplicity, generality and effectiveness. In NMT, a single neural network, often consisting of an encoder and a decoder, is used to directly maximize the conditional probability of the target sentence given the source sentence in an end-to-end paradigm. NMT models have been shown to surpass the performance of the previously dominant statistical machine translation (SMT) (Koehn, 2009) on many well-established translation tasks. Unlike SMT, NMT does not rely on sub-modules and explicit linguistic features to craft the translation. Instead, it learns the translation knowledge directly from parallel sentences without resorting to additional linguistic analysis.

Although NMT is a promising approach, it still lacks the ability to model deeper semantic and syntactic aspects of language. In a low-resource machine translation setting, resolving data sparseness and semantic ambiguity can help improve performance. Adding explicit linguistic knowledge may be of great benefit to NMT models, potentially reducing language ambiguity and further alleviating data sparseness. Some recent studies have shown that incorporating linguistic features into the NMT model can improve translation performance (Sennrich and Haddow, 2016; Niehues and Cho, 2017; Li et al., 2018). However, most previous work has demonstrated the effectiveness of linguistic features only with RNN models. It is therefore essential to verify whether the strong learning capability of the current state-of-the-art Transformer models makes explicit linguistic features redundant, or whether they can be easily incorporated to provide further improvements in translation performance.

There is also immense scope for developing translation systems that cater to the specific characteristics of the languages under consideration. Indian languages are no exception; they add certain characteristics which need to be considered carefully for effective translation. English and Hindi are respectively reported to be the 3rd and 4th most widely spoken languages in the world,1 which makes Hindi-English an ideal language pair for translation studies. But Hindi-English MT is a challenging task because it is a low-resource language pair and the two languages belong to different language families. Hindi is a morphologically rich language and exhibits unique characteristics which are significantly different from languages such as English, including a relatively free word order with a tendency towards the Subject-Object-Verb (SOV) construction, a high degree of inflection, and the use of reduplication.

Moreover, when translating between a morphologically rich, free word order language like Hindi and a morphologically less complicated, word order specific language like English, the well-known issues of missing words and data sparsity arise; these affect translation accuracy and lead to more out-of-vocabulary (OOV) words.

1 http://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers

In this paper, we present our efforts towards building a Hindi-to-English Neural Machine Translation system using the state-of-the-art Transformer model, exploiting explicit linguistic features on the source side. We generalize the embedding layer of the encoder in the standard Transformer architecture to support the inclusion of arbitrary features, in addition to the baseline token feature, where the token can either be a word or a subword. We add morphological features, part-of-speech (POS) tags and lemmas as input features to the Hindi-English NMT model.

      2.    Neural Machine Translation
2.1. Encoder-Decoder Framework
Given a bilingual sentence pair (x, y), an NMT model learns its parameters $\theta$ by maximizing the log-likelihood $\log P(y \mid x; \theta)$, where the conditional probability is usually decomposed into the product of the conditional probabilities of the target words:

    $P(y \mid x; \theta) = \prod_{t=1}^{m} P(y_t \mid y_1, y_2, \ldots, y_{t-1}, x; \theta)$

where $m$ is the length of the sentence y.

An encoder-decoder framework (Bahdanau et al., 2014; Luong et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) is usually adopted to model the conditional probability $P(y \mid x; \theta)$. The encoder maps the input sentence x into a set of hidden representations h, and the decoder generates the target token $y_t$ at position t using the previously generated target tokens $y_{<t}$ and the source representations h.
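For concreteness, this factorization corresponds to the standard token-level training objective. The following is a minimal PyTorch-style sketch, not the authors' code, of how the log-likelihood of one target sentence decomposes into a sum of per-token log-probabilities; `decoder_logits` and `target_ids` are hypothetical names for quantities any encoder-decoder model produces.

```python
import torch
import torch.nn.functional as F

def sentence_log_likelihood(decoder_logits: torch.Tensor,
                            target_ids: torch.Tensor) -> torch.Tensor:
    """log P(y|x) = sum_t log P(y_t | y_<t, x).

    decoder_logits: (m, V) unnormalized scores per target position, already
                    conditioned on x and y_<t by the model.
    target_ids:     (m,) gold target token ids y_1 ... y_m.
    """
    log_probs = F.log_softmax(decoder_logits, dim=-1)                # (m, V)
    token_log_probs = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)
    return token_log_probs.sum()                                     # log P(y|x)

# Training maximizes this quantity, i.e. minimizes the negative log-likelihood.
```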
2.2. Transformer
Figure 1: Transformer model architecture from (Vaswani et al., 2017)

The Transformer encoder and decoder are each built by stacking 6 identical layers of multi-head attention with feed-forward networks. In the decoder, however, there are two multi-head attention sub-layers: i) a decoder self-attention and ii) an encoder-decoder attention. The decoder self-attention attends to the previous predictions made step by step, masked by one position. The second multi-head attention performs attention between the final encoder representation and the decoder representation.

To summarize, the Transformer model consists of three different attentions: i) the encoder self-attention, in which each position attends to all positions in the previous layer, including the position itself, ii) the encoder-decoder attention, in which each position of the decoder attends to all positions in the last encoder layer, and iii) the decoder self-attention, in which each position attends to all previous positions, including the current position.

2.3. Adding Input Linguistic Features
Our main innovation over the standard Transformer encoder-decoder architecture is that we represent the encoder input as a combination of features (Alexandrescu and Kirchhoff, 2006).

Let $E \in \mathbb{R}^{m \times K}$ be the word embedding matrix of the standard Transformer encoder with no input features, where $m$ is the word embedding size and $K$ is the vocabulary size of the source language. The $m$-dimensional word embedding $e(x_i)$ of the token $x_i$ (one-hot encoded, i.e. a 1-of-$K$ vector) in the input sequence $x = (x_1, x_2, \ldots, x_n)$ can then be written as:

    $e(x_i) = E x_i$    (4)

We generalize this embedding layer to an arbitrary number of features $|F|$:

    $\bar{e}(x_i) = \mathrm{merge}_{j=1}^{|F|}\,(E_j x_{ij})$    (5)

where $E_j \in \mathbb{R}^{m_j \times K_j}$ are the feature embedding matrices, with $m_j$ the embedding size and $K_j$ the vocabulary size of the $j$-th feature. In other words, we look up a separate embedding for each feature, and these embeddings are then combined by some merge function. In this work, we use concatenation of the separate feature embeddings (each with a possibly different embedding size) as the merge operation, similar to what was done by (Sennrich and Haddow, 2016) in an RNN-based attentional NMT system. The length of the final merged vector matches the total embedding size, that is $\sum_{j=1}^{|F|} m_j = m$, and the rest of the model remains unchanged.

3. Linguistic Input Features
Our generalized model described in the previous section supports an arbitrary number of input features, where the feature embeddings can also be merged with some merge function other than concatenation. In this paper, we focus on a number of well-known linguistic features. The main empirical question that we address is whether providing linguistic input features to the state-of-the-art Transformer model improves the translation quality of Hindi-English neural machine translation systems, or whether this information emerges from training encoder-decoder models on raw text, making its inclusion via explicit features redundant. All linguistic features are predicted automatically; we use the Hindi Shallow Parser2 to annotate the raw Hindi text with the linguistic features. We discuss the individual features in more detail below.

2 http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php

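To make Equation (5) and the concatenation merge concrete, here is a minimal PyTorch sketch of an encoder embedding layer that looks up a separate embedding for each input feature and concatenates the results. This is only an illustration, not the authors' OpenNMT-py implementation; the feature names and sizes in the comments are placeholders.

```python
import torch
import torch.nn as nn

class FeatureConcatEmbedding(nn.Module):
    """Generalized encoder embedding: one lookup table per feature,
    merged by concatenation (Equation 5)."""

    def __init__(self, vocab_sizes: dict, emb_sizes: dict):
        # vocab_sizes / emb_sizes: dicts keyed by feature name, e.g.
        # {"token": 32000, "pos": 40, "lemma": 20000, "morph": 500}
        super().__init__()
        assert vocab_sizes.keys() == emb_sizes.keys()
        self.features = list(vocab_sizes)
        self.tables = nn.ModuleDict(
            {f: nn.Embedding(vocab_sizes[f], emb_sizes[f]) for f in self.features}
        )

    def forward(self, feature_ids: dict) -> torch.Tensor:
        # feature_ids: dict of LongTensors, each of shape (batch, seq_len),
        # holding the id of that feature at every source position.
        parts = [self.tables[f](feature_ids[f]) for f in self.features]
        # Concatenation: the total size sum_j m_j plays the role of m above.
        return torch.cat(parts, dim=-1)
```

In the paper's setting, the per-feature embedding sizes are chosen so that they sum to the 512-dimensional input expected by the Transformer encoder (see Section 4.3.).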
3.1. Lemma
In a plain NMT model, each word form is treated as a token in itself. This means that the translation model treats, say, the Hindi word pustak (book) completely independently of the word pustakein (books). Any instance of pustak in the training data does not add any knowledge to the translation of pustakein. In the extreme case, while the translation of pustak may be known to the model, the word pustakein may be unknown and the system will not be able to translate it. While this problem does not show up as strongly in English, due to its very limited morphological inflection, it does constitute a significant problem for morphologically rich languages such as Hindi, Telugu, Tamil, etc. Lemmatization can reduce data sparseness and allow inflectional variants of the same word to explicitly share a representation in the model. In principle, neural models can learn that inflectional variants are semantically related and represent them as similar points in the continuous vector space (Mikolov et al., 2013). However, while this has been demonstrated for high-frequency words, we expect that a lemmatized representation increases data efficiency. We verify the use of lemmas both in a word-based model and in a subword model.

3.2. POS Tags
Linguistic resources such as part-of-speech (POS) tags have been used extensively in statistical machine translation (SMT) frameworks and have yielded better performance. POS tags provide linguistic knowledge and the syntactic role of each token in context, which helps in information extraction and in reducing data ambiguity. However, the use of such linguistic annotations has not been explored much in neural machine translation (NMT) systems.

3.3. Morphological Features
Machine Translation suffers from the data sparseness problem when translating to or from morphologically rich and complex languages such as Hindi. Morphological analysis may therefore help to handle data sparseness and improve translation quality. Different word types in Hindi have different sets of morphological features. For example, verbs have person, number, gender, tense and aspect, while nouns have case, number and gender. For some words the features may also be underspecified. We therefore treat the concatenation of all morphological features of a word as a single string and use this string as a separate feature value for each word, along with the other linguistic features.

3.4. Using Word-level Features in the Subword Model
Neural Machine Translation relies on first mapping each word into a vector space, and traditionally we have a word vector corresponding to each word in a fixed vocabulary. To address the problem of data scarcity and the difficulty of learning high-quality representations for rare words, (Sennrich et al., 2015) proposed to learn subword units and perform translation at the subword level. Subword segmentation of words is achieved using Byte Pair Encoding (BPE), which has been shown to work better than UNK-replacement techniques. In this work, we also experiment with subword models. With the help of BPE, the vocabulary size is reduced drastically, thereby decreasing the OOV (out-of-vocabulary) rate, and we no longer need to prune the vocabularies. After translation, we perform an extra post-processing step to convert the target-language subword units back to normal words. We find this approach to be very helpful in handling rare word representations when translating from Hindi to English.

We note, however, that in BPE segmentation some subword units are potentially ambiguous and can either be a separate word or a subword segment of a larger word. Moreover, the text is represented as a sequence of subword units with no explicit word boundaries. Explicit word boundaries are potentially helpful for learning which symbols to attend to and when to forget information in the Transformer layers. We use an annotation of subword structure similar to the popular IOB format for chunking and named entity recognition, marking whether a subword unit in the text forms the beginning (B), inside (I), or end (E) of a word. A separate tag (O) is used if a subword unit corresponds to a full word. To incorporate the word-level linguistic features in a subword model, we copy the word's feature values to all of its subword units.
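As an illustration of this feature copying and of the pipe-separated input format described in Section 4.2., the following hypothetical helper (not the authors' preprocessing script) attaches B/I/E/O word-boundary tags to the BPE pieces of one word, copies the word-level features to every piece, and joins everything with "|". The field order, the example BPE split, and the POS/morph values shown are assumptions for illustration only; the lemma example follows Section 3.1.

```python
def annotate_subwords(subword_pieces, pos, lemma, morph):
    """Attach word-boundary tags (B/I/E, or O for an unsplit word) to the
    BPE pieces of one word, copy the word-level features to every piece,
    and serialize each piece as piece|TAG|POS|LEMMA|MORPH.
    """
    n = len(subword_pieces)
    if n == 1:
        tags = ["O"]                         # subword unit == full word
    else:
        tags = ["B"] + ["I"] * (n - 2) + ["E"]
    return ["|".join([piece, tag, pos, lemma, morph])
            for piece, tag in zip(subword_pieces, tags)]

# Hypothetical example (feature values as a shallow parser might produce):
# annotate_subwords(["pusta", "kein"], "NN", "pustak", "n.f.pl")
# -> ["pusta|B|NN|pustak|n.f.pl", "kein|E|NN|pustak|n.f.pl"]
```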
4. Experimental Settings
4.1. Dataset
In our experiments, we use the IIT-Bombay Hindi-English parallel corpus (Kunchukuttan et al., 2017). The training corpus consists of data from mixed domains. There are roughly 1.5M sentence pairs in the training data from diverse sources, while the development and test sets are from the news domain.

Table 1: Statistics of our processed parallel data.
Dataset       Sentences    Tokens
IITB Train    1,528,631    21.5M / 20.3M
IITB Test     2,507        62.3k / 55.8k
IITB Dev      520          9.7k / 10.3k

4.2. Data Processing
We use the Moses (Koehn et al., 2007) toolkit for tokenization and cleaning of the English side of the data. The Hindi side of the data is first normalized with the Indic NLP library3 and then tokenized with the same library. As a preprocessing step, we remove all sentences of length greater than 80 from our training corpus and lowercase the English side of the data. We use BPE segmentation with 32k merge operations. We use the Hindi Shallow Parser4 to extract all the linguistic features (i.e. POS tags, morph features, lemmas) and annotate the Hindi text with them. We also remove all punctuation from both the Hindi and English data to avoid possible errors thrown by the shallow parser. All the linguistic features are joined with the original word or subword using the pipe ("|") symbol.

3 https://anoopkunchukuttan.github.io/indic_nlp_library/
4 http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php

4.3. Training Details
For all of our experiments, we use the OpenNMT-py (Klein et al., 2018) toolkit. We use a Transformer model with 6 layers in both the encoder and the decoder, each with 512 hidden units. The word embedding size is set to 512 with 8 attention heads. Training is done in batches of at most 4096 tokens with dropout set to 0.3. We use the Adam optimizer to optimize the model parameters. We validate the model every 5,000 steps via BLEU (Papineni et al., 2002) and perplexity on the development set.

Table 2: Size of the embedding layer of the linguistic features, in a system that includes all features and in contrastive experiments that add a single feature over the baseline.
                       Embedding size
Features               all      single
subword tags           6        5
POS tags               10       10
Morph features         20       20
Lemma                  100      150
Word or subword        *        *

The embedding layer sizes of the linguistic features vary and are set so as to bring the total embedding layer size to 512, to ensure that the performance improvements are not simply due to an increase in the number of model parameters. The features have different vocabulary sizes, and after performing various experiments we found the optimum embedding size for each of the features listed in Table 2, which is essentially a hyperparameter in our setting.
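For reference, the architecture hyperparameters above correspond roughly to the following PyTorch instantiation. This is only an illustrative stand-in for the described configuration: the authors train with OpenNMT-py, and values not stated in the paper, such as the feed-forward size and the Adam schedule, are left at library defaults here.

```python
import torch.nn as nn

# Illustrative equivalent of the described setup: 6 encoder and 6 decoder
# layers, model/embedding size 512, 8 attention heads, dropout 0.3.
# dim_feedforward is not reported in the paper, so the PyTorch default is kept.
model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dropout=0.3,
)
```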

5. Results
We report the results of using linguistic features both in a normal word-based model and in a subword model. We also perform contrastive experiments in which only a single feature is added to the baseline. Table 3 shows our main results for the Hindi-English word-based model. All the linguistic features added in isolation proved to be effective in improving the translation performance of the word-based model. But the combination of all linguistic features together gave us the lowest improvement of 0.19 BLEU. The experiments demonstrate that the gains from the different features are not fully cumulative and that the information encoded in the different features overlaps.

Table 3: Contrastive experiments for a word-based Hindi-English Transformer model with individual linguistic features.
System            BLEU
Word baseline     17.13
POS tags          17.51 (+0.38)
Lemma             17.65 (+0.52)
Morph features    17.44 (+0.31)
All features      17.32 (+0.19)

Table 4 shows our results for the subword Hindi-English model. Except for the lemma feature, every feature used independently in the subword model shows significant BLEU improvements. The reason the lemma feature does not help to improve the translation performance may lie in the nature of the subword model itself: translation at the subword level inherently captures linguistic information at the root level. The best performance is achieved when using IOB tags, POS tags and morph features together in a subword model.

Table 4: Contrastive experiments for a subword Hindi-English Transformer model with individual linguistic features.
System                              BLEU
Subword baseline                    18.47
IOB tags                            18.64 (+0.17)
POS tags                            19.11 (+0.64)
Lemma                               17.99 (-0.48)
Morph features                      19.02 (+0.55)
IOB, POS tags and Morph features    19.21 (+0.74)
All features                        18.34 (-0.13)

6. Related Work
Factored translation models are often used in phrase-based SMT (Koehn and Hoang, 2007) as a means to incorporate extra linguistic information. However, neural MT can provide a much more flexible mechanism for adding such information. Because phrase-based models cannot easily generalize to new feature combinations, the individual models either treat each feature combination as an atomic unit, resulting in data sparsity, or assume independence between features, for instance by having separate language models for words and for POS tags. In contrast, we exploit the strong generalization ability of neural networks, and expect that even new feature combinations, e.g. a word that appears in a novel syntactic function, are handled gracefully. Linguistic features have also been used in neural MT, but previous work has demonstrated their effectiveness only with RNN models (Sennrich and Haddow, 2016; Niehues and Cho, 2017; Li et al., 2018). It is therefore essential to verify whether the strong learning capability of the current state-of-the-art Transformer models makes explicit linguistic features redundant, or whether they can be easily incorporated to provide further improvements in performance. Moreover, the effectiveness of linguistic features in building NMT models for a low-resource language pair like Hindi-English, where Hindi is a morphologically rich language, has not been shown before.

7. Conclusion and Future Work
In this paper, we investigate whether linguistic input features are helpful in improving the translation performance of the state-of-the-art Transformer-based NMT model, and our empirical results show that this is the case. We show our results on Hindi-English, a low-resource language pair where Hindi is a morphologically rich and free word order language, whereas English, at the other end, is a morphologically less complicated and word order specific language. We empirically test the inclusion of various linguistic features, including lemmas, part-of-speech tags, morphological features and IOB tags for a (sub)word model. Our experiments show that linguistic input features improve both the word-based and the subword baselines. The subword NMT model with IOB tags, POS tags and morph features outperforms the simple word-based NMT baseline by more than 2 BLEU points.

In the future, we expect several developments concerning the usefulness of linguistic input features in neural machine translation. The learning capability of neural architectures is likely to increase, decreasing the benefit provided by the features we tested. In future work, to overcome the data sparsity issue, we will therefore work on better methods to incorporate linguistic information into such neural architectures.

8. References
Alexandrescu, A. and Kirchhoff, K. (2006). Factored neural language models. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 1–4. Association for Computational Linguistics.
Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. (2017). Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1243–1252. JMLR.org.
Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., et al. (2017). Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351.
Klein, G., Kim, Y., Deng, Y., Nguyen, V., Senellart, J., and Rush, A. M. (2018). OpenNMT: Neural machine translation toolkit. arXiv preprint arXiv:1805.11462.
Koehn, P. and Hoang, H. (2007). Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868–876.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., et al. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Companion Volume: Proceedings of the Demo and Poster Sessions, pages 177–180.
Koehn, P. (2009). Statistical machine translation. Cambridge University Press.
Kunchukuttan, A., Mehta, P., and Bhattacharyya, P. (2017). The IIT Bombay English-Hindi parallel corpus. arXiv preprint arXiv:1710.02855.
Li, Q., Wong, D. F., Chao, L. S., Zhu, M., Xiao, T., Zhu, J., and Zhang, M. (2018). Linguistic knowledge-aware neural machine translation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(12):2341–2354.
Luong, M.-T., Pham, H., and Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
Mikolov, T., Yih, W.-t., and Zweig, G. (2013). Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751.
Niehues, J. and Cho, E. (2017). Exploiting linguistic resources for neural machine translation using multi-task learning. arXiv preprint arXiv:1708.00993.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.
Sennrich, R. and Haddow, B. (2016). Linguistic input features improve neural machine translation. arXiv preprint arXiv:1606.02892.
Sennrich, R., Haddow, B., and Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
Wu, L., Xia, Y., Zhao, L., Tian, F., Qin, T., Lai, J., and Liu, T.-Y. (2017). Adversarial neural machine translation. arXiv preprint arXiv:1704.06933.