ACL 2020

 Workshop on Automatic Simultaneous Translation
Challenges, Recent Advances, and Future Directions

 Proceedings of the Workshop

July 10, 2020
© 2020 The Association for Computational Linguistics

Order copies of this and other ACL proceedings from:

 Association for Computational Linguistics (ACL)
 209 N. Eighth Street
 Stroudsburg, PA 18360
 USA
 Tel: +1-570-476-8006
 Fax: +1-570-476-0860
 acl@aclweb.org

ISBN 978-1-952148-23-1 (Volume 1)

Introduction

Welcome to the First Workshop on Automatic Simultaneous Translation (AutoSimTrans). Simultaneous
translation, which performs translation concurrently with the source speech, is widely useful in many
scenarios such as international conferences, negotiations, press releases, legal proceedings, and medicine.
It combines the AI technologies of machine translation (MT), automatic speech recognition (ASR), and
text-to-speech synthesis (TTS), and is becoming a cutting-edge research field.

As an emerging and interdisciplinary field, simultaneous translation faces many great challenges and is considered one of the holy grails of AI. This workshop brings together researchers and practitioners in machine translation, speech processing, and human interpretation to discuss recent advances and open challenges in simultaneous translation.

We organized a Chinese-English simultaneous translation shared task and released a dataset for open research, covering speeches from a wide range of domains, such as IT, economics, culture, biology, and the arts.

We also have two sets of keynote speakers: Hua Wu, Colin Cherry, Jordan Boyd-Graber, and Qun Liu from simultaneous translation research, and Kay-Fan Cheung and Barry Slaughter Olsen from human interpretation research. We hope this workshop will greatly increase the communication and cross-fertilization between the two fields.

We look forward to an exciting workshop.

Hua Wu, Colin Cherry, Liang Huang, Zhongjun He, Mark Liberman, James Cross, Yang Liu

Organizers:
 Hua Wu, Baidu Inc.
 Colin Cherry, Google
 Liang Huang, Oregon State University and Baidu Research
 Zhongjun He, Baidu Inc.
 Mark Liberman, University of Pennsylvania
 James Cross, Facebook
 Yang Liu, Tsinghua University

Program Committee:
 Mingbo Ma, Baidu Research, USA
 Naveen Arivazhagan, Google, USA
 Chung-Cheng Chiu, Google, USA
 Kenneth Church, Baidu Research, USA
 Yang Feng, CAS/ICT, China
 George Foster, Google, Canada
 Alvin Grissom II, Ursinus College, USA
 He He, NYU, USA
 Alina Karakanta, FBK-Trento, Italy
 Wei Li, Google, USA
 Hairong Liu, Baidu Research, USA
 Kaibo Liu, Baidu Research, USA
 Wolfgang Macherey, Google, USA
 Jan Niehues, Maastricht U., Netherlands
 Yusuke Oda, Google, Japan
 Colin Raffel, Google, USA
 Elizabeth Salesky, CMU, USA
 Jiajun Zhang, CAS/IA, China
 Ruiqing Zhang, Baidu Inc. China
 Renjie Zheng, Oregon State Univ., USA

Invited Speakers:

 Hua Wu, Chief Scientist of NLP, Baidu Inc., China
 Colin Cherry, Research Scientist in Google Translate, Google Inc., Montreal, Canada
 Jordan Boyd-Graber, Associate Professor, University of Maryland, USA
 Qun Liu, Chief Scientist of Speech and Language Computing, Huawei Noah’s Ark Lab, China
 Kay-Fan Cheung, Associate Professor, The Hong Kong Polytechnic University,
 Member of the International Association of Conference Interpreters (AIIC), China
 Barry Slaughter Olsen, Professor, the Middlebury Institute of International Studies
 and Conference Interpreter, Member of AIIC, USA

Table of Contents

Dynamic Sentence Boundary Detection for Simultaneous Translation
    Ruiqing Zhang and Chuanqiang Zhang .......................... 1

End-to-End Speech Translation with Adversarial Training
    Xuancai Li, Kehai Chen, Tiejun Zhao and Muyun Yang .......... 10

Robust Neural Machine Translation with ASR Errors
    Haiyang Xue, Yang Feng, Shuhao Gu and Wei Chen .............. 15

Improving Autoregressive NMT with Non-Autoregressive Model
    Long Zhou, Jiajun Zhang and Chengqing Zong .................. 24

Modeling Discourse Structure for Document-level Neural Machine Translation
    Junxuan Chen, Xiang Li, Jiarui Zhang, Chulun Zhou, Jianwei Cui, Bin Wang and Jinsong Su ... 30

BIT's system for the AutoSimTrans 2020
    Minqin Li, Haodong Cheng, Yuanjie Wang, Sijia Zhang, Liting Wu and Yuhang Guo ............ 37

Conference Program

Friday, July 10, 2020

8:50–9:00 Opening Remarks

9:00–11:00 Session 1

9:00–9:30 Invited Talk 1: Colin Cherry

9:30–10:00 Invited Talk 2: Barry Slaughter Olsen

10:00–10:30 Invited Talk 3: Jordan Boyd-Graber

10:30–11:00 Q&A

11:00–14:00 Lunch

17:00–19:00 Session 2

17:00–17:30 Invited Talk 4: Hua Wu

17:30–18:00 Invited Talk 5: Kay-Fan Cheung

18:00–18:30 Invited Talk 6: Qun Liu

18:30–19:00 Q&A

Friday, July 10, 2020 (continued)

19:00–19:30 Break

19:30–21:00 Session 3: Research Paper and System Description

19:30–19:40 Dynamic Sentence Boundary Detection for Simultaneous Translation
 Ruiqing Zhang and Chuanqiang Zhang

19:40–19:50 End-to-End Speech Translation with Adversarial Training
 Xuancai Li, Chen Kehai, Tiejun Zhao and Muyun Yang

19:50–20:00 Robust Neural Machine Translation with ASR Errors
 Haiyang Xue, Yang Feng, Shuhao Gu and Wei Chen

20:00–20:10 Improving Autoregressive NMT with Non-Autoregressive Model
 Long Zhou, Jiajun Zhang and Chengqing Zong

20:10–20:20 Modeling Discourse Structure for Document-level Neural Machine Translation
 Junxuan Chen, Xiang Li, Jiarui Zhang, Chulun Zhou, Jianwei Cui, Bin Wang and
 Jinsong Su

20:20–20:30 BIT’s system for the AutoSimTrans 2020
 Minqin Li, Haodong Cheng, Yuanjie Wang, Sijia Zhang, Liting Wu and Yuhang
 Guo

20:30–21:00 Q&A

21:00–21:10 Closing Remarks

Dynamic Sentence Boundary Detection for Simultaneous Translation

Ruiqing Zhang and Chuanqiang Zhang
Baidu, Inc., Beijing, China
zhangruiqing01@baidu.com, zhangchuanqiang@baidu.com

Abstract

Simultaneous translation is a great challenge in which translation starts before the source sentence is finished. Most studies take transcription as input and focus on balancing translation quality and latency for each sentence. However, most ASR systems cannot provide accurate sentence boundaries in real time, so segmenting the streaming words into sentences before translation is a key problem. In this paper, we propose a novel method for sentence boundary detection that treats it as a multi-class classification task under an end-to-end pre-training framework. Experiments show significant improvements in both translation quality and latency.

1 Introduction

Simultaneous translation aims to translate the speech of a source language into a target language as quickly as possible without interrupting the speaker. Typically, a simultaneous translation system is composed of an automatic speech recognition (ASR) model and a machine translation (MT) model. The ASR model transforms the audio signal into source-language text, and the MT model translates the source text into the target language.

Recent studies on simultaneous translation (Cho and Esipova, 2016; Ma et al., 2019; Arivazhagan et al., 2019) focus on the trade-off between translation quality and latency. They explore a policy that determines when to begin translating, given a stream of transcription as input. However, there is a gap between transcription and ASR output: some ASR models provide no punctuation, or cannot provide accurate punctuation in real time, while transcription is always well formed. See Figure 1 for an illustration. Without sentence boundaries, the state-of-the-art wait-k model takes insufficient text as input and produces an incorrect translation.
src:                  One of two things is going to happen . Either it 's going to ...
reference:            Eines von zwei Dingen wird passieren. Entweder wird ...
wait3:                Eines von zwei Dingen wird passieren. Entweder wird ...

src without boundary: One of two things is going to happen either it 's going to ...
wait3:                Eines von zwei Dingen wird passieren entweder es ist geht dass ...

Figure 1: An English-to-German example translated from a streaming source with and without sentence boundaries. We use the wait-k model (Ma et al., 2019) for illustration, with k = 3. The wait-3 model first performs three READ (wait) actions at the beginning of each sentence, then alternates one READ with one WRITE action in the following steps. Given source input without sentence boundaries (fourth line), the wait-3 model (fifth line) does not take the three READ actions at the beginning of the following sentence; therefore the English phrase "it's going to", which should have been translated as "wird", produces the meaningless translation "es ist geht dass" with the limited context available during wait-3 inference.

Therefore, sentence boundary detection (or sentence segmentation; we use both terms interchangeably in this paper) plays an important role in narrowing the gap between ASR output and transcription. A good segmentation will not only improve translation quality but also reduce latency.

Studies of sentence segmentation fall into one of two bins:

• Strategies that segment from a speech perspective. Fügen et al. (2007) and Bangalore et al. (2012) used prosodic pauses in speech recognition as segmentation boundaries. This method is effective in dialogue scenarios, where there are clear silences during the conversation, but it does not work well for long speech audio such as lectures. According to Venuti (2012), silence-based chunking accounts for only 6.6%, 10%, and 17.1% of boundaries in English, French, and German, respectively, indicating that in most cases it cannot effectively detect boundaries in streaming words.

• Strategies that treat segmentation as a standard text processing problem. These studies cast the problem as classification or sequence labeling, based on SVMs (Sridhar et al., 2013) or conditional random fields (CRFs) (Lu and Ng, 2010; Wang et al., 2012; Ueffing et al., 2013). Other research utilizes language models, based either on N-grams (Wang et al., 2016) or on recurrent neural networks (RNNs) (Tilk and Alumäe, 2015).

In this paper, we use classification to solve sentence segmentation from the textual perspective. Instead of predicting a sentence boundary at one specific position, we propose a multi-position boundary prediction approach. Specifically, for a source text x = {x_1, ..., x_T}, we calculate the probability of a sentence boundary after x_t for t = T, T−1, ..., T−M. Thus the latency of translation can be controlled within L + M words, where L is the length of the sentence. Inspired by recent pre-training techniques (Devlin et al., 2019; Sun et al., 2019) that have been applied successfully to many NLP tasks, we use a pre-trained model for initialization and fine-tune it on the source side of the corpus. Overall, the contributions are as follows:

• We propose a novel sentence segmentation method based on pre-trained language representations, which have been used successfully in various NLP tasks.

• Our method dynamically predicts the boundary at multiple locations rather than at a specific location, achieving high accuracy with low latency.

2 Background

Recent studies show that the pre-training and fine-tuning framework achieves significant improvements on various NLP tasks. Generally, a model is first pre-trained on large unlabeled data; then, in the fine-tuning step, the model is initialized with the parameters obtained in pre-training and fine-tuned on labeled data for a specific task.

Devlin et al. (2019) proposed BERT, a generalized framework for learning language representations based on a deep Transformer (Vaswani et al., 2017) encoder. Rather than training a language model left-to-right or right-to-left as done traditionally, they proposed a masked language model (MLM) that randomly replaces some tokens in a sequence with a placeholder (mask) and trains the model to predict the original tokens. They also pre-train the model on a next sentence prediction (NSP) task, predicting whether one sentence is the subsequent sentence of another. Sun et al. (2019) proposed the pre-training framework ERNIE, which integrates more knowledge: rather than masking single tokens, it masks groups of words at different levels, such as entities and phrases. The model achieves state-of-the-art performance on many NLP tasks.

In this paper, we train our model under the ERNIE framework.

3 Our Method

Given a streaming input x = {x_1, ..., x_t, ..., x_T}, the task of sentence segmentation is to determine whether x_t ∈ x is the end of a sentence. The task can thus be considered a classification problem, that is, p(y_t | x, θ) with y_t ∈ {0, 1}. However, in the simultaneous translation scenario, the latency is unacceptable if we take the full source text as contextual information, so we should limit the context size and make decisions dynamically.

As the input is a word stream, the sentence boundary detection problem can be transformed into deciding whether there exists a sentence boundary up to the current word x_t. Thus we can use the word stream as context to make a prediction. We propose a multi-class classification model that predicts, for each of the few words before x_t, the probability that it is a sentence boundary (Section 3.1). We use the ERNIE framework to first pre-train a language representation and then fine-tune it for sentence boundary detection (Section 3.2). We also propose a dynamic voted inference strategy (Section 3.3).
3.1 The Model

For a streaming input x = {x_1, ..., x_t}, our goal is to detect whether there is a sentence boundary up to the current word x_t since the last sentence boundary. Rather than a binary classification that only detects whether x_t is a sentence boundary, we propose a multi-class method with the following classes:

y = φ:   no sentence boundary detected
y = 0:   x_t is the end of a sentence
y = −1:  x_{t−1} is the end of a sentence
  ...
y = −M:  x_{t−M} is the end of a sentence

where M is the maximum offset from the current position. Thus, we have M + 2 classes.

See Figure 2 for an illustration with M = 2, where the model predicts 4 classes for the input stream. If the output class is φ, the model has not detected any sentence boundary and continues receiving new words. If the output class is 0, the current word x_t is the end of a sentence and we put a period after it. Similarly, class −m denotes adding a sentence boundary after x_{t−m}. When a sentence boundary is detected, the completed sentence is extracted from the stream and sent to the MT system as input for translation, and sentence detection continues from x_{t−m+1}.

[Figure 2: Illustration of the dynamic classification model with M = 2, i.e., 4 classes. We use ERNIE to train the classifier. Class φ means that there is no sentence boundary in the stream so far; class −m, m = 0, 1, 2 means that x_{t−m} is the end of a sentence, after which we put a period.]

Each time our system receives a new word x_t, the classifier predicts probabilities for the last M + 1 words being sentence boundaries. If the output class is φ, the classifier receives a new word x_{t+1} and recomputes the probabilities for x_{t+1}, x_t, x_{t−1}, ..., x_{t−M+1}. Generally, more contextual information helps the classifier improve precision (Section 4.5).

3.2 Training Objective

Our training data is extracted from paragraphs. Question marks, exclamation marks, and semicolons are mapped to periods, and all other punctuation symbols are removed from the corpora. Then, for every two adjacent sentences in a paragraph, we concatenate them to form a long sequence x. We record the position of the period as r and then remove the period from the sequence.

For x = (x_1, x_2, ..., x_N) with N words, we generate r + M samples for t = 1, 2, ..., (r + M), in the form <(x_1, ..., x_t), y_t>, where y_t is the label:

    y_t = φ           if t < r
    y_t = −(t − r)    if t ∈ [r, r + M]                    (1)

Note that if the length of the second sentence is less than M, we concatenate subsequent sentences until r + M samples are collected. We then define the training objective as:

    J(θ) = Σ_{(x,r)∈D} ( Σ_{t=1}^{r−1} log p(y_t = φ | x_{≤t}; θ)
           + Σ_{t=r}^{r+M} log p(y_t = −(t − r) | x_{≤t}; θ) )   (2)

where D is the dataset containing the concatenated sentence pairs x and the corresponding positions r of the removed periods, and M is a hyperparameter denoting the number of waiting words.
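To make the sample construction of Eq. (1) concrete, the following is a minimal Python sketch (our illustration of the paper's description, not the authors' released code; all names are our own, and it assumes the second sentence is long enough to supply the M extra samples):

```python
def build_samples(sent_a, sent_b, M=2):
    """Build <prefix, label> pairs from two adjacent sentences whose
    separating period has been removed, following Eq. (1)."""
    x = sent_a + sent_b       # token lists; the period between them is dropped
    r = len(sent_a)           # recorded position of the removed period
    samples = []
    for t in range(1, r + M + 1):
        label = "phi" if t < r else -(t - r)   # "phi": no boundary yet
        samples.append((x[:t], label))
    return samples

# Here r = 4, so the labels are phi, phi, phi, 0, -1, -2.
print(build_samples(["one", "of", "two", "things"], ["either", "it", "is"]))
```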
where M is the maximum offset size to the current tion of the removed periods r. M is a hyperparam-
state. Thus, we have M + 2 classes. eter denotes the number of waiting words.
 See Figure 2 for illustration. We set M = 2, Note that our method differs from previous work
indicating that the model predicts 4 classes for the in the manner of classification. Sridhar et al. (2013)
input stream. If the output class is φ, meaning that predicts whether a word xt labeled as the end of a
the model does not detect any sentence boundary. sentence or not by a binary classification:
Thus the model will continue receiving new words.
If the output class is 0, indicating that the current p(yt = 0|xt+2 t+2
 t−2 ) + p(yt = 1|xt−2 ) = 1 (3)
word xt is the end of a sentence and we put a period where yt = 0 means xt is not the end of a sentence
after the word. Similarly, class −m denotes to add and yt = 1 means xt is the end. xt+2 t−2 denotes 5
a sentence boundary after xt−m . While a sentence words xt−2 , xt−1 , ..., xt+2 .
boundary is detected, the sentence will be extracted Some other language-model based work (Wang
from the stream and sent to the MT system as an et al., 2016) calculates probabilities over all words
input for translation. The sentence detection then in the vocabulary including the period:
continues from xt−m+1 . X
 Each time our system receives a new word xt , p(yt = w|x≤t ) = 1 (4)
the classifier predicts probabilities for the last M +1 w∈V ∪“.”

words as sentence boundaries. If the output class and decides whether xt is a sentence boundary by
is φ, the classifier receives a new word xt+1 , and comparing the probability of yt =“.” and yt =
recompute the probabilities for xt+1 , xt , xt−1 , ..., xt+1 .

 3
The performance of these methods is limited by incomplete semantics, since they do not consider boundary detection globally. In our method, we leverage more future words and restrict the classes globally:

    p(y_t = φ | x_{≤t}) + Σ_{m=0}^{M} p(y_t = −m | x_{≤t}) = 1   (5)

This restriction is motivated by the lecture scenario, where a sentence can hardly be so short as to contain only 1 or 2 words; the shared probability distribution prevents adjacent words from both being predicted as sentence ends.
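A minimal sketch of the classification head implied by Eq. (5) is shown below (our illustration in PyTorch: the paper fine-tunes ERNIE, which we stand in for here with a generic encoder that produces one vector per prefix):

```python
import torch
import torch.nn as nn

class BoundaryClassifier(nn.Module):
    """(M + 2)-way classifier over the classes {phi, 0, -1, ..., -M}."""
    def __init__(self, encoder, hidden_size, M=2):
        super().__init__()
        self.encoder = encoder                      # token ids -> [batch, hidden]
        self.head = nn.Linear(hidden_size, M + 2)   # one logit per class

    def forward(self, token_ids):
        h = self.encoder(token_ids)
        # A single softmax makes the phi probability and the M + 1 boundary
        # probabilities sum to 1, which is exactly the restriction of Eq. (5).
        return torch.softmax(self.head(h), dim=-1)
```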
3.3 Dynamic Inference

At inference time, we predict sentence boundaries sequentially with a dynamic voting strategy. Each time a new word x_t is received, we predict the probabilities of the M + 1 boundary classes as shown at the bottom of Figure 3, and check whether the voted boundary probability of any of the previous M + 1 positions (x_{t−M}, x_{t−M+1}, ..., x_t) exceeds a threshold θ_Th. If so, we add a sentence boundary at the corresponding position; otherwise, we continue to receive new words.

[Figure 3: Our voting algorithm for online prediction with M = 2. Given the stream text up to x_t, the overall probability of adding a sentence boundary after x_{t−2} is averaged over M + 1 probabilities, while for x_{t−1} and x_t fewer probabilities are available so far.]

Note that a voted probability is used. While the probability of adding a sentence boundary after x_{t−M} is averaged over M + 1 predictions, the number of predictions available for the subsequent positions is less than M + 1, so we use the voted average of the predictions made so far. Specifically, to judge whether x_{t0} is a sentence boundary, we need t − t0 + 1 probabilities:

    (1 / (t − t0 + 1)) Σ_{m=0}^{t−t0} p(y = −m | x_1, ..., x_{t0+m})   (6)

where t0 ∈ [t − M, t].

If the boundary probabilities of more than one of x_{t−M}, ..., x_t exceed the threshold θ_Th at the same time, we choose the front-most position as the sentence boundary. This is consistent with our training process: if a sample contains two or more sentence boundaries, we ignore the later ones and label the class y_t according to the first boundary, because we generate samples from each period of the original paragraph, as described in Section 3.2. From another point of view, this strategy can also compensate for incorrect suppression of adjacent boundaries, thereby improving online prediction accuracy.
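The following sketch shows one way to implement the voted inference described above (our reconstruction of Section 3.3, not the authors' code; `classify` stands for the fine-tuned classifier, returning a distribution over {phi, 0, -1, ..., -M}):

```python
def detect_boundary(x, classify, M=2, theta_th=0.5):
    """Scan a token stream x and return the 0-based index of the first
    detected sentence end, or None if no boundary is found."""
    votes = {}  # candidate position -> boundary probabilities collected so far
    for t in range(1, len(x) + 1):
        probs = classify(x[:t])        # dict: class -> probability
        for m in range(M + 1):
            pos = t - 1 - m            # candidate x_{t-m}, 0-based
            if pos >= 0:
                votes.setdefault(pos, []).append(probs[-m])
        # voted average of Eq. (6) over the last M + 1 positions,
        # checking the front-most candidate first
        for pos in sorted(p for p in votes if p >= t - 1 - M):
            if sum(votes[pos]) / len(votes[pos]) > theta_th:
                return pos             # the sentence ends after x[pos]
    return None
```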
4 Experiment

Experiments are conducted on English-German (En-De) simultaneous translation. We evaluate 1) the F-score (harmonic mean of precision and recall) of sentence boundary detection and 2) case-sensitive tokenized 4-gram BLEU (Papineni et al., 2002) as the final translation quality of the segmented sentences. To reduce the impact of the ASR system, we use transcriptions without punctuation in both training and evaluation.

The datasets used in our experiments are listed in Table 1. We use two parallel corpora from machine translation tasks: WMT 14 (http://www.statmt.org/wmt14/translation-task.html) and IWSLT 14 (https://wit3.fbk.eu/). WMT 14 is a text translation corpus of 4.4M sentences, mainly from news and web sources. IWSLT 14 is a speech translation corpus of TED lectures with transcribed text and corresponding translations; we use only its text part, containing 0.19M sentences in the training set.

Table 1: Experimental corpora without punctuation. Tokens/s denotes the number of tokens per sentence in English.

       Dataset          Sentences  Tokens/s
Train  WMT 14           4.4M       23.22
Train  IWSLT 14         0.19M      20.26
Test   IWSLT 2010-2014  7040       19.03
Table 2: Segmentation performance trained on IWSLT 2014. All methods use M = 1 future word.

Method         Hyperparameters       F-score  BLEU   avgCW  maxCW
Oracle         NA                    1.0      22.76  NA     NA
N-gram         N=5, θ_Th = e^0.0     0.46     17.83  6.64   56
N-gram         N=5, θ_Th = e^2.0     0.48     19.20  13.43  161
T-LSTM         d=256                 0.55     20.46  10.14  53
dynamic-force  θ_l = 40, θ_Th = 0.5  0.74     22.01  14.43  40
dynamic-base   θ_Th = 0.5            0.74     21.93  14.58  50

We train the machine translation model on WMT 14 with the base version of the Transformer (Vaswani et al., 2017), achieving a BLEU score of 27.2 on newstest2014. Our sentence boundary detection model is trained on the source transcriptions of IWSLT 14 unless otherwise specified (Section 4.3). To evaluate system performance, we merge the IWSLT test sets of four years (2010-2014) into a large test set of 7040 sentences. The overall statistics of our datasets are shown in Table 1.

We evaluate two variants of our model and two existing methods:

• dynamic-base is our proposed method, which detects sentence boundaries dynamically with multi-class classification.

• dynamic-force adds a constraint to dynamic-base: in line with Wang et al. (2016), a sentence is force-segmented if it is longer than θ_l.

• N-gram uses an N-gram language model to compare the probabilities of adding vs. not adding a boundary at x_t after receiving x_{t−N+1}, ..., x_t. We implement it following Wang et al. (2016).

• T-LSTM is an RNN-based classification model with two classes (we keep only the period and φ classes in this work). We implement a unidirectional RNN and train it following Tilk and Alumäe (2015).

The classifier in dynamic-base and dynamic-force is trained under the ERNIE base framework. We use the released parameters (https://github.com/PaddlePaddle/ERNIE) from the pre-training step as initialization, and fine-tune with a learning rate of 2e−5.

4.1 Overall Results

Table 2 reports the results of source sentence segmentation on En-De translation, where latency is measured by Consecutive Wait (CW) (Gu et al., 2017), the number of words between two translation actions. To eliminate the impact of different simultaneous translation policies, we execute translation only at the end of each sentence; the CW here is therefore the sentence length L plus the number of future words M. We report its average and maximum values as "avgCW" and "maxCW", respectively. Better performance means a high F-score, high BLEU, and low latency (CW). The translation quality obtained by using the ground-truth periods for segmentation is shown in the first line as Oracle.
The N-gram method calculates the probabilities of adding (p_add) and not adding (p_not) a period at each position, and decides whether to chunk by checking whether p_add/p_not exceeds θ_Th. Without threshold tuning (θ_Th = e^0.0), it divides sentences into small pieces, achieving the lowest average latency of 6.64, but its segmentation F-score is very low because of the inherently incomplete context of n-gram features; notably, precision and recall differ greatly in this setup (precision = 0.33, recall = 0.78). We therefore choose a better threshold by grid search (Wang et al., 2016). With θ_Th = e^2.0, the F-score of the N-gram method increases a little (0.46 → 0.48), with more balanced precision and recall (precision = 0.51, recall = 0.48), but the maximum latency runs out of control, reaching 161 words in one sentence. We also tried to shorten the latency of the N-gram method by force segmentation (Wang et al., 2016), but the result was very poor (precision = 0.33, recall = 0.40).

The T-LSTM method with a hidden size of 256 performs better than N-gram, but its F-score
[Figure 4: F-score and BLEU on the IWSLT 14 test set over training steps, for the four training sample building strategies (Basic, Duplicate, Sort, Synthetic).]

and BLEU are still limited. In contrast, our dynamic approaches with M = 1 achieve the best F-score of 0.74, and their final translation quality is very close to that of the Oracle; in particular, precision and recall reach about 0.72 and 0.77 in both dynamic-force and dynamic-base. Accurate sentence segmentation brings better translation, yielding an improvement of 1.55 BLEU over T-LSTM. Moreover, our approach is not inferior in terms of latency: both average and maximum latency are controlled at a relatively low level.

It is interesting to note that dynamic-force performs better than dynamic-base in terms of latency and BLEU. This suggests the effectiveness of the force segmentation strategy: choosing the chunking location under a sentence length limit does not hurt segmentation accuracy and enhances translation quality.

4.2 Magic in Data Processing

According to Section 3.2, the order of sentences in the original corpora affects the generation of training samples. In this section, we investigate the effect of several data reordering strategies.

A basic method is to use the original sentence order of the speech corpora, denoted Basic. However, the samples generated this way are limited, which makes the model easy to over-fit. To overcome this problem, we adopt two methods to expand the data scale: 1) Duplicate the original data multiple times, or 2) add Synthetic adjacent sentences by randomly selecting two sentences from the corpora. These two methods greatly expand the total amount of data, but their benefit to the model is uncertain. As an alternative, we explore a Sort method that sorts sentences in alphabetical order.

The performance of the four training data organization methods is shown in Figure 4, all built on IWSLT 2014 with M = 1 and θ_l = 40. Clearly, Basic, Duplicate, and Synthetic all suffer from over-fitting: they quickly reach their best results and then gradually decline. Surprisingly, the Sort approach stands out in both segmentation accuracy and translation performance. This may be due to the following reasons: 1) Sentence classification is not a difficult task, especially when M = 1 with 3-class classification (y ∈ {φ, 0, −1}), making it easy to over-fit. 2) Compared with Basic, Duplicate provides more varied sample combinations in batch training, but there is no essential difference between the two methods. 3) Synthetic hardly benefits our model, because the synthesized data may be too simple due to random selection. 4) Sort may simulate difficult cases from real scenes and train on them specifically, which yields poor performance at the start but resists over-fitting. The sorted data contains many samples with identical head and tail words, such as "and it gives me a lot of hope ‖ and ..." and "that means there's literally thousands of new ideas ‖ that ..."; even humans find it difficult to determine whether the words before ‖ are sentence boundaries. In the Basic, Duplicate, and Synthetic methods, such samples are usually submerged in a large quantity of simple samples, whereas the data organization of Sort greatly strengthens the model's ability to learn these difficult cases. There is no need to worry that the Sort method fails to cover simple samples: we sort by rows of the source file, and some rows contain multiple sentences (an average of 1.01 sentences
Table 3: Segmentation performance trained on WMT 14. All methods use M = 1 future word; N-gram uses grid search to find the best hyperparameters; dyn is short for dynamic, and dynamic-force uses θ_l = 40.

Method     F-score  BLEU   avgCW  maxCW
N-gram     0.48     19.58  15.60  156
T-LSTM     0.56     20.77  15.65  51
dyn-force  0.68     21.48  15.53  40
dyn-base   0.68     21.40  16.08  46

Table 4: Segmentation performance of dynamic-force trained on IWSLT 2014 with varying θ_l. All settings use M = 1 future word.

θ_l  F-score  BLEU   avgCW
10   0.40     16.27  5.85
20   0.58     20.34  9.74
40   0.74     22.01  14.43
80   0.73     21.60  15.15
per row), which are in real speech order. We argue that these sentences are sufficient for modeling the classification of simple samples, given how rapidly the other three methods over-fit.

4.3 Out-of-Domain vs. In-Domain

Next, we turn to the question of how the domain of the training corpus affects results. With the test set unchanged, we compare sentence boundary detection models trained on the out-of-domain corpus WMT 14 and on the in-domain corpus IWSLT 14.

As mentioned before, WMT 14 is a larger text translation corpus mainly from news and web sources, while the test set comes from IWSLT, which contains transcriptions of TED lectures on various topics. Intuitively, a larger dataset provides more diverse samples, but because of the domain change it does not necessarily improve accuracy.

The performance of the models trained on WMT 14 is shown in Table 3. Dynamic-force again achieves the best translation performance, with a relatively small average latency and the maximum latency limited to 40 words. However, it underperforms the same model trained on IWSLT 2014 (Table 2), demonstrating its sensitivity to the training domain.

In contrast, N-gram and T-LSTM are hardly affected. For N-gram, one possible reason is the aforementioned weakness of N-grams: segmentation depends on only the N previous words, which is more stable than depending on the whole sentence, eliminating the perturbation brought by domain variation. T-LSTM even improves slightly over its in-domain performance, which may be due to a lack of training samples: the 0.19M sentences of IWSLT 2014 are insufficient to fit its parameters, so the model benefits from a larger corpus. Our method, in contrast, needs less training data because the model has been pre-trained; building on a powerful representation, we need only a small amount of fine-tuning data, which works best when aligned with the domain of the test set.

4.4 Length of Window θ_l

Next, we discuss the effect of changing θ_l. The performance of dynamic-force with varying θ_l is shown in Table 4. Smaller θ_l brings shorter latency but worse performance, and the results are extremely poor at θ_l = 10. There are two possible reasons: 1) constraining sentences to fewer than θ_l words is too harsh for small θ_l, or 2) the discrepancy between unrestricted training and length-restricted testing causes the poor results.

We first examine the second possibility. While dynamic-base and dynamic-force differ only at prediction time, we want to know whether controlling the length of training samples achieves better results. Accordingly, we use only the samples shorter than a fixed value θ_l in the training phase. At inference time, we use both dynamic-force with the same sentence length constraint θ_l and dynamic-base to predict sentence boundaries. As shown in Figure 5, for each pair of curves with the same θ_l, dynamic-force and dynamic-base perform similarly. This demonstrates that the main reason for the poor performance with small θ_l is not the training-testing discrepancy but the first reason: the force constraint itself is too harsh.

Moreover, it is interesting that θ_l = 80 performs similarly to θ_l = 40 at the beginning but falls slightly behind during training. This is probably because the θ_l = 40 setup filters out some inaccurate cases, as the average sentence length in the IWSLT 2014 training set is 20.26 words.

4.5 Number of Future Words M

We investigate whether we can achieve better performance with more or fewer future words, experimenting with M from 0 to 5. The results are shown
[Figure 5: Translation performance (BLEU over training steps) on the IWSLT 2014 test set. "θ_l-Force" sets the sentence length threshold θ_l in both training sample generation and prediction; "θ_l-Base" applies the constraint only in training sample generation.]

Table 5: Segmentation performance of dynamic-force trained on IWSLT 2014. All settings use θ_l = 40.

M  F-score  BLEU   avgCW
0  0.66     21.54  13.23
1  0.74     22.01  14.43
2  0.77     22.23  15.24
3  0.79     22.23  16.52
4  0.80     22.29  17.15

in Table 5. Reducing M to zero means referring to no future words in prediction, which degrades performance considerably and proves the effectiveness of adding future words. Increasing M from 1 to 2 further improves both the sentence boundary detection F-score and the system BLEU. However, as more future words are added (M = 3 and 4), the improvement becomes less obvious.

5 Related Work

Sentence boundary detection has been explored for years, but the majority of this work focuses on offline punctuation restoration rather than on simultaneous translation. Existing work can be divided into two classes according to the model input.

5.1 N-gram Based Methods

Some work takes a fixed window of words as input. Focusing on a limited portion of the streaming input, these methods predict the probability of putting a boundary at a specific position x_t using an N-gram language model (Wang et al., 2016) or a classification model (Sridhar et al., 2013; Yarmohammadi et al., 2013). The language-model based method makes its decision based on N words (x_{t−N+2}, ..., x_{t+1}), comparing their probability with that of (x_{t−N+2}, ..., x_t, "."). The classification model takes features of the N words around x_t and classifies x_t into two classes: sentence boundary or not. The main deficiency of these methods is that dependencies outside the input window are lost, resulting in low accuracy.

5.2 Whole-Sentence Based Methods

Other work focuses on restoring punctuation and capitalization using the whole sentence. To improve sentence boundary classification accuracy, some work upgrades the N-gram input to variable-length input using recurrent neural networks (RNNs) (Tilk and Alumäe, 2015; Salloum et al., 2017). Other work treats punctuation restoration as a sequence labeling problem and investigates conditional random fields (CRFs) (Lu and Ng, 2010; Wang et al., 2012; Ueffing et al., 2013). Peitz et al. (2011) and Cho et al. (2012) treat the problem as a machine translation task, training a system to translate non-punctuated transcriptions into punctuated text. However, all these methods use whole-sentence information, which does not fit the simultaneous translation scenario. Moreover, the translation-model based methods require multiple decoding steps, making them unsuitable for online prediction.

6 Conclusion

In this paper, we propose an online sentence boundary detection approach. Given streaming words as input, our model predicts boundary probabilities at multiple positions rather than at a single position. By adding the adjacent-position constraint and using dynamic prediction, our method achieves higher accuracy with lower latency.

We also incorporate the pre-training technique ERNIE to implement our classification model. The empirical results on IWSLT 2014 demonstrate that our approach achieves significant improvements of 0.19 F-score on sentence segmentation and 1.55 BLEU points over the language-model based methods.

References
Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-to-speech translation of dialogs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Eunah Cho, Jan Niehues, and Alex Waibel. 2012. Segmentation and punctuation prediction in speech language translation using a monolingual translation system. In International Workshop on Spoken Language Translation (IWSLT) 2012.

Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019.

Christian Fügen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine Translation, 21(4).

Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation.

Wei Lu and Hwee Tou Ng. 2010. Better punctuation prediction with dynamic conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing.

Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2019. STACL: Simultaneous translation with integrated anticipation and controllable latency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.

Stephan Peitz, Markus Freitag, Arne Mauser, and Hermann Ney. 2011. Modeling punctuation prediction as machine translation. In International Workshop on Spoken Language Translation (IWSLT) 2011.

Wael Salloum, Gregory Finley, Erik Edwards, Mark Miller, and David Suendermann-Oeft. 2017. Deep learning for punctuation restoration in medical reports. In BioNLP 2017.

Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.

Ottokar Tilk and Tanel Alumäe. 2015. LSTM for punctuation restoration in speech transcripts. In Sixteenth Annual Conference of the International Speech Communication Association.

Nicola Ueffing, Maximilian Bisani, and Paul Vozila. 2013. Improved models for automatic punctuation prediction for spoken and written text. In Interspeech.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017).

L. Venuti. 2012. The Translation Studies Reader.

Xiaolin Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2016. An efficient and effective online sentence segmenter for simultaneous interpretation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016).

Xuancong Wang, Hwee Tou Ng, and Khe Chai Sim. 2012. Dynamic conditional random fields for joint sentence boundary and punctuation prediction. In Thirteenth Annual Conference of the International Speech Communication Association.

Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing.
End-to-End Speech Translation with Adversarial Training

Xuancai Li (1), Kehai Chen (2), Tiejun Zhao (1) and Muyun Yang (1)
(1) Harbin Institute of Technology, Harbin, China
(2) National Institute of Information and Communications Technology, Kyoto, Japan
xcli@hit-mtlab.net, khchen@nict.go.jp, {tjzhao,yangmuyun}@hit.edu.cn

Abstract

End-to-end speech translation usually leverages audio-to-text parallel data to train a speech translation model, and has shown impressive results on various speech translation tasks. Due to the cost of manually collecting audio-to-text parallel data, speech translation is a naturally low-resource translation scenario, which greatly hinders its improvement. In this paper, we propose a new adversarial training method that leverages target-language monolingual data to relieve the low-resource shortcoming of speech translation. In our method, the existing speech translation model is considered a Generator that produces target-language output, and another neural Discriminator is used to guide the distinction between outputs of the speech translation model and true target-language monolingual sentences. Experimental results on the CCMT 2019-BSTC speech translation dataset demonstrate that the proposed method can significantly improve the performance of end-to-end speech translation.

1 Introduction

Typically, a traditional speech translation (ST) system consists of two components: an automatic speech recognition (ASR) model and a machine translation (MT) model. First, the speech recognition module transcribes the source-language speech into source-language utterances (Chan et al., 2016; Chiu et al., 2018). Second, the machine translation module translates the source-language utterances into target-language utterances (Bahdanau et al., 2014). Due to the success of end-to-end approaches in both automatic speech recognition and machine translation, researchers are increasingly interested in end-to-end speech translation, which has shown impressive results on various speech translation tasks (Duong et al., 2016; Bérard et al., 2016, 2018).

However, due to the cost of manually collecting audio-to-text parallel data, speech translation is a naturally low-resource translation scenario, which greatly hinders its improvement. In practice, available audio-to-text parallel data amounts to only tens to hundreds of hours, equivalent to roughly hundreds of thousands of bilingual sentence pairs. This is far from enough to train a high-quality speech translation system, compared with the millions or even tens of millions of bilingual sentence pairs used to train a high-quality text-only NMT system. Recently, several works have explored this issue. Bansal et al. (2018) pre-trained an ASR model on high-resource data and then fine-tuned it for low-resource scenarios. Weiss et al. (2017) and Anastasopoulos and Chiang (2018) proposed multi-task learning methods that train the ST model on ASR, ST, and NMT tasks simultaneously. Liu et al. (2019) proposed a knowledge distillation approach that uses a text-only MT model to guide the ST model, because there is a huge performance gap between end-to-end ST and MT models. Despite their success, these approaches still need additional labeled data, such as source-language speech, source-language transcripts, and target-language translations.

In this paper, we propose a new adversarial training method that leverages target-language monolingual data to relieve the low-resource shortcoming of end-to-end speech translation. The proposed method consists of a generator model and a discriminator model. Specifically, the existing speech translation model is considered a Generator that produces target-language output, and another neural Discriminator is used to guide the distinction between outputs of the speech translation model and true target-language monolingual sentences. In particular, the Generator and the Discriminator are trained iteratively to challenge and learn from each other, step by step, to obtain a better speech translation model. Experimental results on the CCMT 2019-BSTC speech translation dataset demonstrate that the proposed method can significantly improve the performance of the end-to-end speech translation system.
[Figure 1: Proposed end-to-end speech translation with adversarial training. In the ST model training step, the generator loss combines the ST loss with the quality score from the Discriminator; in the discriminator training step, the Discriminator learns to assign high quality scores to real text and low quality scores to ST output.]

2 Proposed Method

The framework of our adversarial training method consists of a generator and a discriminator. The Generator is the existing end-to-end ST model, based on an encoder-decoder model with an attention mechanism (Bérard et al., 2016). The discriminator is a convolutional neural network whose output is a quality score. In the discriminator training step, the discriminator aims to assign higher quality scores to real text and lower quality scores to ST model output; in other words, it is expected to distinguish the two kinds of input text as well as possible. Meanwhile, our method not only leverages the ground truth to supervise the training of the ST model, but also uses the discriminator to enhance the output of the ST model with target-language monolingual data, as shown in Figure 1.

2.1 Generator

For end-to-end speech translation, we chose an encoder-decoder model with attention. It takes as input a sequence of audio features x = (x_1, x_2, ..., x_t) and outputs a sequence of words y = (y_1, y_2, ..., y_m). The speech encoder is a pyramidal bidirectional long short-term memory network (pBLSTM) (Chan et al., 2016; Hochreiter and Schmidhuber, 1997). It transforms the speech features x = (x_1, x_2, ..., x_t) into a high-level representation H = (h_1, h_2, ..., h_n), where n ≤ t. In the pBLSTM, the outputs of two adjacent time steps of the lower layer are concatenated and passed to the next layer:

    h^i_j = pBLSTM(h^i_{j−1}, [h^{i−1}_{2j}, h^{i−1}_{2j+1}])   (1)

The pBLSTM thus also reduces the length of the encoder input from t to n. In our experiments, we stack 3 pBLSTM layers, reducing the number of time steps by a factor of 8.
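A minimal sketch of one pBLSTM layer per Eq. (1) is shown below (our illustration in PyTorch, not the authors' code): adjacent frames of the lower layer are concatenated, halving the sequence length at each layer.

```python
import torch.nn as nn

class PBLSTMLayer(nn.Module):
    """One pyramidal BLSTM layer: concatenates adjacent time steps of its
    input, halving the time dimension (Eq. 1)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size * 2, hidden_size,
                            bidirectional=True, batch_first=True)

    def forward(self, x):                  # x: [batch, T, input_size]
        b, T, d = x.size()
        x = x[:, : T - (T % 2), :]         # drop a trailing frame if T is odd
        x = x.reshape(b, T // 2, d * 2)    # concatenate adjacent frames
        out, _ = self.lstm(x)
        return out                         # [batch, T // 2, 2 * hidden_size]

# Stacking three such layers reduces t input frames to about t / 8.
```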
The decoder is an attention-based, word-level LSTM:

    c_i = Attention(s_i, h)
    s_i = LSTM(s_{i−1}, c_{i−1}, y_{i−1})
    y_i = Generate(s_i, c_i)   (2)

where Attention is a location-aware attention mechanism (Chorowski et al., 2015) and Generate is a feed-forward network that computes a score for each symbol in the target vocabulary.

2.2 Discriminator

The Discriminator takes either real text or ST translations as input and outputs a scalar QS as the quality score. We use a conventional convolutional neural network (CNN) (Kalchbrenner et al., 2016), which focuses on capturing local repeating features and is computationally more efficient than a recurrent neural network (RNN) (LeCun et al., 2015). The real text of the target language is encoded as a sequence of one-hot vectors y = (y_1, y_2, ..., y_m), and the output generated by the ST model is denoted as a sequence of vectors ỹ = (ỹ_1, ỹ_2, ..., ỹ_n). The sequence of vectors y or ỹ is given as
input to a single-layer neural network, whose output is fed into a stack of two one-dimensional CNN layers and an average pooling layer; a final linear layer produces the quality score. Training the discriminator is prone to over-fitting because the probability distribution of the ST model output differs from the one-hot encoding of the real text. To address this problem, we use the earth-mover distance of WGAN (Martin Arjovsky and Bottou, 2017) to estimate the distance between the ST model output and the real text. The loss function of the discriminator is the standard WGAN loss plus a gradient penalty (Gulrajani et al., 2017):

    Loss_D = λ_1 { E_{ỹ∼P_st}[D(ỹ)] − E_{y∼P_real}[D(y)] }
             + λ_2 E_{ŷ∼P_ŷ}[ (‖∇_ŷ D(ŷ)‖ − 1)^2 ]   (3)

where λ_1 and λ_2 are hyper-parameters, P_st is the distribution of ST model outputs ỹ, P_real is the distribution of real text y, D(y) is the quality score the discriminator assigns to y, and ŷ are samples generated by random interpolation between ỹ and y.
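A minimal PyTorch sketch of the loss of Eq. (3) is shown below (our illustration, not the authors' code; it assumes the real and generated sequences have been padded to the same shape so that they can be interpolated):

```python
import torch

def discriminator_loss(D, y_real, y_fake, lam1=0.0001, lam2=10.0):
    """WGAN loss with gradient penalty, Eq. (3)."""
    wgan = D(y_fake).mean() - D(y_real).mean()      # E[D(y~)] - E[D(y)]
    # gradient penalty on random interpolations y^ between real and fake
    eps = torch.rand(y_real.size(0), 1, 1, device=y_real.device)
    y_hat = (eps * y_real + (1 - eps) * y_fake).requires_grad_(True)
    grad = torch.autograd.grad(D(y_hat).sum(), y_hat, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return lam1 * wgan + lam2 * gp
```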

2.3 Adversarial Training

Both the ST model and the discriminator are trained iteratively from scratch. In the ST model training step, the parameters of the discriminator are fixed. We train the ST model by minimizing the sequence loss Loss_ST, the cross-entropy between the ground truth and the output of the ST model. At the same time, the discriminator produces a quality score QS for the output of the ST model. Formally, the final loss function in this training step is:

    Loss_G = λ_st Loss_ST − (1 − λ_st) QS   (4)

where λ_st ∈ [0, 1] is a hyper-parameter. In the discriminator training step, the parameters of the ST model are fixed, and the discriminator is trained on the probability distributions of the ST model output and on the real text. The learning procedure is shown in Algorithm 1. Note that the discriminator is used only during training, not during decoding: once training ends, the ST model implicitly uses the translation knowledge learned from the discriminator to decode the input audio.

Algorithm 1: Adversarial Training
Require: G, the Generator; D, the Discriminator; dataset (X, Y), a speech translation parallel corpus.
Ensure: G', the generator after adversarial training.
 1: for each iteration of adversarial training do
 2:   for each iteration of training G do
 3:     Sample a subset (X_batch, Y_batch) from dataset (X, Y)
 4:     Y'_batch = G(X_batch)
 5:     Compute the loss with Eq. 4
 6:     Update the parameters of G with the optimization algorithm
 7:   end for
 8:   for each iteration of training D do
 9:     Sample a subset (X_batch, Y_batch) from dataset (X, Y)
10:     Y'_batch = G(X_batch)
11:     Let Y_batch be y and Y'_batch be ỹ, and compute the loss with Eq. 3
12:     Update the parameters of D with the optimization algorithm
13:   end for
14: end for
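The generator step of Algorithm 1 might look as follows (a sketch under our assumptions; `st_model`, `discriminator`, and `opt_g` are placeholder objects, and the `discriminator_loss` sketch above covers the other step):

```python
import torch.nn.functional as F

def generator_step(st_model, discriminator, audio, ref_ids, opt_g, lam_st=0.5):
    """One update of the ST model with the combined loss of Eq. (4)."""
    probs = st_model(audio)                      # [batch, len, vocab] distributions
    st_loss = F.nll_loss(probs.log().transpose(1, 2), ref_ids)  # cross-entropy
    qs = discriminator(probs).mean()             # quality score QS; D stays fixed
    loss_g = lam_st * st_loss - (1.0 - lam_st) * qs
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```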
3 Experiment

3.1 Data Set

We conduct experiments on CCMT 2019-BSTC (Yang et al., 2019), a corpus collected from Chinese Mandarin talks and reports, as shown in Table 1. It contains 50 hours of real speeches in three parts: the audio files in Chinese, the transcripts, and the English translations. We keep the original data partitions and segment the long talks used for simultaneous interpretation into short utterances.

Table 1: Size of the CCMT 2019-BSTC.

Dataset  Utterances  Hours
Train    28239       41.4
Valid    956         1.3
Test     569         1.5

3.2 Experimental Settings
We process the speech files to extract 40-dimensional filter bank features with a step size of 10 ms and a window size of 25 ms. To shorten training time, we ignored utterances in the corpus that were longer than 30 seconds. We lowercase and tokenize all English text and normalize the punctuation. A word-level vocabulary of size 17k is used for the target language (English), and the text data are represented by sequences of 1700-dimensional one-hot vectors.
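For the feature extraction described at the start of this subsection, a sketch using torchaudio's Kaldi-compatible front end could look like this (our illustration; the file name is a placeholder, and the toolkit choice is our assumption, as the paper does not name one):

```python
import torchaudio

waveform, sample_rate = torchaudio.load("utterance.wav")   # placeholder path
fbank = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=40,          # 40-dimensional filter bank features
    frame_length=25.0,        # 25 ms window
    frame_shift=10.0,         # 10 ms step
    sample_frequency=sample_rate,
)
print(fbank.shape)            # [num_frames, 40]
```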
Our ST model uses 3 pBLSTM layers with 256 units per direction as the encoder, and 512-dimensional location-aware attention in the attention layer. The decoder is a 2-layer LSTM with 512 units, followed by a 2-layer neural network with 512 units to predict words in the vocabulary. The discriminator uses a linear layer with 128 units at the bottom of the model, followed by two one-dimensional CNN layers: from bottom to top, window size 2 with stride 1, then window size 3 with stride 1. Adam (Kingma and Ba, 2014) is used as the optimizer, with a learning rate of 0.0001 and a mini-batch size of 8. The hyper-parameters λ_st, λ_1, and λ_2 are 0.5, 0.0001, and 10, respectively, and the ST model is trained 5 times as frequently as the discriminator.

We use the BLEU metric (Papineni et al., 2002) to evaluate our ST models. We compare five settings for speech translation. The Pipeline model cascades an ASR model and an MT model: for ASR, we use an end-to-end speech recognition model similar to LAS trained on CCMT 2019-BSTC; for MT, we use the open-source toolkit OpenNMT (Klein et al., 2017) to train an NMT model. The end-to-end model (described in Section 2) makes no use of source-language transcripts. The pre-trained model is the same as the end-to-end model, but its encoder is initialized with a pre-trained ASR model, trained on AISHELL (Bu et al., 2017), a 178-hour Chinese Mandarin speech corpus in the same language as our speech translation corpus. The multitask model is a one-to-many method in which the ASR and ST tasks share an encoder. Adversarial Training is the approach proposed in this paper.

3.3 Results

Table 2 shows the results of the different models on the validation set of CCMT 2019-BSTC. We find that the end-to-end methods, including pre-trained, multitask, and Adversarial Training, all obtain results comparable to the Pipeline model. Among them, the pre-trained model obtains the best results; our analysis is that this model uses a larger-scale speech corpus for pre-training, thus introducing more information into the model. The Adversarial Training method obtains 19.1 BLEU, an improvement of 1.4 BLEU over the end-to-end baseline and better even than the multitask method. The multitask approach uses the transcription of the source-language speech, so our approach is superior to it without using such additional information.

Table 2: BLEU scores of the speech translation experiments.

Model                 BLEU
pipeline              19.4
end-to-end            17.7
pre-trained           20.4
multitask             18.9
Adversarial Training  19.1

4 Conclusion

In this paper, we present an adversarial training approach to improve end-to-end speech translation. We applied a GAN to the speech translation task and achieved good experimental results. Since only the GAN structure is used, the method can be applied to any end-to-end speech translation model. Unlike the multitask, pre-trained, and knowledge distillation methods proposed previously, this method does not require additional parallel corpora, which are very expensive to collect. In the future, we will experiment with unpaired text so that this method can exploit an unlimited amount of spoken text.

References

Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. arXiv preprint arXiv:1802.06655.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. arXiv preprint arXiv:1809.01431.