Taking Stock of Written Retrospective Protocols Used in Translator Education

Journal of Translation Studies vol. 02/2021, pp. 79–102
                                                © 2021 Rui Li - DOI https://doi.org/10.3726/JTS022021.5

Rui Li
Graduate Institute of Interpretation and Translation
Shanghai International Studies University
lz_lxf@sohu.com


Abstract
Of all the online and offline methods for probing into the translation processes of student
translators, written retrospective protocols are reportedly the earliest, most widely and
easily administered didactic and assessment tool used in and outside classrooms. Despite
their recorded advantages, a close examination of both English and Chinese literature re-
veals a plethora of approaches to their implementation. They differ with respect to factors
that include, but are not limited to, the name, contents, nature and number of problems
covered, writing guidelines, language of writing, time and frequency of writing, theoretical
components, meta-language and theories used, assessors, assessment rubrics, provision
and training, uses and follow-ups. Although these differences may be only a matter of
trainers’ personal preferences that suit particular settings, they do have important didactic
implications. This paper, therefore, sets out to capture such diversity, with a view to establishing a framework of reference to inform better use of this instrument of intervention in translator education.

Keywords
written retrospective protocols, differences, assessment


1. Introduction

With rapid technological and methodological advancements, researchers
now have an array of tools at their disposal to probe into the cognitive
processes of trainee translators. Krings (2005) divides the most common
translation process research tools into online and offline categories (see
Figure 1).

Fig. 1: Methods for data analysis (Krings 2005: 348, translated by Helle Dam-Jensen
and Carmen Heine)

Among these methods, written retrospective protocols are reportedly the earliest, most widely and easily administered offline didactic and assessment tool. As a form of reflection and expression (Schön 1987; Kolb 1984; Moon 1999; Boud 2001), this tool is associated with a wide range of benefits. On the student's side, for instance, keeping a translation diary is learner-centred and needs-based, and it motivates students by fostering self-directed learning, self-efficacy and learner autonomy (Fox 2000).
It helps students recognize problems, mitigate errors and develop their metacognition (Angelone 2015). A record of their choices can help translators
evaluate and justify their strategies and choices later on (Orlando 2012).
From the teacher’s perspective, written retrospective protocols can help
to pinpoint the reason for translation mistakes (Gile 2004) and give more
individualized feedback (Fox 2000).
      While these advantages are beyond dispute, a close reading of both English and Chinese literature has revealed a diversity of approaches to the design of this tool. The differences generally lie along the lines of “Who write(s)?”, “What do they write?”, “How do they write?”, “When do they write?”, “Who assess(es)?” and “How are students assessed?”. Each anecdotal account arises out of a particular setting, but taken together, the differences may carry important didactic implications for both students and trainers.
      At the same time, we cannot fail to notice a strong gravitation away from think-aloud and written protocols towards keystroke logging, screen recording and eye-tracking as the preferred process data elicitation methods (e.g. Göpferich 2009; Massey and Ehrensberger-Dow 2011, 2013; PACTE 2017; Pym 2009). There have already been experimental attempts to compare the strengths and weaknesses of different types of tools (e.g. Hansen 2006; Angelone 2015). To keep written retrospective protocols relevant, our intention in this paper is to map out the various practices and thereby provide a menu of references for translation trainers.

2. Sticking points in the use of written retrospective protocols

Before we delve into detailed discussion, Figure 2 summarizes where we believe the major differences lie.

Fig. 2: Parameters in the design of written retrospective protocols

2.1 Who write(s)?

Like think-aloud protocols, retrospective protocols can be written by individuals, by dyads or by groups of students. Whoever has produced a report has to be held accountable in assessment. It is pair and group reports that are didactically challenging.

      Firstly, we cannot tell for certain how members of a group interact with each other unless they specify this in their reports (Robinson et al. 2017). Even so, trainers need to view a report with caution. In a group assignment, students may work anywhere on a cline from cooperatively to collaboratively (Kenny 2008; Thelen 2016). A collective report might therefore range from an amalgamation of individual reflections to one produced through negotiation and consensus. It makes more sense if students record the ways in which disagreements are ironed out. The caveat is that writing in this way takes longer and complicates the assessment of individual contributions and performance (Kelly 2005).
      There are also the questions of who should do the writing itself and who should lead the group. Do teachers leave such matters in students' hands or decide them personally? Compared with a strong student, a weaker student may be more motivated to learn if s/he is given more responsibility and if his/her contributions are not easily dismissed in group negotiation (Lee-Jahnke 2005).
      Finally, if the tool is used throughout the semester, trainers should make an effort to vary the grouping (Robinson et al. 2008). Random grouping has the benefits of making students exchange ideas with more people and of building their generic competence. Most importantly, given that professional translators in the real market work with different partners all the time, students should become aware of this reality and be trained early on to expect it.

2.2 Frequency of writing

Trainers often have to take the frequency of writing into account when
planning the syllabus. Fox (2000) asks her students to write reports on five
translation tasks over a period of 11 weeks. Norberg (2014) compares the
quality of students’ reflections under different guideline instructions over
two consecutive semesters, with four reflections written in the first semester and seven in the second. In both cases, students are required to write a report every two
weeks.
       We believe the number of protocols written per semester depends, first, on how many translation tasks and projects students complete in a semester and on how complex they are. Second, the number of reports depends on students’ competence levels. Beginners should be
expected to write more often than more advanced students (García Álvarez
2007). Third, the frequency of writing also has to do with how often students
meet in a week. In China, a semester usually spans 18 weeks. The norm is
for a teaching session to last around 90 minutes and, for a core translation
course, teachers meet students at least twice a week at both undergraduate
and master's levels. If we adopt the same frequency of writing applied by Fox and Norberg, the amount of work generated will increase drastically, both for students and teachers.
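      A rough calculation illustrates the scale involved, assuming the biweekly rhythm reported for Fox and Norberg and, purely for illustration, a hypothetical load of four translation-related courses that all require journals (a figure of our own, not drawn from the studies cited):

      5 reports over 11 weeks (Fox)          ≈ 1 report every 2 weeks
      18 weeks ÷ 2 weeks per report          = 9 reports per course per semester
      9 reports × 4 journal-keeping courses  = 36 reports per student per semester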
        At the same time, every course requires commitment. Given the propensity for incorporating journal writing into learning across the board, if all trainers ask students to keep a journal, both sides will soon lose their interest and motivation. Thus, there is a compelling need to coordinate all courses in the curricula, not only to seek interconnectedness between the skills to be learnt, but also to create a relaxed time frame for students to write down in-depth reflections and for teachers to read and evaluate the quality of each report.

2.3 Time of writing

If students are asked to write retrospective protocols only once in a single
assignment, then it is important to specify at which point this should be
done. Gile (2004) views the Integrated Problem and Decision Reporting (IPDR) that he uses as an offline task. However, both Hansen (2006) and
Dam‐Jensen and Heine (2009) hold that, depending on the way in which
it is used, IPDR can also be applied as an online method. In other words,
students can choose to write in parallel to every act of problem solving, to
write immediate retrospective comments after having finished the first draft
or, yet again, after having completed the final target text. In an empirical
study, however, Angelone finds that a student’s train of thought might be
disrupted if s/he has to write in parallel (2015). In comparison, the Integrated Translator’s Diary (ITD) proposed by Orlando (2011, 2012) is not written until students have completed three drafts and a final copy.
       On the other hand, retrospective protocols can also be written at multiple points. According to Boud, “it is useful to consider these occasions of
reflection: in anticipation of events, in the midst of action, and after events”
(2001: 3). This viewpoint is echoed by Lee-Jahnke (2011), who argues
that formative assessment can best be practiced in three phases: a) before,
b) during and c) after translation. In actual practice, for instance, Bergen (2009) asks his students to write three times during a single task: first, a short questionnaire on their conceptions; then, after students receive feedback from their teachers; and, finally, after the assignment is over, about points to concentrate on in the future. Lee (2015) asks her students to write once for each assignment and, at the end of the semester, to write on all the assignments completed during that semester, in a way much like a portfolio summary and assessment. Fernández and Zabalbeascoa (2012b) require their students to
write a pre-translation questionnaire on the brief provided but without them
having seen the source text, and a post-translation questionnaire after each
translation, as well as one at the end of each three-week module.
       From these accounts, it is easy to see that “retrospective writing” is not at all synonymous or interchangeable with “delayed writing”. Sometimes writing can be “prospective” or “in parallel” and, even when the reporting is done after a translation is completed, there is considerable variation in how soon afterwards students begin to write the protocols. Trainers, therefore, still have ample room to explore in determining the most effective entry point(s) of intervention.

2.4 In what language to write?

Which language should be used in written protocols: the source or the target language? The L1 or the L2, irrespective of translation direction? On the face of it, the language requirement may seem a trivial concern, but there are clear discrepancies in practice.
      Both Orlando (2012) and Fox (2000) ask students to write in the target
language. Shih (2018) and Shei (2005a, 2005b) point out that, in the United
Kingdom, translation commentaries should be written in English irrespec-
tive of translation direction. Norberg (2014) asks students to translate into
Swedish and to write in Swedish. Lee borrows Fox’s approach and also
requires her students, who translate from Korean into English, to write in
English, but she later concedes that “in hindsight, the choice of writing in
either students’ L1 or L2 should be encouraged, as this could facilitate the
task, allowing maximum room for exploration, discovery and reflection”
(2015: 502). Most Chinese trainers, including Cheng and Wu (2016), Wu
(2014), Li and Ke (2013a, 2013b), and Li (2009) ask students to write in
Chinese irrespective of translation direction.
      Written protocols often comprise source text (ST) analysis, justifications for the solutions and strategies taken, and target text analysis (Presas
2012). In this respect, we believe that writing in the source language is
preferable for ST analysis, whereas writing in the target language is ideal
for students analysing the idiomaticity of the target text and how it meets
the audience’s expectations. Nevertheless, reflection is certainly more
effectively supported by a language in which students express themselves
with the greatest ease, which is always L1. Therefore, deciding on which
component of the protocols counts most, and on which language to write
in, is a choice that trainers have to make carefully. It is worth carrying out
systematic research, either in the form of a larger-scale survey or through
classroom action research, to gauge how students perceive the benefits of
writing in a particular language.
      On a related note, in China, various Master of Translation and Interpreting (MTI) programs have put in place different language requirements for students writing their commentaries as a graduation thesis. If we simply focus on China’s top five MTI programs by way of comparison1, all of which are CIUTI members (Conférence internationale permanente d’Instituts universitaires de Traducteurs et Interprètes), both SISU and BFSU request students to write in Chinese, irrespective of the translation direction, whereas in GDUFS, BISU and BLCU, students have to write in English. Such
a discrepancy is telling of a larger problem, and it signals the need for some
standardization of practice2.

1 The five MTI programs which have joined CIUTI are BFSU (Beijing Foreign Studies
  University), SISU (Shanghai International Studies University), GDUFS (Guangdong
  University of Foreign Studies), BLCU (Beijing Language and Culture University) and
  BISU (Beijing International Studies University).
2 Here we argue for standardization in this particular category of written protocols used
  for a graduation thesis. If they are written for class assignments, then trainers should
  be encouraged to seek a different approach.

2.5 Number of problems covered

Translation is a problem-solving and decision-making process. Gile expects
his students to report all problems in IPDR (2004, emphasis added by the
author). Similarly, Galán-Mañas and Hurtado Albir (2015) believe that
this approach could help a teacher distinguish between students who have
detected a translation problem, even if they have been unable to resolve it,
and those who have simply failed to notice it.
      Yet empirical studies have revealed that IPDR is only useful in showing
problems that translators consider significant (Hansen 2006). Students are often unsure how much is too much when it comes to documenting content in a log. They might become bogged down by the need to document each and
every little problem (Angelone 2015). Trainers interviewed by Shih (2018)
also claim that students should not include too many translation problems,
as this might result in the superficial treatment of each of the problems.
They believe that it is not the number of problems that counts, but how
representative they are.
      Therefore, for trainers, the choice lies between asking students to
document every problem for the sake of completeness, and asking them to
report the problems that they find truly challenging.

2.6 Nature of problems covered

Compared with other types of data collection tools, problems reported in
written protocols are not only different in number but are also different in
nature.
     Comparing think-aloud and written protocols, García Álvarez (2007) points out that, relative to rambling and disjointed think-aloud protocols produced on the spur of the moment, a written record of the translation process seems to afford more opportunity for recall. In sharp contrast, Göpferich and Jääskeläinen (2009) argue that, as retrospective protocols can only record what the subject regards as relevant or is motivated to write down, the logs may prove to be very incomplete and carry more didactic value than experimental value.

      Drawing on real experiments, Hansen (2006) finds that students tend to record problems related to document information acquisition in IPDR more often than other types of problems. Angelone (2015) compares the types and number of errors caught with IPDR, think-aloud protocols and screen recordings. He notes that, when using IPDR, the student tends
to catch syntactic and stylistic errors and miss punctuation and spelling
problems, whereas screen recordings are most effective in overall problem
recognition.
      Here we see very divergent views of the effectiveness of written retrospective protocols, which we believe can only be attributed to researchers’ personal beliefs and experience. Our view is that the value of this tool lies not only in recording cognitive difficulties, but also in documenting socio-cognitive problems experienced by students. For instance, Kelly argues that students need to report on the functioning of the team and how work was shared out (2005). Galán-Mañas and Hurtado Albir make it clear to their
students that they should include in a group project report the distribution of
workload between the various participants, as well as an appraisal of their
teamwork (2015). Massey (2016) relates action research in which the students are asked to report on points of focus, sources, modes and (perceived)
usefulness of feedback. Collaborative information of this nature would not be attainable with keystroke logging software and eye-tracking devices, which are usually installed on individual workstations and used under experimental conditions. Therefore, depending on the research foci and agenda, different process data elicitation tools can never completely replace each other but, rather, complement one another.

2.7 Is metalanguage necessary? Are theories necessary?

The importance of metalanguage and translation theories for translation teaching must be recognized. Delisle (1998), Adab (2000) and Lee-Jahnke (2011) are among the early pioneers stressing the need to write using appropriate metalanguage to describe translation problems and to instil in students an awareness of how and why they arrive at a product.
     Yet starting from the very names that trainers give to retrospective
reports, we see differences. These include diary (Fox 2000); annotations
(Adab 2000; Almanna 2016); metacognition questionnaires (Fernández
and Zabalbeascoa 2012a, 2012b); commentary (García Álvarez 2007; Shih
2011, 2018; Hurtado Albir and Olalla-Soler 2016); commented translation
(Presas 2012); Integrated Problem and Decision Reporting (IPDR) (Gile
2004); Integrated Translator’s Diary (ITD) (Orlando 2012); guided commentary (Norberg 2014); logs (Kelly 2005; Angelone 2015); semi-structured
worksheet3 (Lee-Jahnke 2005); and translation report (Hurtado Albir 2015;
Galán-Mañas and Hurtado Albir 2015).
     The situation was made even more problematic when many Chinese trainers began to use this tool in class. Here is a list of the dizzying array of Chinese names: 翻译述评 (translation commentary; Chen and Zhang 2011); 评注式翻译 (commented translation; Ke and Li 2012; Li and Ke 2013a, 2013b); 译者注 (translator’s notes; Li 2009); 翻译日志 (translation journal; Wu 2014; Cheng and Wu 2016); 翻译实践报告 (translation practice report; Mu et al. 2012; Sun and Ren 2019; Li C. 2021); 评注 (annotation; Qiao 2016); 翻译评述 (translation review; Huang 2018).
     As Muñoz Martín (2017) puts it:
     We need to agree on the use of basic terminology, including the name of the
     field and of contending frameworks. Names are not innocent in that they support certain view or shift boundaries to certain limits. (567)

To our knowledge, Shih (2018) is the only scholar to differentiate in English between “commented translation”, “footnotes”, “annotated text for translation” and “translation annotation”. She calls for a more uniform use of terms or, at least, for the use of the relevant terms to be clarified so as to avoid confusion.
       In the same vein, the usefulness of theory to translation protocol writing has also been a moot point. We see no mention at all of theories in some anecdotal accounts, and a strong preference for theoretical discussions in others. Trainers themselves should draw a distinction between metalanguage and theories. As is suggested by Shih (2011) and Sun and Ren (2019), trainers who require theories to be included in written protocols should be aware of a tendency among some students to use a certain theory as the go-to option, irrespective of whether it is actually suitable for the translation at hand.

3 The original German term that Lee-Jahnke uses for the translation report is “Lastenheft zur Übersetzung des Textes”.

2.8 Are writing guidelines and training needed?

Differences also exist in respect of whether guidelines and training should
be offered. Guidelines specify for students what must be included in their written protocols and in what order. In both Gile’s IPDR
(2004) and Orlando’s ITD (2012), neither cues nor models are provided for
writing. Students are encouraged to give personalized accounts. In contrast,
some trainers show a strong preference for guidelines. Sewell (2002) proposes 13 theoretical approaches. García Álvarez (2008) lists 18 guidelines, though she also concedes that students should learn to apply them in a flexible and dynamic way. In between these two extremes, translation
questionnaires (Fernández and Zabalbeascoa 2012a, 2012b; Norberg 2014)
and the worksheet used by Lee-Jahnke (2011, 2015) have provided cues,
prompts and pointers for students to consider when reporting.
       Norberg (2014) believes that the way in which guidelines are formulated can determine which aspects students will discuss. He tracks the quality of the same student cohorts over the course of two consecutive semesters, under different guideline requirements, and finds that his students develop better metacognition and argumentation skills with more detailed instructions. However, we should also be aware that the increased protocol quality
could also be attributed to the improvement of students’ general translation
competence rather than to the sophistication of guidelines.
       Some trainers provide training in advance, before asking students to engage in protocol writing. According to Orlando (2012), in his school, not only do students and new instructors receive specific instruction at the beginning of each semester on the use of assessment grids and the ITD; staff meetings are also organized to consider and discuss the tools and the benefits of such practice.
       To sum up, guidelines and training should be provided to set students on the right track but, at the same time, they should not be so prescriptive as to put protocol reporting in a straitjacket. If we want truthful reflections to occur, students should be given room to express their thoughts and decision-making processes freely.

2.9 Should there be a time limit and word limit?

In terms of time limits, García Álvarez (2007) introduces a one-hour limit on reporting and asks students to be concise in explaining their strategies, though she advises trainers to make adjustments according to the actual situation. Hurtado Albir (2015) asks students to specify the time they spent
on translating in addition to writing on problems, documentary resources
and the reasons behind their decisions. Lee-Jahnke (2005) is particularly detail-oriented and asks students to write down their estimated and actual preparation time, their estimated and actual translation time, as well as their estimated and actual report-writing time.
      Only a couple of researchers have broached the issue of a word limit. Norberg (2014) indicates that the reports his students write have to be approximately 300 words long in the first semester and 600 words long in the second. Lee (2015) asks her students to write a minimum of 500 words. China’s Master of Translation (MT) graduation thesis, as required by the national MTI program regulator, must be at least 5000 words in length.
      It is necessary for trainers to time students’ average translating speed while they are still in training, because “quality”, as defined in the professional world, not only means a good product, fit for purpose, but also implies that translators do not miss deadlines (Angelone and Marín García 2017). It is also important to measure their reporting time, so as to build an awareness of efficiency; otherwise, students may leave protocol reporting to the last minute, jeopardizing truthful recall.
       However, we beg to disagree on the use of a word limit. A total word count requirement is understandable for a graduation thesis, but for reports on class translation assignments, the quality of reporting does not necessarily correlate with the number of words written. If students perceive translating and reporting as a truly beneficial learning experience, they will have a natural urge to write down their reflections; if they do not learn anything and are not engaged at all, the imposition of a quota will only turn the job into drudgery.

2.10 Who assess(es)?

When students submit their written protocols, who will provide feedback:
the trainer alone, fellow classmates, or both? Regarding the sequence in
which the protocols are to be read, should they first be read by the trainer
and then by fellow classmates, or in reverse order?
      Conventionally, translation trainers are the first and final readers and
assessors of written protocols. Both Gile’s IPDR and Orlando’s ITD fall into
this category. However, translation trainers should not be the sole arbiters
or the only ones to summarize good and bad solutions. Fox reads students’ diaries herself; she then distributes the diaries and asks students to comment publicly on the process of translation in peer conferencing, which “reinforces their awareness through in-class discussion” (2000: 126).
      We believe retrospective protocols should involve peer-assessment
and peer-feedback, in addition to teacher assessment. The input provided
by fellow students, who have done the same translation, may produce alternative perspectives and further room for negotiation of meaning. Wang
and Han (2013) conduct a study on how their students perceive the benefits
of being 1) a feedback provider, 2) a feedback recipient and 3) a peruser of
other students’ work. They find that the students do appreciate peer feedback
as a valuable activity that aids their own learning. If we take the argument
further, it is worth carrying out research to see how students perceive the
benefits of assessing and providing feedback on their fellow students’ written
protocols. Rather than trainers being the sole providers of comments, students could also be tasked with summarizing what they believe to be the best solutions and the typical mistakes and errors, and could present their findings to the class.

2.11 How are written retrospective protocols assessed?

With regard to the assessment rubrics used, it is important to ask whether
retrospective protocols are assessed separately from, or together with, the
translation. In either case, trainers should think about what criteria are used
and what share of the final mark awarded to students is devoted to reflection
and composition.

      If translation protocols are assessed on their own merit as an academic
argumentative composition written by a learner of both the target language
and translation, then the criteria should include, but not be limited to, those
relating to students’ command of the language in which the reporting is
done, problems correctly identified and solved, the logic of reasoning and
justification, depth of reflection and general academic writing skills. Drawing on her interviews with ten UK university translation trainers, Shih (2018) summarizes the marking criteria in a table (see Table 1).

Tab. 1: Summary of marking criteria based on Shih’s interview of ten UK trainers of
postgraduate translation programs (2018: 306)

 Categories of marking criteria              Marking criteria
 Essay-related criteria                      clarity and consistency
                                             critical analytic ability
                                             acknowledgment of references
                                             use of theories/reading
 Translation commentary-specific criteria    analysis of ST intention and TT readership
                                             sensitivity of cultural transfer
                                             ability to justify solutions to problems
                                             awareness/consideration of problems
                                             formation of overall translation strategy
                                             specific translation strategy

It is worth pondering whether those who produce good commentary also
produce good translations. We have read one study that confirms such a
correlation (Fernández and Zabalbeascoa 2012a), but there may be exceptions, because the criteria used in judging the quality of a translation are,
after all, different from the criteria used in judging the commentary.

      Regarding marking weightings, even though most of Shih’s interviewees regard commentary as a stand-alone piece, she notes that trainers in the UK tend either to weight translation and commentary equally, or to give commentary a much lower weighting, ranging from 20% to 40%. Generally, she finds that commentary was regarded as:
     a periphery or supplementary information for marking translation, simply
     because translation commentary can transcend the boundary and assess many
     different kinds of skills and competence that was well beyond the primary aim
     of a practical module. (2018: 306)

In the case of translation protocols assessed as a supplement to a translation task, a couple of trainers have listed the rubrics they use. For example, Hurtado Albir (2015) assigns 20% to the report’s quality, alongside 10% devoted to the analysis of the brief and 70% to translation quality. Of the 20% given to reports, she further assigns 80% to analysis and 20% to composition. Adab (2000) mentions that the weighting she applies is 60% for the target text and 40% for the annotations.
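      Expressed as shares of the final mark, Hurtado Albir’s nested weighting works out as follows (the arithmetic is ours, shown only to make the scheme concrete):

      translation quality:        70% of the final mark
      analysis of the brief:      10% of the final mark
      report (analysis):          80% of 20% = 16% of the final mark
      report (composition):       20% of 20% = 4% of the final mark
      total:                      70% + 10% + 16% + 4% = 100%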
      All of the above examples relate to individual written protocols. Kelly
(2005) offers a way of dividing marks in group reporting: each group is
awarded a numerical grade multiplied by the number of members, and each
group is authorized to share the points between its members, as long as there
is a maximum differential of 15% between the highest and the lowest grades.
Delegating this grading power to students is a means of motivating them
and it relieves the teacher of the fear of marking unequally and unfairly.
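      A hypothetical case may make this mechanism concrete, on one reading of the 15% differential as measured against the highest individual grade (the numbers are ours, not Kelly’s):

      group grade: 80/100 for a group of four members
      pool to distribute: 80 × 4 = 320 points
      one admissible split: 85 + 82 + 79 + 74 = 320
      differential: 85 − 74 = 11 points ≈ 13% of the highest grade, within the 15% cap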
      In summary, it seems that protocol reporting is used more as a reference tool that helps teachers understand students’ translation processes.
The emphasis should be laid on documenting what it is that students feel
must be reported. They should be empowered to try out various strategies
with the appropriate justification, while learning to translate without fear
of being penalized for adopting unconventional approaches.

2.12 Can written retrospective protocols be trusted?

Inconsistencies may exist between the protocols that students write and
the product that they deliver. In an empirical study, Angelone (2015) finds
that, although stylistic errors were well-documented in the IPDR logs, they
still appeared in the product. He suspects that documentation does not
necessarily result in fewer errors and that some students may have falsified
their reports.
      Cognitive science and the sciences of learning provide some ready explanations as to why this may happen. Hansen, for one, points out that “the
verbal report of a subject comprises only a fraction of all the thoughts during
a process, and only those that the subject can single out are encoded into
verbal form” (2005: 516). Göpferich and Jääskeläinen (2009) also warn about
the validity and reliability of different verbal report procedures because the
problems that student translators encounter in the process are so ill-defined that neither they nor their teachers have pre-determined procedures for solving them, let alone unambiguously correct answers. Human
memory may sometimes unintentionally manipulate information and affect
its accurate retrieval. According to Boud (2001), there is a clear separation
between writing for learning and writing for assessment purposes. It is
likely that students choose to falsify reports because they wish to portray
themselves in the best possible light.
      All of these explanations provide evidence that verbal protocols should
not be written exclusively as a tool for assessment by teachers. They also
clarify to some extent why immediate retrospective protocols are often
used to minimize the risk of distortion of data in experiments (Dam-Jensen
and Heine 2009). For all intents and purposes, trainers should be aware of
the possibility of inconsistencies and should continue to do all they can to
encourage truthful reporting.

3. Conclusion

We have endeavoured to paint a picture of the diverse approaches to the design of written retrospective protocols. While it is impossible to identify best practice in absolute terms, it is easy to see the gaps in our knowledge of this tool. Trainers need to compare notes and possess a preliminary reference framework in order to teach and assess students effectively. As Shih (2018) suggests, we also need to try out different approaches through action research, and should benefit from an increasing awareness of their deficiencies and strengths.
      The synthesis also reveals that research on written retrospective protocols is still dominated by trainers’ anecdotal accounts detailing what they believe to be the best format and components of the tool. Experimental studies on this topic tend to pit different data elicitation tools against each other to see which one produces the most reliable data (e.g. Dam-Jensen and Heine 2009; Angelone 2015). There have also been some longitudinal efforts to track the development of student translation competence by overall qualitative evaluation (e.g. Norberg 2014) or by coding, counting and analysing the number and nature of the problems reported over a semester or semesters (e.g. Wu 2014; Cheng and Wu 2016).

     Finally, while the use of written retrospective protocols may have
been dwarfed in recent years in experimental studies by the use of process
recordings, keystroke logging and eye-tracking, their value cannot easily
be dismissed in real pedagogical settings. This is because, with the rise
of the 4EA cognition paradigm, translation is now increasingly viewed as
an embodied and situated cognitive activity (Risku 2002; Risku and Windhager 2013). Translators need to collaborate with partners, with different types of technology (O’Brien 2011) and with a shared repository of resources (e.g. translation memory and terminology bank) (Li C. 2021). The greatest strengths of written protocols4 are their ease of application, their higher ecological validity and, most importantly, their capacity to capture information related to person-to-person collaboration, which keystroke logging and eye-tracking fail to record. This again provides a strong rationale for trainers to use different tools in triangulation to better understand students’ acquisition of translation competence.

Bibliographical references

Adab, Beverly (2000) “Evaluating translation competence”, in Developing
    Translation Competence. Ed. by Christina Schäffner and Beverly Adab,
    Amsterdam/Philadelphia, John Benjamins, pp. 215-228.
Almanna, Ali (2016): The Routledge Course in Translation Annotation:
    Arabic-English-Arabic, London and New York, Routledge.
Angelone, Erik (2015) “The impact of process protocol self-analysis on
    errors in the translation product”, in Describing Cognitive Processes
    in Translation: Acts and Events. Ed. by M. Ehrensberger-Dow, B. Eng-
    lund Dimitrova, S. Hubscher-Davidson and U. Norberg, Amsterdam/
    Philadelphia, John Benjamins, pp. 105-124.
Angelone, Erik, and Álvaro Marín García (2017) “Expertise acquisition
    through deliberate practice”, Translation Spaces, 6:1, pp. 122-158.

4 Collaborative translation protocols, which can be recorded on online chat platforms, as
  students collaborate with each other in group projects, can also be seen as a pertinent
  example (e.g. Pavlovic 2007; Kiraly, Massey and Hofmann 2018; Li R. 2021).
Bergen, David (2009) “The role of metacognition and cognitive conflict in
      the development of translation competence”, Across Languages and
      Cultures, 10:2, pp. 231-250.
Boud, David (2001) “Using journal writing to enhance reflective practice”,
      in Promoting Journal Writing in Adult Education. New Directions
      in Adult and Continuing Education. Ed. by L. M. English and M. A.
      Gillen, San Francisco, Jossey-Bass, pp. 9-18.
Chen, Lin, and Yan Zhang (2011) “翻译硕士专业学位论文“翻译述评”的撰写模式
      研究 (Translation commentaries used in graduation theses of China’s MTI
      programs)”, Chinese Translators Journal, 32: 6, pp. 46-49.
Cheng, Si, and Qing Wu (2016) “从问题解决视角分析学习日志中的笔译
      能力发展动态 (Dynamic competence growth seen in student
      translation journals from the problem-based lens)”, Chinese Translators
      Journal, 37: 1, pp. 51-57.
Dam-Jensen, Helle, and Carmen Heine (2009) “Process research methods
      and their application in the didactics of text production and transla-
      tion: Shedding light on the use of research methods in the university
      classroom”, Trans-kom Journal of Translation and Technical Commu-
      nication Research, 2:1, pp. 1-25.
Delisle, Jean (1998) “Le métalangage de l’enseignement de la traduction
      d’après les manuels”, in Enseignement de la traduction et traduction
      dans l'enseignement. Ed. by Jean Delisle and Hannelore Lee-Jahnke,
      Ottawa, Les Presses de l’Université Ottawa, pp. 185-242.
Fernández, Francesc, and Patrick Zabalbeascoa (2012a) “Correlating
      trainees’ translating performance with the quality of their metacog-
      nitive self-evaluation”, Perspectives, 20:4, pp. 463-478.
Fernández, Francesc, and Patrick Zabalbeascoa (2012b) “Developing trainee
      translators’ strategic subcompetence through metacognitive questi-
      onnaires”, Meta, 57:3, pp. 740-762.
Fox, Olivia (2000) “The use of translation diaries in a process-oriented
      translation teaching methodology”, in Developing Translation Com-
      petence. Ed. by Christina Schäffner and Beverly Adab, Amsterdam/
      Philadelphia, Benjamins, pp.115-130.
Galán-Mañas, Anabel, and Amparo Hurtado Albir (2015) “Competence
      assessment procedures in translator training”, The Interpreter and
      Translator Trainer, 9:1, pp. 63-82.
98                                                                       Rui Li

García Álvarez, Ana María (2007) “Evaluating students’ translation pro-
      cess in specialized translation: Translation commentary”, Journal of
      Specialised Translation, 7, pp. 139-163.
Gile, Daniel (2004) “Integrated problem and decision reporting as a trans-
      lator training tool”, Journal of Specialised Translation, 2, pp. 2-20.
Göpferich, Susanne (2009) “Towards a model of translation competence
      and its acquisition: The longitudinal study TransComp”, in Behind the
      Mind: Methods, Models and Results in Translation Process Research.
      Ed. by S. Göpferich, A. L. Jakobsen and I. M. Mees, Copenhagen,
      Samfundslitteratur Press, pp. 11-37.
Göpferich, Susanne, and Riitta Jääskeläinen (2009) “Process research into
      the development of translation competence: Where are we, and where
      do we need to go?”, Across Languages and Cultures, 10:2, pp. 169-191.
Hansen, Gyde (2005) “Experience and emotion in empirical translation
      research with think-aloud and retrospection”, Meta, 50:2, pp. 511-521.
Hansen, Gyde (2006) “Retrospection methods in translator training and
      translation research”, Journal of Specialised Translation, 5, pp. 2-41.
Huang, Lan (2018) “翻译硕士专业教学中翻译评述的撰写策略和必要性研究
      ——以英国威尔士斯旺西大学翻译课程为例
      (Strategies and necessity of writing translation commentaries: with
      Swansea University as a case study)”, Journal of Yantai Vocational
      College, 24:1, pp. 67-70.
Hurtado Albir, Amparo (2015) “The acquisition of translation competence.
      Competences, tasks, and assessment in translator training”, Meta,
      60:2, pp. 256-280.
Hurtado Albir, Amparo, and Christian Olalla-Soler (2016) “Procedures for
      assessing the acquisition of cultural competence in translator training”,
      The Interpreter and Translator Trainer, 10:3, pp. 318-342.
Ke, Ping, and Xiaosa Li (2012) “评注式翻译及其对翻译教学与研究的意义
      (Translation commentary and its implications for translator education
      and research)”, Foreign Languages Research, 4, pp. 78-83.
Kelly, Dorothy (2005): A Handbook for Translator Trainers, Manchester,
      St. Jerome.
Kenny, Mary Ann (2008) “Discussion, cooperation, collaboration: The
      impact of task structure on student interaction in a web-based trans-
      lation exercise module”, The Interpreter and Translator Trainer, 2:2,
      pp. 139-164.

Kiraly, Donald, Gary Massey, and Susanne Hofmann (2018) “Beyond
     teaching, towards co-emergent praxis in translator education”, in
     Translation-Didaktik-Kompetenz. Ed. by B. Ahrens et al., Berlin,
     Frank Timme, pp. 11-64.
Kolb, David A. (1984): Experiential Learning: Experience as the Source of
     Learning and Development, Englewood Cliffs, New Jersey, Prentice
     Hall.
Krings, Hans Peter (2005) “Wege ins Labyrinth – Fragestellungen und
     Methoden der Übersetzungsprozessforschung im Überblick”, Meta,
     50:2, pp. 342-358.
Lee, Vivian (2015) “A model for using the reflective learning journal in
     the postgraduate translation practice classroom”, Perspectives, 23:3,
     pp. 489-505.
Lee-Jahnke, Hannelore (2005) “New cognitive approaches in process-ori-
     ented translation training”, Meta, 50:2, pp. 359-377.
Lee-Jahnke, Hannelore (2011) “Trendsetters and milestones in interdiscipli-
     nary process-oriented translation: Cognition, Emotion, Motivation”, in
     CIUTI-Forum 2010. Global Governance and Intercultural Dialogue:
     Translation and Interpreting in a New Geopolitical Setting. Ed. by
     M. Forstner and H. Lee-Jahnke, Bern, Oxford and Wien, Peter Lang,
     pp. 109-152.
Lee-Jahnke, Hannelore (2015) “A coach for translation training”, in CIU-
     TI-Forum 2014. Pooling Academic Excellence with Entrepreneurship
     for New Partnerships. Ed. by M. Forstner, H. Lee-Jahnke and Ming-
     jiong Chai, Bern, Oxford and Wien, Peter Lang, pp. 221-252.
Li, Changshuan (2009): Non-literary Translation, Beijing, Foreign Language
     Teaching and Research Press.
Li, Changshuan (2021) “以实践报告展示翻译能力——论翻译硕士专业
     学位研究生翻译实践报告的写作 (Translation competence seen in
     practice report: How to write a good translation practice report)”,
     Chinese Translators Journal, 42: 2, pp. 72-79.
Li, Rui (2021): Enacting Authentic Collaborative Learning through Interns-
     hip Translation Projects in Translator Education: An Ethnographic
     Case Study. Unpublished PhD thesis, Shanghai, Shanghai International
     Studies University.
Li, Xiaosa, and Ping Ke (2013a). “关注以过程为取向的翻译教学——
     以评注式翻译和同伴互评为例 (Process-oriented translation pedagogy

     seen in translation commentary and peer review)”, Shanghai Journal
     of Translators, 2, pp. 46-50.
Li, Xiaosa, and Ping Ke (2013b). “过程教学法在翻译教学中的应用——
     以同伴互评和评注式翻译为例 (Application of process-oriented trans-
     lation pedagogy in translation commentary and peer review)”, Foreign
     Language Education, 34: 5, pp. 106-109.
Massey, Gary (2016) “Collaborative feedback flows and how we can learn
     from them: Investigating a synergetic learning experience in translator
     education”, in Towards Authentic Experiential Learning in Translator
     Education. Ed. by D. Kiraly et al., Tübingen, Narr Francke Attempo,
     pp. 177-199.
Massey, Gary, and Maureen Ehrensberger-Dow (2011) “Commenting on
     translation: implications for translator training”, Journal of Specialised
     Translation, 16, pp. 26-41.
Massey, Gary, and Maureen Ehrensberger-Dow (2013) “Evaluating transla-
     tion processes: Opportunities and challenges”, in New Prospects and
     Perspectives for Educating Language Mediators. Ed. by D. Kiraly,
     S. Hansen-Schirra and K. Maksymski, Tübingen, Narr Francke Attempo,
     pp. 157-180.
Moon, Jennifer A. (1999): Learning Journals: A Handbook for Academics,
     Students and Professional Development, London, Kogan Page.
Mu, Lei, Bin Zou, and Dongmin Yang (2012) “翻译硕士专业学位论文
     参考模板探讨 (Formats used in graduation theses of China’s MTI
     programs)”, Academic Degrees & Graduate Education, 4, pp. 24-30.
Muñoz Martín, Ricardo (2017) “Looking toward the future of cognitive
     translation studies”, in Handbook of Translation and Cognition.
     Ed. by J. W. Schwieter and A. Ferreira, Hoboken, Wiley Blackwell,
     pp. 555-572.
Norberg, Ulf (2014) “Fostering self-reflection in translation students. The
     value of guided commentaries”, Translation and Interpreting Studies,
     9:1, pp. 150-164.
O’Brien, Sharon (2011) “Collaborative translation”, in Handbook of Trans-
     lation Studies 2. Ed. by Yves Gambier and Luc van Doorslaer, Amsterdam/
     Philadelphia, John Benjamins, pp. 17-20.
Orlando, Marc (2011) “Evaluation of translations in the training of profes-
     sional translators at the crossroads between theoretical, professional

     and pedagogical practices”, The Interpreter and Translator Trainer,
     5:2, pp. 293-308.
Orlando, Marc (2012) “Training of professional translators in Australia:
     Process-oriented and product-oriented evaluation approaches”, in
     Global Trends in Translator and Interpreter Training. Ed. by S.
     Hubscher-Davidson and M. Borodo, London, Continuum, pp. 197-216.
PACTE Group (2017): Researching Translation Competence. Ed. by Amparo
     Hurtado Albir, Amsterdam/Philadelphia, John Benjamins.
Pavlovic, Natasa (2007): Directionality in Collaborative Translation Processes.
     Unpublished PhD thesis, Tarragona, Universitat Rovira i Virgili.
Presas, Marisa (2012) “Training translators in the European higher education
     area”, The Interpreter and Translator Trainer, 6:2, pp. 139-169.
Pym, Anthony (2009) “Using process studies in translator training. Self-dis-
     covery through lousy experiments”, in Methodology, Technology and
     Innovation in Translation Process Research. Ed. by S. Göpferich, F.
     Alves and I. M. Mees, Copenhagen, Samfundslitteratur, pp. 135-156.
Qiao, Jie (2016) “基于译者能力的翻译专业汉英笔译评分模式新探 (New
     assessment methods for translation competence evaluation used in
     Chinese-English Translation)”, Shanghai Journal of Translators, 5,
     pp. 67-72.
Risku, Hanna (2002) “Situatedness in translation studies”, Cognitive Sys-
     tems Research, 3, pp. 523-533.
Risku, Hanna, and Florian Windhager (2013) “Extended translation. A
     socio-cognitive research agenda”, Target, 25:1, pp. 33-45.
Robinson, Bryan J., Clara I. López Rodríguez, and Maribel Tercedor (2008)
     “Neither born nor made, but socially constructed: Promoting interac-
     tive learning in an online environment”, TTR, 21:2, pp. 95-129.
Robinson, Bryan J., Maria Dolores Olvera-Lobo, and Juncal Gutiér-
     rez-Artacho (2017) “The professional approach to translator training
     revisited”. , accessed 29
     September 2019.
Schön, Donald A. (1987): Educating the Reflective Practitioner: Toward a
     New Design for Teaching and Learning in the Professions, San Francisco,
     Jossey-Bass.
Sewell, Penelope (2002): Translation Commentary: The Art Revisited. A
     Study of French Texts, Dublin, Philomel.

Shei, Chris C.-C. (2005a) “Integrating content learning and ESL writing in
     a translation commentary writing aid”, Computer Assisted Language
     Learning, 18:1, pp. 33-48.
Shei, Chris C.-C. (2005b) “Translation commentary: A happy medium
     between translation curriculum and EAP”, System, 33, pp. 309-325.
Shih, Claire Y. (2011) “Learning from writing reflective learning journals
     in a theory-based translation module: Students’ perspectives”, The
     Interpreter and Translator Trainer, 5:2, pp. 309-324.
Shih, Claire Y. (2018) “Translation commentary re-examined in the eyes
     of translator educators at British universities”, Journal of Specialised
     Translation, 30, pp. 291-311.
Sun, Sanjun, and Wen Ren (2019) “翻译硕士学位论文模式探究 (Investi-
     gating graduation thesis formats of China’s MTI programs)”, Chinese
     Translators Journal, 40: 4, pp. 82-90.
Thelen, Marcel (2016) “Collaborative translation in translator training”,
     KSJ, 4:3, pp. 253-269.
Wang, Kenny, and Chong Han (2013) “Accomplishment in the multitude
     of counsellors: Peer feedback in translation training”, Translation &
     Interpreting, 5:2, pp. 62-75.
Wu, Qing (2014) “学习日志呈现的笔译能力发展进程及其对笔译教学的启示
     (Translation competence acquisition from learning journals and the
     implications for translation pedagogy)”, Chinese Translators Journal,
     35: 4, pp. 45-53.