Students’ Dynamic Assessment Via Online Chat

ANA OSKOZ
University of Maryland, Baltimore County

ABSTRACT

    While there is ample documentation on the use of synchronous computer-medi-
    ated communication (SCMC) in the foreign language classroom for instructional
    purposes (Beauvois, 1994, 1998; Beauvois & Eledge, 1996; Chun, 1994; Dar-
    hower, 2002; Kelm, 1992; Kern, 1995; Warschauer, 1996), research devoted to
    assessment in this area is quite rare (Heather, 2003; Oscoz, 2003). One reason for
    this lack of research is the process-oriented nature of SCMC that demands new
    research and evaluation tools (Furstenberg, Levet, English, & Maillet, 2001).
    This study explores the possibility of applying dynamic assessment (DA), which
    focuses on process rather than on the product, to SCMC. The study draws on the
    work of Antón (2003), who examined students’ performance in oral interaction
    following DA techniques and on Aljaafreh and Lantolf’s (1994) 5-level scale
    (based on the frequency and type of assistance provided to the learner) to as-
    sess learners’ language development in SCMC. The data presented in this study
    shows that the application of the 5-level scale makes it possible to obtain a more
    accurate picture of learners’ stage of development. In spite of the benefits of DA,
    the study also suggests that the traditional modes of assessment are still required
    to assess students’ performance in SCMC. As Johnson (2004) stated, both modes
    are needed to obtain a richer and more complete understanding of students’ lan-
    guage development.

KEYWORDS

Synchronous Computer-mediated Communication (SCMC), Dynamic Assessment (DA),
Assessment in SCMC, Process in SCMC

INTRODUCTION
There is ample documentation on the use of synchronous computer-mediated com-
munication (SCMC) in the foreign language classroom (Beauvois, 1994, 1997a,
1997b, 1998; Beauvois & Eledge, 1996; Chun, 1994; Cononelos & Oliva, 1993;
Darhower, 2002; Kelm, 1992, 1996; Kern, 1995; Nicholas & Toporski, 1993;
Warschauer, 1996, 1997). Most studies, however, address only instructional appli-
cations. Research devoted to assessment in this area is quite rare (Heather, 2003;
Oscoz, 2003) and limited in scope. Instructors evaluating students’ performance

CALICO Journal, 22 (3), pp. 513-536.                          © 2005 CALICO Journal
in SCMC have primarily looked at whether students participated in completion of
the assignments (Kelm, 1996).
   Two possible reasons emerge for the deficiency in research on student perfor-
mance in SCMC. First, when reviewing research on assessment in technology, one
notes that most technology applications in second language (L2) assessment are
related to standardized testing and adaptive tests. Published research has focused
primarily on development of computer-based testing (CBT) or computer-adaptive
testing (CAT), or on comparisons between CBT and paper-and-pencil tests. Stud-
ies have analyzed item-bank construction, item selection, student performance,
scoring, and test delivery and administration (Brown, 1997; Chalhoub-Deville,
1999; Chalhoub-Deville & Deville, 1999; Dunkel, 1999). Some researchers have
called for a widening of the scope of this research on technology and L2 testing.
Laurier (2000), for example, criticized the “domination of adaptive testing in the
research on the use of computers for language testing and assessment” (p. 98) and
called for greater attention to the link between technology assessment and the
instructional process. Thus, if we agree with Laurier’s critique, it seems logical to
assess students’ performance in SCMC.
   Second, assessment in SCMC seems a difficult task to undertake because stu-
dents’ work is situated within a new medium—network-based communication,
within a new learning environment—collaborative rather than individual, and it
is process- rather than result-oriented (Furstenberg, Levet, English, & Maillet,
2001). The pedagogical shifts (from individual to collaborative and from product
to process) “demand new evaluation tools and new research agendas that are both
congruent to the goals and the context” (Furstenberg et al., 2001, p. 92). Dy-
namic assessment (DA), which focuses on the learning process rather than on the
product, may serve as a useful framework for assessing students’ performance in
SCMC. Rather than focus on what learners know and can do at a given moment in
time as measured by their performance on a set of tasks, DA focuses on learners’
potential development.
   In this article, I propose that, through DA, students’ L2 performance can be as-
sessed in SCMC. The article begins with an overview of DA and how it is rooted
in Vygotsky’s theory of cognitive development, in particular, in the concept of
zone of proximal development (ZPD). I distinguish between DA and ‘static’ as-
sessment (SA). I focus on the work of Antón (2003), one of the few studies in L2
research that applies DA techniques to assess students’ performance. Aljaafreh
and Lantolf (1994), while not rooted in DA literature per se, discuss the difference
between learner’s actual development level and potential development level in
the L2 context and provide a 5-level scale to measure students’ development. The
5-level scale can also be applied to measure students’ performance in pair inter-
action (Ohta, 2000). I also review research in L2 assessment in SCMC (Heather,
2003; Oscoz, 2003) and point out how this research has ignored that SCMC is a
process- and collaboration-oriented medium that demands new research agendas
and new assessment tools (see Furstenberg et al., 2001 above). I will then review
findings in SCMC that provide evidence of how learners guide each other in the
process of solving linguistic problems (Lee, 2002; Pellettieri, 2000). Finally, I present and
analyze data collected from SCMC following Aljaafreh and Lantolf’s (1994) 5-
level scale. Contrary to traditional testing, which measures learners’ actual level
as demonstrated by their performance on specific tasks or tests, DA focuses on
learners’ potential development as seen in the interactions that take place in the
ZPD. This article does not propose DA to replace time-honored testing practices.
Rather, it demonstrates how each type of test has a purpose in assessing learners’
performance (Johnson, 2004) and proposes that using a combination of both types
of tests results in a more complete picture of learners’ interlanguage develop-
ment.

DYNAMIC ASSESSMENT
DA refers to the “interaction between an examiner-as-intervener and a learner-
as-active participant, which seeks to estimate the degree of modifiability of the
learner and the means by which positive changes in cognitive functioning can
be induced and maintained” (Lidz, 1987, p. 4). DA has its roots in Vygotsky’s
theory of cognitive development. In particular, it is based on the concepts of the zone
of proximal development (ZPD) and mediation.
   Vygotsky (1978) defined the ZPD as “the distance between the actual develop-
ment level as determined by independent problem solving and the level of poten-
tial development as determined through problem solving under adult guidance or
in collaboration with more capable peers” (p. 86). In his work, Vygotsky distin-
guished between functions that were already mature and functions that were in the
process of maturing (Minick, 1987). The already mature functions are manifested
in the child’s independent cognitive activity and can be assessed by traditional
assessment techniques. The functions in the process of maturing are manifested
when the learner is working with an expert or a more capable peer. These func-
tions can be assessed in the ZPD. Vygotsky did not see the analysis of the ZPD as
a means of assessing learners’ potential or learning efficiency because, as Minick
(1987, p. 127) pointed out, Vygotsky was convinced that “although a child might
attain a more advanced level of mental functioning in social interaction than when
acting alone, the child’s current state of development skills limits the kinds of
behavior that are possible.” Thus, by analyzing the ZPD, it is possible to obtain a
more accurate picture of the learners’ actual level of development.
   DA, then, focuses on the learning processes and serves as a means of measur-
ing the ZPD and is opposed to SA that focuses on already learned products (Lidz,
1987). There are several theoretical and methodological differences between DA
and SA. From a theoretical point of view, the main difference between DA and SA
lies in a different understanding of the future (Poehner & Lantolf, 2003). To ex-
plain this difference, Poehner and Lantolf drew on Valsiner’s work (2001) in de-
velopmental psychology. Valsiner distinguished three models that theorize about
the future: essentialistic models, past-to-present models, and present-to-future
models. Because of their focus on the process of development, Poehner and Lan-
tolf (2003) focused on the difference between past-to-present and present-to-fu-
ture models. According to Valsiner (2001), in the past-to-present models, the past
life of an organism leads to its present stage of functioning. Thus, “development
is seen as a sequence of stages” (Valsiner, 2001, p. 86) that “a person is assumed
to pass through on the way to some final stage” (Poehner & Lantolf, 2003, p. 3).
As Valsiner claimed, “the underlying assumption that is axiomatically accepted
here is that the dynamic changes of the past that have led to the present can also
explain the future” (p. 86, italics in original). Applied to assessment, for example,
achievement tests are designed to know how well students are meeting the ex-
pectations of a program (Bachman, 1990). Achievement tests are not intended to
make predictions of the future, but rather to know exactly what the learner can
accomplish at one specific moment in time. However, as Poehner and Lantolf
continued, testing is also used to make inferences about the future, and still in
those cases, “past-to-present models of development are typically employed” (p.
3). Proficiency tests, for example, “assume that the future and the present are
equivalent” (p. 4) and that learners’ future performance is understood to be a close
reproduction of the performance on the test.
   Present-to-future models, on the other hand, focus on “the future-in-the-mak-
ing” (Valsiner, 2001). These models “focus on the processes of emergence—or
construction—of novelty” (p. 86). Thus, the focus of these models is on the new,
that which could not be accomplished before. These present-to-future models al-
low us to see the development before it occurs and to participate actively in the
developmental process itself (Poehner & Lantolf, 2003). According to Poehner and
Lantolf, present-to-future models predict the future on the basis of what a per-
son can accomplish in cooperation with other human agents. In the area of testing,
“ability is not seen as a stable trait of an individual but as a malleable feature of
the individual and the activities in which the individual participates” (Poehner &
Lantolf, 2003, p. 4). Thus, “performance on an aptitude test of any type, including
language, is not complete until we observe how the person behaves in response
to assistance” (p. 4). In this perspective, it is necessary to investigate the ZPD in
order to fully understand the individual’s potential to develop in the future. It is
important here that “while gaining a perspective on the person’s future, we are at
the same time helping the person attain the future” (p. 4). In this case, a learner’s
performance is not that of the individual, but rather a product of the dialogue
between interactants. Therefore, while past-to-present models observe learner’s
performance up to one specific moment in time, present-to-future models allow
examination of what learners could accomplish in the future.
   In addition to these epistemological differences, Sternberg and Grigorenko
(2002) also distinguished three major methodological differences between SA
and DA. The first refers to the respective roles of static states versus dynamic
processes. While SA focuses on the developed stage, DA focuses on the devel-
oping process. The second difference refers to the role of feedback. In SA, “an
examiner presents a graded sequence of problems and the test-taker responds to
each of the problems” (Sternberg & Grigorenko, 2002, p. 28), and there is little or
no feedback until the assessment is completed. In DA, however, either implicit or
explicit feedback is provided. The third difference involves the relation between
the examiner and the examinee. In SA, the examiner is as neutral as possible
toward the examinee. In DA, the traditional one-way test situation is modified
and becomes a two-way interactive relationship between examiner and examinee.
These differences between DA and SA are especially relevant to assessment in
SCMC. SCMC is a process-oriented and collaboration-oriented medium in which
learners interact with one another. In this interaction, learners become guides for
one another (Beauvois, 1992), and provide each other either explicit or implicit
feedback that will potentially provide a more accurate picture of the learners’
language development.

DYNAMIC ASSESSMENT IN L2
DA, or the idea of focusing on the process rather than on the product, has been
recently applied in the L2 context (Antón, 2003; Schneider & Ganschow, 2000).
In addition, although not directly related to the literature in DA, Aljaafreh and
Lantolf (1994) and Ohta (2000) discussed the notion of examining students’ po-
tential level of development. These studies provide us with the basis for pursuing
DA in SCMC.
  Antón (2003) reported on the work done on the use of DA interactive pro-
cedures to place Spanish majors in advanced Spanish language classes. In her
study, students were evaluated on pronunciation, fluency, grammar, vocabulary,
and comprehensibility. Two students were asked to narrate a story in the past.
Initially, both students had problems using the past tense. One of the students,
however, when provided with feedback and the opportunity to narrate the story
again, was able to appropriately narrate the story using the past tense. The other
student, in spite of the interaction with the interviewer, was still unable to produce
the correct verb form without assistance. Thus, while both students would have
been placed in the same classes based on their initial performance, in reality they
were at different levels of potential development based on their interaction with
the interviewer and were, therefore, placed in different classes. While Antón was aware
of the limitations of the small sample of her study, she asserted “there is no doubt
that intervention during assessment results in rich information on their linguistic
capabilities that may be used for the development of individualized instructional
plans” (p. 15). DA techniques provide a deeper and more accurate understanding
of students’ interlanguage.
  Based on the idea that test scores are not guarantees that two learners are “at the
same stage in their interlanguage growth” (p. 473), Aljaafreh and Lantolf (1994)
advocated for assessment practices that include “learners’ potential level of de-
velopment” (p. 473). The potential level of development is examined through a
microgenetic analysis. Wertsch and Stone (1974, cited in Donato, 1994) defined
microgenesis “as the gradual course of skill acquisition during a training ses-
sion, experiment or interaction” (p. 42). To determine the microgenetic develop-
ment in the learner’s interlanguage, Aljaafreh and Lantolf developed a 5-level
scale utilizing two principles: the frequency and the type of assistance required
by the learner during the dialogic interaction with a tutor. The 5 proposed levels
represented different development stages: from other-regulation—when learners
rely on the tutor’s help to notice and correct an error—(levels 1-3) to self-regula-
tion—in which feedback is self-generated and automatic—(level 5), passing through
partial regulation—when learners are able to correct an error with minimal or no
obvious feedback—(level 4).
   Aljaafreh and Lantolf’s 5-level scale was applied by Ohta (2000) to examine
two Japanese learners’ interaction and microgenetic process. The learners, Hal
and Becky, engaged in a form-focused collaborative dialogue during a transla-
tion task. Ohta found that through the conversation with her partner, Becky—the
less proficient of the two learners—moved to Aljaafreh and Lantolf’s Level 4,
where she was able to correct a few of her own errors. Additionally, Ohta found
that Hal—the more proficient partner—also evidenced development through the
process of interaction. Therefore, Ohta’s study shows that learners’ interaction
emerges in a ZPD that promotes L2 development. Furthermore, by applying Al-
jaafreh and Lantolf’s 5-level scale, Ohta was able to observe the different stages
of potential development of both learners.
   The relevancy of these two studies to L2 assessment is that Aljaafreh and Lan-
tolf’s work provides a scale to assess learners’ interlanguage development that
can be applied to both examiner-examinee and examinee-examinee interaction.
According to Johnson (2004), the principal theoretical assumption behind a scale
using Aljaafreh and Lantolf’s two principles of type and frequency of assistance
is that “the more explicit assistance the candidate requires, the less advanced the
candidate is in his or her potential development within the ZPD” (p. 186). Using
this scale or one similar to it, learners’ potential development can be rated accord-
ing to the level of assistance required to complete the tasks successfully.

ASSESSMENT IN SCMC
Originally, SCMC was used in the L2 classroom because it provided a nonstress-
ful environment that encouraged students to participate in the target language
(Beauvois, 1992, 1993, 1994; Kelm, 1992). Because students and teachers were
more concerned with the content of what they said than with the accuracy of
their production, Kern (1995) suggested that accuracy was not one of the main
goals of SCMC. In addition, given that SCMC seemed an appropriate environ-
ment in which students could express their opinions freely, assessment in this
medium might have seemed cumbersome because “assigning a letter grade to an
assignment that was designed to allow students to openly communicate feelings
and opinions is especially difficult” (Kelm, 1992, p. 453). This is probably the
reason that, until recently, teachers have typically given students either full credit
for participation or no credit at all (Kelm, 1996). However, there is some recent
research that examines how to assess students’ performance in SCMC (Heather,
2003; Oscoz, 2003).
  Heather (2003) examined the validity of making inferences from computer-
mediated discourse to oral discourse by comparing 24 third-semester French stu-
dents’ performance on two tests: a computer-mediated communicative test and
a group oral exam. For his study, Heather compared students’ performance in a
series of tasks both in SCMC and small group interaction. He found that although
students’ scores were not statistically different, their discourse differed in linguis-
tic and interactional features. Even though the results of the study did not support
the interchangeability of SCMC assessment for face-to-face assessment, Heather
did not rule out the use of computer-mediated communicative testing. Instead
of considering SCMC as an alternative to oral assessment, Heather argued for a
“better convergence and integration of instruction and assessment in classes that
utilize CMC” (p. 230) and suggested that testing using this electronic medium
should be understood within the instructional context in which it is used.
   Given that SCMC is an integral part of teaching practices (Bearden, 2001; Beau-
vois, 1992; Blake, 2000; Chun, 1994; Fidalgo-Eick, 2001; Fernández-García and
Martínez-Arbelaiz, 2003; Kelm, 1992; Kern, 1995; Lee, 2002; Pellettieri, 2000;
Sotillo, 2000; Warschauer, 1996), Oscoz (2003) considered it necessary to include
in online chat the tasks that are normally used in the classroom to assess students’
performance. Oskoz compared 30 fourth-semester Spanish students’ performance
in SCMC in two tasks that are frequently used in this medium—jigsaw and free
discussion—on four different measures: quantity of output, syntactic complex-
ity, accuracy, and negotiation of meaning. As expected from previous research in
second language acquisition (Foster & Skehan, 1996; Pica, 1987; Pica, Young, &
Doughty, 1987; Robinson, 1995) and language testing (Chalhoub-Deville, 1995a,
1995b; Henning, 1983; Shohamy, 1983), students performed differently on these
four measures depending on the task. Students scored higher on quantity of output
and syntactic complexity in free discussion and higher on accuracy and negotiation
of meaning in jigsaw tasks. Only in reference to negotiation of meaning did Oskoz
acknowledge the extent to which interaction affects students’ performance.
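Two of the four measures Oskoz used can be approximated directly from a chat transcript. The sketch below is only an illustration of that idea, not the instrument from the original study: it derives rough proxies for quantity of output and (very crudely) syntactic complexity, and treats accuracy and negotiation of meaning as hand-coded inputs, since those normally require a human rater. The transcript format, sample turns, and function names are assumptions introduced here for illustration.

from collections import defaultdict

def chat_measures(turns, hand_coded=None):
    """Rough per-speaker measures from one chat session.

    turns: list of (speaker, text) tuples.
    hand_coded: optional dict of manually coded counts per speaker,
    e.g. {"Learner A": {"errors": 2, "negotiation_episodes": 1}}.
    """
    words = defaultdict(int)
    turn_count = defaultdict(int)
    for speaker, text in turns:
        words[speaker] += len(text.split())   # quantity of output as a word count
        turn_count[speaker] += 1

    report = {}
    for speaker in words:
        report[speaker] = {
            "quantity_of_output": words[speaker],
            # A crude stand-in for syntactic complexity; published studies use
            # clause- or T-unit-based ratios coded by hand.
            "mean_turn_length": words[speaker] / turn_count[speaker],
        }
        if hand_coded and speaker in hand_coded:
            report[speaker].update(hand_coded[speaker])  # accuracy, negotiation, etc.
    return report

# Hypothetical two-turn fragment, for illustration only.
sample = [("Learner A", "no estudio los fines de semana"),
          ("Learner B", "yo tampoco, escribo cartas y leo el periodico")]
print(chat_measures(sample))

Such automatic counts, of course, only complement the manual coding on which measures like accuracy and negotiation of meaning depend.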
   Swain (2001) stated that the use of pair and group tasks for testing brings new
assessment needs. In small groups, Swain pointed out, “the performance is jointly
constructed and distributed across participants” (p. 296). Research in small-group
and pair testing (see, among others, Berry, 2000, cited in Swain, 2001; Fulcher,
1996; O’Sullivan, 2002) has found that interaction with another participant ei-
ther supports or handicaps test-takers’ performances. However, in spite of the un-
derstanding that interaction between individuals affects the other’s performance,
research that examines how learners (or examinees) can benefit in their interlan-
guage development from the help of another interactant is scarce (Antón, 2003).
Similarly, studies that assess students’ performance in SCMC provide students’
scores based on their language production without acknowledging the process-
oriented nature of this medium.
   In their discussion of assessment in computer-mediated communication, Furst-
enberg et al. (2001) stated that shifts in pedagogy from an individual orientation
to a collaborative one as well as from a product-oriented medium to a process-
oriented one result in the imperative for new evaluation tools and a new research
agenda that are congruent with the goals and context of instruction. DA, focusing
on the process rather than on the product, presents itself as an alternative approach
to assess students’ performance in SCMC. Donato (1994) and Ohta (2000) have
already shown that the ZPD occurs not only in collaboration with an expert, but
also in peer interactions. SCMC also creates an environment in which learners
become guides for one another in a process of scaffolding in the ZPD (Beauvois,
1997a). The collaborative nature of SCMC (given that students interact in pairs
or small groups) reduces some of the potential problems with the use and admin-
istration of DA in the classroom, such as time needed to conduct the assessment
(Antón, 2003). It is also possible to go back to students’ transcripts to provide a
more accurate diagnosis of learners’ potential level of development.

PROCESS IN SCMC
Beauvois (1997a) pointed out that SCMC creates “a new manifestation of the
process of ‘scaffolding’ and Vygotsky’s theory of ‘ZPD’” (p. 166). When learn-
ers discuss ideas in the networked computer environment, their thoughts become
visible on the screen, thus making it “possible for students to become guides for
another” (p. 166). Through the collaborative construction of knowledge, “the pro-
cess of production changes” (p. 166), and learners are able to achieve a perfor-
mance that they are unable to accomplish by themselves.
   Extensive research has been undertaken in the area of SCMC (Bearden, 2001;
Beauvois, 1992; Blake, 2000; Chun, 1994; Fidalgo-Eick, 2001; Fernández-Gar-
cía & Martínez-Arbelaiz, 2003; Kelm, 1992; Kern, 1995; Lee, 2002; Pellettieri,
2000; Sotillo, 2000; Warschauer, 1996). However, most of these studies tended to
compare students’ performance in SCMC to students’ oral performance (Bearden,
2001; Warschauer, 1996). Researchers initially investigated whether the interac-
tion patterns found in oral exchanges regarding negotiation of meaning would
transfer to the SCMC medium (Blake, 2000; Fidalgo-Eick, 2001; Fernández-
García & Martínez-Arbelaiz, 2003; Lee, 2002; Pellettieri, 2000), and whether
students would produce greater quantities and more syntactically complex lan-
guage in SCMC than in oral interaction (Chun, 1994; Kelm, 1992; Kern, 1995;
Warschauer, 1996). During this process, researchers became aware that the char-
acteristics of the medium influenced students’ performance in ways that were
different from oral interaction (Fernández-García and Martínez-Arbelaiz, 2002;
Lee, 2002). They realized that the characteristics of the medium, such as visual
saliency, allowed learners to help each other in the process of acquisition of L2
competence.
   Pellettieri (2000) investigated whether or not the negotiated interactions be-
tween dyads in SCMC fostered the provision of corrective feedback as well as
the incorporation of target-like forms in the subsequent dialogue. The analysis of
the data on 20 Intermediate-Spanish students showed that learners were provided
with both explicit and implicit feedback that pushed them to make modifications
to target-like forms. In addition, Pellettieri found that as students were producing
speech, they were also correcting themselves. Students even backspaced to make
syntactic elaborations, which, in turn, pushed their utterance to a more advanced
level of syntax. The visual saliency of the SCMC form enables learners to think,
see, and edit their own production, thereby possibly increasing the opportunities
for learners to notice their errors with minimal outside feedback and take subse-
quent responsibility for error correction.
   Lee (2002), aware of the value of the social interaction in SCMC, examined
the types of devices that learners used in their interactions. She found that her 34
third-year Spanish students worked collaboratively to construct knowledge and
provide feedback to each other. Lee observed that through collective effort, learn-
ers were able to successfully solve lexical and morphological problems, such as
the use of the preterit or the imperfect, depending on the context. Analysis of the
data also showed that students engaged in self-correction of their linguistic errors,
which suggested to Lee that self-correction might be more frequent in SCMC than
in oral interaction. That is, because the messages are displayed on the screen and
students can see what they have written, they are more likely to correct mistakes
when necessary. Although students frequently used incorrect forms without any
attempt to correct each other, and these forms did not prevent them from under-
standing each other or from continuing the conversation, Lee’s study shows how
SCMC provides an environment in which students “help each other to achieve a
performance that they typically cannot execute alone” (p. 276).
   Therefore, studies in SCMC show that it is possible to observe how students
in SCMC assist each other and work collaboratively to construct knowledge by
providing either implicit or explicit feedback to each other.

DYNAMIC ASSESSMENT IN SCMC
Given that SCMC is a process- and collaboration-oriented medium, it seems ap-
propriate to examine the extent to which learners acquire L2 competence through
social interaction. The data presented in this article comes from online classroom
activities conducted at a public university on the East Coast. Five intact classes
at three different levels (two classes of Elementary Spanish I and II and one class
of Intermediate Spanish I) from the winter 2005 session participated in a series
of online activities (jigsaw puzzles, information-gap activities, role plays, and
free discussions). The activities were tailored to the students’ different levels of
proficiency. For each task, students were given 10 minutes to read, underline,
and take notes and then 20 minutes to complete the task. These different types of
tasks were selected because they are commonly used in the classroom. To avoid
the systematic and construct-irrelevant effects of proficiency level, gender, or per-
sonal factors that might influence the results of the studies, the students were ran-
domly paired, and the same dyads were maintained over all tasks. Aljaafreh and
Lantolf’s (1994) 5-level scale was applied to the data (see Table 1).
Table 1
Levels of Internalization from Interpsychological to Intrapsychological Functioning*

 Level      Description
 Level 1    The learner is unable to notice or correct the error, even with
            intervention.
 Level 2    The learner is able to notice the error, but cannot correct it, even
            with intervention, requiring explicit help.
 Level 3    The learner is able to notice and correct the error, but only with
            assistance. The learner understands the assistance and is able to
            incorporate the feedback offered.
 Level 4    The learner notices and corrects an error with minimal or no
            obvious feedback and begins to assume full responsibility for error
            correction. However, the structure is not yet fully internalized since
            the learner often produces the target form incorrectly. The learner
            may even reject feedback when unsolicited.
 Level 5    The learner becomes more consistent in using the target structure
            correctly in all contexts. The learner is fully able to notice and
            correct his/her own errors without intervention.

* Adopted from Ohta (2000)
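
   Although the assignment of a level ultimately rests on the rater’s qualitative judgment, Table 1 can be operationalized when coding chat transcripts. The sketch below is a minimal illustration of that idea and is not part of the original study: the episode fields and the mapping rules are simplifying assumptions (for instance, it treats any correction obtained only through explicit help as level 3).

from dataclasses import dataclass

@dataclass
class ErrorEpisode:
    learner: str
    target_form: str            # the form at issue, e.g. a verb ending
    noticed_by_learner: bool    # did the learner notice the error at all?
    corrected: bool             # was the correct form eventually produced or accepted?
    assistance: str             # "none", "implicit", or "explicit"
    consistent_later: bool      # used correctly in later turns of the chat

def rate_episode(ep: ErrorEpisode) -> int:
    """Approximate mapping of one coded episode onto the 5-level scale in Table 1."""
    if not ep.corrected:
        # Level 2 if the learner at least noticed the problem, level 1 otherwise.
        return 2 if ep.noticed_by_learner else 1
    if ep.assistance == "explicit":
        return 3                # corrected only with explicit assistance
    # Corrected with minimal or no feedback: level 5 only if the form is stable.
    return 5 if ep.consistent_later else 4

# Hypothetical episode: a learner supplies the right verb ending only after
# the partner gives an explicit correction.
episode = ErrorEpisode(learner="Learner A", target_form="past-tense ending",
                       noticed_by_learner=False, corrected=True,
                       assistance="explicit", consistent_later=False)
print(rate_episode(episode))    # -> 3

   In practice, a rater would still weigh each episode in its conversational context, as the analyses of Examples 1 through 5 below do.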

  The fragments below were selected from Elementary Spanish I and II classes to
show that learners can help each other even at lower levels of proficiency.

Example 1 (Elementary Spanish I)
Alicia: yo comica!
        [I funny (feminine singular)]
Alicia: ***oops, tu comica
        [***oops, you funny (feminine singular)]
Brian: comico
        [funny (masculine singular)]
Alicia: si …
        [yes …]
Brian: mires por el masculino y femanino
        [(you) look for the masculine and feminine]
(see activity in Appendix A)

  In a previous turn, Brian had made a comment to which Alicia responded that
her partner, Brian, was a funny person. First, Alicia refers to herself (yo ‘I’), but in
her next turn she self-corrects by changing ‘I’ for ‘you.’ However, in both turns,
she assigns feminine gender to the adjective. Brian corrects the gender of the ad-
jective from feminine to masculine. Although Alicia acknowledges the correction,
Brian further explains that Alicia should pay attention to the gender of the referent.
The learners finished the task at this point, so it is not possible to know whether
Alicia would have incorporated Brian’s advice to attend to the gender agreement
in the future. In this case, Alicia is first able to notice one of her own errors with
no obvious feedback regarding the personal pronouns and moves from yo to tu.
It is not possible to know whether she internalized the structure, but at least it is
possible to observe that there was an error for which she takes full responsibility
(level 4). Alicia, however, does not notice that she produced the incorrect gender
in comica, and it is Brian who notices it and provides Alicia the assistance to cor-
rect the gender from comica to comico. Alicia understands the assistance, which
would imply she is at level 3 on Aljaafreh and Lantolf’s scale. Since they fin-
ished the conversation at this point, it is not possible to know whether Alicia was
able to incorporate the feedback into her interlanguage.
   In the following dialogue, John and Lori were discussing two different topics.
The first topic was a reading about differences between men and women regard-
ing the time spent on house chores. The second topic was a role play in which
students were preparing a surprise party for the Spanish class.

Example 2 (Elementary Spanish II)
John: como se dice think en español
      [how do you say “to think” in Spanish]
Lori: tu piensas tus padres y tu son diferente en los tipos de trabajar?
      [do you think your parents and you are different in the types of works]
Lori: pensar
      [to think]
(see activity in Appendix B)
…
Lori: si… yo pienos tambien
      [yes … i think too]
John: pienos = to think?
      [think = to think?]
Lori: okay pensar is the verb to think
      [okay, pensar is the verb to think]
Lori: it is stem changing
      [it is stem changing]
Lori: therefore penar in the yo form is pienso
      [therefore penar in the yo form is pienso]
Lori: tu pienasa
      [you think (incorrect spelling)]
Lori: piensas
      [you think]
John: si
      [yes]
(see activity in Appendix C)
   In the first fragment of the conversation, John asked Lori how to say ‘think’ in
Spanish. Incidentally, Lori had just produced the verb pensar in the first person
pienso in the turn following the question. Lori provides the meaning of ‘think,’
pensar. Later on, when they are talking about the party, Lori misspells pienso as
pienos. John questions the use of pienos and asks whether it means ‘to think.’
This starts a metalinguistic explanation by Lori who explains that pensar is a
stem-changing verb, and, therefore, the first person is pienso, the second piensas.
Even here, when she makes another spelling mistake, Lori is fully aware of her
error and corrects it without any other intervention on the part of John. Lee (2002)
pointed out that it is difficult to know whether incorrect forms are due to lack of
typing skills or lack of appropriate knowledge. In this case, given the explanation
Lori provides about pensar being a stem-changing verb, it can be argued that it is
a typing mistake and that she has the stem-changing rules internalized (level 5).
However, if we examine John’s discourse, it is possible to observe how he moves
from not knowing how to say ‘think’ in Spanish to understanding that pensar is a
stem-changing verb. In this process, John asks for help twice: when he does not know
how to say ‘think’ and when he has questions about the misspelled pienos. Lee (2002) and
Fernández-García and Martínez-Arbelaiz (2002) pointed out that the use of incor-
rect forms does not keep learners from continuing the conversation as long as they
understand each other. The fact that John asks for help a second time suggests that
he still needs assistance from his partner. Because pensar did not appear in the
transcripts again after he acknowledges Lori’s explanation, it is not possible to
know whether he is able to incorporate the feedback offered by Lori.
   In contrast to Example (2), which could have been considered a spelling mis-
take on Lori’s part, Example (3) shows how the learner corrects herself with no
feedback from her partner.

Example 3 (Elementary Spanish II)
Jennifer: estudias el fin de semana pasado?
          [do you study last weekend?]
Amy:      no
          [no]
Jennifer: *estudiaste
          [*you studied]
Amy:      no
          [no]
Amy:      escribe el papel de español y no estudiar
          [writes the paper for Spanish and not to study]
Jennifer: yo tambien no estudiaba
          [I also did not study (use of imperfect instead of preterit)]
(see activity in Appendix B)

  In this situation, Jennifer asks Amy whether she studied the previous weekend
but uses the form of the present tense estudias instead of the correct form of the
preterit, estudiaste. However, Jennifer takes full responsibility for error correction
and produces the correct form estudiaste with no intervention from her partner.
Lee (2002) suggested that because learners are able to read their postings once
they are displayed on the screen, this type of self-correction might be frequent in
SCMC. However, later on, when Jennifer wants to say that she did not study ei-
ther, she uses the first person of the imperfect, estudiaba, instead of using the first
person of the preterit form, estudie. Thus, while Jennifer is aware that she needs
to use the preterit and knows how to produce it, the distinction in use between
preterit or imperfect is not internalized. Therefore, this fragment would suggest
that Jennifer would be in level 4 of Aljaafreh and Lantolf’s (1994) scale.
  The following fragment, however, shows that Jennifer seems to be at a stage in
which she is internalizing some of the rules that have been taught in the class.

Example 4 (Elementary Spanish II)
Jennifer: soy aburrido
          [I am bored (use of ser instead of estar)]
Jennifer: *estoy aburrido
          [*I am bored (use of estar)]
(see activity in Appendix B)

   In Spanish, there are two different forms for the verb ‘to be’ (ser and estar), and
it is not uncommon for students to confuse them. In this case, Jennifer uses the
form soy instead of estoy, both of which (soy and estoy) would be the equivalent
of ‘I am’ in English. Jennifer is able to notice and correct her error without any
intervention. Because Jennifer does not repeat this form again in the dialogue, it
is not possible to know whether she internalized it. However, based on her perfor-
mance, it could be argued that Jennifer is at or somewhere between levels 4 and
5.
   The following fragment is an example of a student who is initially able to notice
and correct an error but is later unable to notice it again, even with implicit feedback
from his partner.

Example 5 (Elementary Spanish II)
John: le gusta cake?
      [does she like cake?]
John: 2 pm?
      [2pm?]
Lori: si me gusta cake
      [yes, I like cake]
John: no, le gusta cake, te gusta
      [no, (not) le gusta cake, (but) te gusta]
John: le (amigo)
      [le (friend)]
Lori: yo comprende
      [i understands]
Lori: bueno!!!
      [good!!!]
Lori: que musica te gusta
      [what music do you like]
John: le gusta metal
      [he likes metal]
Lori: te gusta swing?
      [do you like swing?]
John: no le gusta swing
      [he doesn’t like swing]
(see activity in Appendix C)

   In Example 5, when John asks Lori whether she likes cake, John uses the form
le appropriate for third person él/ella or second person formal usted (which was
not the appropriate form in this case because students were accustomed to the
informal form tú). Lori disregards John’s error and simply answers his question
by saying that si, me gusta cake ‘yes, I like cake.’ John, however, notices his er-
ror with no obvious feedback and produces the correct form when he says no,
le gusta cake, te gusta, and adds that le is for a third person—for a friend. Lori
acknowledges that she understood what John intended to say (yo comprende) and
continues with the conversation by asking John que musica te gusta. Although
John was able to correct his own performance before, instead of using the form me
gusta for ‘I like’ he uses le gusta, again in the third person. Lori does not correct
John but continues with the conversation asking him whether te gusta swing to
which John answers le gusta swing again using the third person singular of gustar
instead of using the first person singular me gusta.
   In this dialogue, then, we observe how John at one point was able to notice the
error and correct it, which would imply he is at level 4. However, in spite of this
initial correction, we also observe how John has not internalized the structure
because he repeats the same error later on and does not correct it. Further, Lori
provides implicit feedback when she uses the second form of gustar (te gusta)
to ask her questions. Although not directly stating that there is an error in John’s
performance, Lori is making it evident that different forms are used for the verb
gustar. One could wonder whether John does not correct the error because it does not
cause a breakdown in communication, and he simply continues with the conversation.
But John’s insistence on the form le gusta instead of me gusta would suggest that
he is unable to notice his error, even with Lori’s implicit feedback.
   These examples show us that it is possible to apply DA techniques to L2 assess-
ment. A question that immediately arises is how we are to measure the process
of learning (Sternberg & Grigorenko, 2002). Poehner and Lantolf (2003) distin-
guished two different perspectives within the DA movement: the interventionist
approach and the interactionist approach. The interventionists “tend to follow a
quantitative approach, and so lend themselves to a more psychometric orienta-
tion” (p. 6). In this tradition, the learner is first tested while working alone. In a
later stage, the examiner provides a series of standardized intervention strategies.
The aids and prompts used are standardized, “and the number of points
assigned to each prompt can be reported along with an individual’s score or grade
on the assessment” (Lantolf & Poehner, 2004, p. 3). By contrast, the classroom
environment provides opportunities for more interactive approaches. These ap-
proaches are more interested in “gaining insight into the kind of psychological
processes that the [learner] might be capable of in the next or proximal phase of
development” (Minick, 1987, p. 127) independent of frequency and/or type of
assistance. In these cases, the examiner provides help and feedback as required
by the examinee. Poehner and Lantolf (2003) pointed out that whether one “opts
to use an interventionist or interactionist approach depends on the goals and cir-
cumstances under which assessment is to be conducted” (p. 22). With large popu-
lations, Poehner and Lantolf continued, interventionist standardized approaches
would be more appropriate. Interactionist approaches, however, “are likely to be
more useful in a classroom setting” (p. 24). Therefore, for situations such as the one
described in this study, an interactionist approach seems appropriate. An analysis
of the interaction would be an exceptional source of information regarding the
learning and instructional processes.
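   For completeness, the interventionist option quoted above (standardized prompts whose point values can be reported alongside the score) can be made concrete with a small sketch. The prompt wording and point values below are invented for illustration and are not taken from Lantolf and Poehner (2004).

GRADED_PROMPTS = [
    ("Is there anything wrong in that sentence?", 1),   # most implicit
    ("Check the verb form you used.", 2),
    ("You need the preterit, not the present.", 3),
    ("The correct form is 'estudiaste'.", 4),           # most explicit
]

def mediated_score(base_score: int, prompts_used: int) -> dict:
    """Deduct the cost of each standardized prompt actually given to the learner."""
    cost = sum(points for _, points in GRADED_PROMPTS[:prompts_used])
    return {
        "unassisted_score": base_score,              # what SA alone would report
        "prompts_used": prompts_used,
        "mediated_score": max(base_score - cost, 0), # reported alongside, per the quote
    }

# A learner who solved the item after the first two prompts:
print(mediated_score(base_score=10, prompts_used=2))

An interactionist analysis, by contrast, would dispense with the fixed menu of prompts and simply document the assistance that actually unfolded in the chat, as in the examples above.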
   DA, however, is not intended to replace more traditional forms of assessment.
Each one has its function (Johnson, 2004). While “the traditional method mea-
sures the learner’s actual level of language development, what the learner can
do without any assistance at a particular moment in time” (Johnson, 2004, p.
187), DA will help assess the learner’s potential development. For example, An-
tón (2003) provided students with two scores based on what they could do with
help and without help. The students were evaluated based on the descriptors of
the ACTFL proficiency guidelines for Novice, Intermediate, and Advanced levels.
The numerical score of each student was also accompanied by a qualitative as-
sessment. This assessment reported the examiner’s observations during the oral
interview, the assessment of the learner’s strengths and weaknesses, and specific
recommendations for improvement. Similar assessment techniques can be applied
to SCMC. For example, as in the data presented above, it is possible to observe
the difference in language development levels of the students. Some students will
be at the expected level, others below, and others above. Therefore, there is still a
need to assess students’ mastery of the linguistic codes to measure learners’ actual
level of development using SA techniques as well as assessment of students’ po-
tential ability utilizing DA procedures.
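   One way to keep the two kinds of information side by side, in the spirit of Antón’s two scores plus a qualitative report, is simply to record them together for each learner. The sketch below is a hypothetical illustration; the field names and the sample values (including the ACTFL-style label) are assumptions, not data from this study.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LearnerAssessment:
    name: str
    static_score: str            # e.g. an ACTFL-style label from unassisted performance
    da_level: int                # Aljaafreh and Lantolf level observed in the chat
    qualitative_notes: List[str] = field(default_factory=list)

jennifer = LearnerAssessment(
    name="Jennifer",
    static_score="Intermediate Low",   # hypothetical SA result, for illustration only
    da_level=4,                        # from the analysis of Examples 3 and 4 above
    qualitative_notes=[
        "Self-corrects present to preterit (estudias to estudiaste) without help.",
        "Preterit/imperfect distinction not yet internalized (estudiaba for the preterit).",
    ],
)
print(jennifer)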

CONCLUSION
This study has explored the possibility of applying DA to SCMC. Given the pro-
cess-oriented nature of the electronic medium (Furstenberg et al., 2001), DA,
which focuses on process rather than on the product, seems to be an appropriate
means to assess students’ performance in SCMC. Antón (2003), when assessing
students for placement purposes, proved that this type of assessment is viable in
the L2 classroom. Aljaafreh and Lantolf (1994), although not explicitly rooted
in DA literature, provided a 5-level scale based on the frequency and type of as-
sistance provided to the learner that helps to assess the stage of the learner’s lan-
guage development in both learner-tutor (Aljaafreh & Lantolf, 1994) and learner-
learner interaction (Ohta, 2000).
   The current study has applied this 5-level scale to students’ interaction in SCMC,
and analysis of the data shows that it is possible to observe students’ potential
level of development in online chat. The study does not imply, however, that
traditional modes of assessment are not required to assess students’ performance
in SCMC. While SA provides information regarding the actual level of develop-
ment, DA provides information regarding the potential level of development. In
SCMC, Heather (2003) and Oscoz (2003) provided scales and guidelines to assess
students’ current level of development in this medium. Likewise, Aljaafreh and
Lantolf’s (1994) scale, or a more finely honed one which will better suit the char-
acteristics of the SCMC medium, will provide information regarding students’
potential development. By utilizing techniques from both SA and DA in SCMC, it
will be possible to obtain a richer and more complete understanding of students’
interlanguage development.

REFERENCES

Aljaafreh, A., & Lantolf, J. P. (1994). Negative feedback as regulation and second language
          learning in the zone of proximal development. Modern Language Journal, 78,
          465-483.
Antón, M. (2003, March). Dynamic assessment of advanced foreign language learners.
        Paper presented at the meeting of the American Association of Applied Linguis-
        tics, Washington, DC.
Bachman, L. (1990). Language testing. Oxford: Oxford University Press.
Bearden, R. (2001, March). An interactionist study of small-group oral discussion vs. com-
         puter-assisted class discussion (CACD) between native speakers and nonnative
         learners for Spanish. Paper presented at the meeting of the American Associa-
         tion of Applied Linguistics, Saint Louis, MO.
Beauvois, M. (1992). Computer-assisted classroom discussion in the foreign language
         classroom: Conversation in slow motion. Foreign Language Annals, 25, 455-
         464.
Beauvois, M. (1993). E-talk: Empowering students through electronic discussion in the
         foreign language classroom. The Ram’s Horn, 7, 41-47.
Beauvois, M. (1994). E-talk: Attitudes and motivation in computer-assisted classroom dis-
         cussion. Computers and the Humanities, 28, 177-190.
Beauvois, M. (1997a). Computer-mediated communication (CMC), technology for im-
         proving speaking and writing. In M. D. Bush & R. M. Terry (Eds.), Technol-
         ogy-enhanced language learning (pp. 165-184). Lincolnwood, IL: National
         Textbook Company.
Beauvois, M. (1997b). High-tech, high-touch: From discussion to composition in the net-
         worked classroom. Computer Assisted Language Learning, 10, 57-69.
Beauvois, M. (1998). Conversations in slow motion: Computer-mediated communication
         in the foreign language classroom. The Canadian Modern Language Review, 54,
         198-217.
Beauvois, M., & Eledge, J. (1996). Personality types and megabytes: Student attitudes
         toward computer-mediated communication (CMC) in the language classroom.
         CALICO Journal, 13 (2/3), 27-45.
Blake, R. (2000). Computer-mediated communication: A window on L2 Spanish interlan-
          guage. Language Learning & Technology, 4, 120-136. Retrieved April 25, 2005,
          from http://llt.msu.edu/vol4num1/blake/default.html
Brown, J. D. (1997). Computers in language testing: Present research and some future di-
         rections. Language Learning & Technology, 1, 44-59. Retrieved April 25, 2005,
         from http://llt.msu.edu/vol1num1/brown/default.html
Chalhoub-Deville, M. (1995a). Deriving oral assessments scales across different tests and
        rater groups. Language Testing, 12, 16-33.
Chalhoub-Deville, M. (1995b). A contextualized approach to describing oral language pro-
        ficiency. Language Learning, 45, 251-281.
Chalhoub-Deville, M. (Ed.). (1999). Issues in computer-adaptive testing of reading profi-
        ciency. Cambridge: University of Cambridge Local Examinations Syndicate and
        Cambridge University Press.
Chalhoub-Deville, M., & Deville, C. (1999). Computer-adaptive testing in second lan-
        guage contexts. Annual Review of Applied Linguistics, 19, 273-299.
Chun, D. (1994). Using computer networking to facilitate the acquisition of interactive
         competence. System, 22, 17-31.
Cononelos, T., & Oliva, M. (1993). Using computer networks to enhance foreign language/
        culture education. Foreign Language Annals, 26, 525-534.
Darhower, M. (2002). Interactional features of synchronous computer-mediated communi-
        cation in the intermediate L2 class: A sociocultural case study. CALICO Journal,
        19 (2), 249-277.
Donato, R. (1994). Collective scaffolding on second language learning. In J. Lantolf & G.
         Appel (Eds.), Vygotskian approaches to second language research (pp. 33-56).
         Westport, CT: Ablex.
Dunkel, P. A. (1999). Considerations in developing or using second/foreign language pro-
         ficiency computer-adaptive tests. Language Learning & Technology, 2, 77-93.
         Retrieved April 25, 2005, from http://llt.msu.edu/vol2num2/article4
Fernández-García, M., & Martínez-Arbelaiz, A. (2002). Negotiation of meaning in nonna-
         tive speaker-nonnative speaker synchronous discussions. CALICO Journal, 19
         (2), 279-294.
Fidalgo-Eick, M. (2001). Synchronous on-line negotiation of meaning by intermediate
         learners of Spanish. Unpublished doctoral dissertation, University of Iowa.
Foster, P., & Skehan, P. (1996). The influence of planning and task type on second language
            performance. Studies in Second Language Acquisition, 18, 299-323.
Fulcher, G. (1996). Testing tasks: Issues in task design and the group oral. Language Test-
          ing, 13, 23-51.
Furstenberg, G., Levet, S., English, K., & Maillet, K. (2001). Giving a virtual voice to the
         silent language of culture: The cultura project. Language Learning & Technol-
         ogy, 5, 55-102. Retrieved April 25, 2005, from http://llt.msu.edu/vol5num1/fur-
         stenberg/default.pdf
Heather, J. (2003). The validity of computer-mediated communicative language tests. Un-
          published doctoral dissertation, The University of Arizona.
Henning, G. (1983). Oral proficiency testing: Comparative validities of interview, imita-
         tion, and completion methods. Language Learning, 33, 315-332.
Johnson, M. (2004). A philosophy of second language acquisition. New Haven, CT: Yale
         University Press.
Kelm, O. (1992). The use of synchronous computer networks in second language instruc-
          tion: A preliminary report. Foreign Language Annals, 25, 441-454.
Kelm, O. (1996). The application of computer network in foreign language education:
        Focusing on principles of second language acquisition. In M. Warschauer (Ed.),
        Telecollaboration in foreign language learning (pp. 19-28). Manoa, HI: Univer-
        sity of Hawai’i Press.
Kern, R. (1995). Restructuring classroom interaction with network computers: Effects on
          quantity and characteristics of language production. Modern Language Journal,
          79, 457-476.
Laurier, M. (2000). Can computerized testing be authentic? ReCALL, 12, 93-104.
Lee, L. (2002). Synchronous online exchanges: A study of modification devices on non-na-
          tive discourse. System, 30, 275-288.
Lidz, C. (Ed.). (1987). Dynamic assessment: An interactional approach to evaluating
         learning potentials. New York: Guilford Press.
Minick, N. (1987). Implications of Vygotsky’s theories for dynamic assessment. In C. Lidz
         (Ed.), Dynamic assessment: An interactional approach to evaluating learning
         potentials (pp. 116-140). New York: Guilford Press.
Nicholas, M. A., & Toporski, N. (1993). Developing “The critic’s corner:” Computer as-
          sisted language learning for upper-level Russian students. Foreign Language An-
          nals, 26, 469-478.
Ohta, A. (2000). Rethinking interaction in SLA: Developmentally appropriate assistance
          in the zone of proximal development and the acquisition of L2 grammar. In J.
          Lantolf (Ed.), Sociocultural theory and second language acquisition (pp. 51-78).
          Oxford: Oxford University Press.
Oscoz, A. (2003). Jigsaw and free discussion in synchronous computer-mediated commu-
         nication. Unpublished doctoral dissertation, University of Iowa.
O’Sullivan, B. (2002). Learner acquaintanceship and oral proficiency test pair-task perfor-
         mance. Language Testing, 19, 277-295.
Pellettieri, J. (2000). Negotiation in cyberspace: The role of chatting in the development of
           grammatical competence. In M. Warschauer & R. Kern (Eds.), Network-based
           language teaching: Concepts and practice (pp. 59-86). Cambridge: Cambridge
           University Press.
Pica, T. (1987). Second language acquisition, social interaction, and the classroom. Applied
           Linguistics, 8, 3-21.
Pica, T., Young, R., & Doughty, C. (1987). The impact of interaction on comprehension.
           TESOL Quarterly, 21, 737-758.
Poehner, M., & Lantolf, J. (2004). Dynamic assessment in the language classroom.
         CALPER professional development document (CPDD) 0411. The Pennsylvania
          State University, Center for Advanced Language Proficiency Education and
          Research.
Robinson, P. (1995). Task complexity and second language narrative discourse. Language
         Learning, 45, 99-140.
Salaberry, R., Barrette, K., Elliot, P., & Fernández-García, M. (2004). Impresiones. Upper
          Saddle River, NJ: Prentice Hall.
Schneider, E., & Ganschow, L. (2000). Dynamic assessment and instructional strategies
         for learners who struggle to learn a foreign language. Dyslexia, 6, 72-82.
Shohamy, E. (1983). The stability of oral proficiency assessment on the oral interview test-
        ing procedures. Language Learning, 33, 527-540.
Sternberg, R., & Grigorenko, E. (2002). Dynamic testing. Cambridge: Cambridge Univer-
          sity Press.
Swain, M. (2001). Examining dialogue: Another approach to content specification and to
        validating inferences drawn from test scores. Language Testing, 18, 275-302.
Sotillo, S. (2000). Discourse functions and syntactic complexity in synchronous and asyn-
           chronous communication. Language Learning & Technology, 4, 82-119. Re-
           trieved April 25, 2005, from http://llt.msu.edu/vol4num1/sotillo/default.html
Valsiner, J. (2001). Process structure of semiotic mediation in human development. Human
           Development, 44, 84-97.
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes.
         M. Cole, V. John-Steiner, S. Scribner, & E. Souberman (Eds.). Cambridge: Har-
         vard University Press.
Warschauer, M. (1996). Comparing face-to-face and electronic discussion in the second
        language classroom. CALICO Journal, 13 (2/3), 7-26.
Warschauer, M. (1997). Computer-mediated collaborative learning: Theory and practice.
        Modern Language Journal, 81, 470-481.
APPENDIX A
Activity based on Impresiones. Salaberry, R., Barrette, K., Elliot, P., and Fernán-
dez-García, M. (2004).

Even though it is the second month of classes, you still don’t know some of your
classmates well. You approach someone you don’t know well and ask the person
a few questions about weekend activities. Because you are a naturally curious
person, you ask a lot of questions that start with words like ¿Cuándo? ¿Cómo?
¿Por qué? ¿Cuál?, etc.

Use the blank spaces to write information about your partner. Follow the exam-
ple:

Tú: Hola, ¿cómo estás?
Compañero/a: Bien, gracias, ¿y tú?
Tú: Muy bien. Oye, ¿estudias español los fines de semana?
Compañero/a: No, no estudio español los fines de semana.
Tú: ¿Cuándo estudias español?
Compañero/a: Estudio español durante la semana, los lunes, martes, miércoles y
              jueves.
Tú: ¿Estudias español por la mañana o por la tarde? ¿Cuántas horas estudias es-
pañol?
Compañero/a: …
Tú: …

      Los fines de semana de _______________

 Estudiar español
 Caminar por el parque
 Lavar la ropa
 Hablar por teléfono
 Visitar a mis amigos
 Escribir cartas
 Leer el periódico
 Comer en un restaurante mexicano