Conducting Behavioral Research on Amazon's Mechanical Turk

Page created by Timothy Harrison
 
CONTINUE READING
Conducting Behavioral Research on Amazon's Mechanical Turk
Conducting Behavioral Research
                                     on Amazon’s Mechanical Turk
                                              Winter Mason and Siddharth Suri
                                                           Yahoo! Research

                     Amazon’s Mechanical Turk is an online labor market where requesters post jobs and workers
                     choose which jobs to do for pay. The central purpose of this paper is to demonstrate how
                     to use this website for conducting behavioral research and lower the barrier to entry for re-
                     searchers who could benefit from this platform. We describe general techniques that apply to a
                     variety of types of research and experiments across disciplines. We begin by discussing some
                     of the advantages of doing experiments on Mechanical Turk, such as easy access to a large,
                     stable, and diverse subject pool, the low cost of doing experiments and faster iteration between
                     developing theory and executing experiments. We will discuss how the behavior of workers
                     compares to experts and to laboratory subjects. Then, we illustrate the mechanics of putting a
                     task on Mechanical Turk including recruiting subjects, executing the task, and reviewing the
                     work that was submitted. We also provide solutions to common problems that a researcher
                     might face when executing their research on this platform including techniques for conducting
                     synchronous experiments, methods to ensure high quality work, how to keep data private, and
                     how to maintain code security.

                        Introduction                                   “human computation” tasks. The idea behind its design was
                                                                       to build a platform for humans to do tasks that are very dif-
   The creation of the internet and its subsequent widespread          ficult or impossible for computers, such as extracting data
adoption has provided behavioral researchers with an addi-             from images, audio transcription and filtering adult content.
tional medium for conducting studies. In fact, researchers             In its essence, however, what Amazon created was a la-
from a variety of fields such as economics (Reiley, 1999;              bor market for micro-tasks (Huang, Zhang, Parkes, Gajos,
Hossain & Morgan, 2006), sociology (Salganik, Dodds, &                 & Chen, 2010). Today Amazon claims hundreds of thou-
Watts, 2006; Centola, 2010), and psychology (Nosek, 2007;              sands of workers, and roughly ten thousand employers, with
Birnbaum, 2000) have used the internet to conduct behav-               AMT serving as the meeting place and market (Pontin, 2007;
ioral experiments1 . The advantages and disadvantages of               Ipeirotis, 2010a). For this reason, it also serves as an ideal
online behavioral research relative to laboratory-based re-            platform for recruiting and compensating participants in on-
search have been explored in depth (see, for instance, Reips           line experiments. Since Mechanical Turk was initially in-
(2000) and Kraut et al. (2004)). Moreover, many methods                vented for human computation tasks, which are generally
for conducting online behavioral research have been devel-             quite different than behavioral experiments, it is not a priori
oped (e.g., Gosling & Johnson, 2010; Reips, 2002; Reips                clear how to conduct certain types of behavioral research,
& Birnbaum, 2011; Birnbaum, 2004). In this paper we de-                such as synchronous experiments, on this platform. One of
scribe a new tool that has emerged in the last 5 years for             the goals of this work is to exhibit how to achieve this.
conducting online behavioral research: crowdsourcing plat-                Mechanical Turk has already been used in a small number
forms. The term crowdsourcing has its origin in an article by          of online studies, which fall into three broad categories. First,
Howe (2006), who defined it as a job outsourced to an unde-            there is a burgeoning literature on how to combine the out-
fined group of people in the form of an open call. The key             put of a small number of cheaply paid workers in a way that
benefit of these platforms to behavioral researchers is that           rivals the quality of work by highly paid, domain-specific
they provide access to a persistently available, large set of
people who are willing to do tasks—including participating                1
                                                                            This is clearly not an exhaustive review of every study done on
in research studies—for relatively low pay. The crowdsourc-            the internet in these fields. We only aim to provide some salient
ing site with one of the largest participant pools is Amazon’s         examples.
Mechanical Turk2 (AMT), so it is the focus of this paper.                 2
                                                                            The name “Mechanical Turk” comes from a mechanical chess-
   Originally, Amazon built Mechanical Turk specifically for           playing automaton from the turn of the 18th century, designed to
                                                                       look like a Turkish “sorcerer,” which was able to move pieces and
                                                                       beat many opponents. While it was a technological marvel at the
                                                                       time, the real genius lay in a diminutive chess master hidden in
   Both authors contributed equally to this work. We would like        the workings of the machine (see http://en.wikipedia.org/
to thank Duncan J. Watts and Daniel G. Goldstein for encouraging       wiki/The Turk). Amazon’s Mechanical Turk was designed to
us to write this paper, and the many people who provided helpful       hide human workers in an automatic process, hence the name of
feedback on earlier drafts.                                            the platform.

                              Electronic copy available at: http://ssrn.com/abstract=1691163
Conducting Behavioral Research on Amazon's Mechanical Turk
2                                            BEHAVIORAL RESEARCH ON MECHANICAL TURK

experts. For example, the output of multiple workers was           searchers who would not otherwise have access, such as re-
combined for a variety of tasks related to natural language        searchers at smaller colleges and universities with limited
processing (Snow, O’Connor, Jurafsky, & Ng, 2008) and au-          subject pools (Smith & Leigh, 1997), or non-academic re-
dio transcription (Marge, Banerjee, & Rudnicky, 2010) to           searchers, with whom recruitment is generally limited to ads
be used as input to other research such as machine learning        posted online and flyers posted in public areas. While some
tasks. Second, there have been at least two studies show-          research necessarily requires participants to actually come
ing that the behavior of participants on Mechanical Turk is        into the lab, there are still many kinds of research that can
comparable to the behavior of laboratory subjects (Horton,         be done online.
Rand, & Zeckhauser, 2011; Paolacci, Chandler, & Ipeirotis,            Mechanical Turk offers the unique benefit of having an
2010). Finally, there are a few studies that use Mechani-          existing pool of potential participants that remains relatively
cal Turk for behavioral experiments, including Eriksson and        stable over time. For instance, many academic researchers
Simpson (2010) who studied gender, culture, and risk prefer-       experience the drought / flood cycle of undergraduate sub-
ences, Mason and Watts (2009), who used it to study the ef-        ject pools, with supply of participants exceeding demand
fects of pay rate on output quantity and quality, and Suri and     at the beginning and end of a semester, and then dropping
Watts (2011) who used it to study social dilemmas over net-        to almost nothing at all other times. In addition, standard
works. All of these examples suggest that Mechanical Turk is       methods of online experimentation, such as building a web-
a valid research environment that scientists are already using     site containing an experiment, often have “cold-start” prob-
to conduct experiments.                                            lems, where it takes time to recruit a panel of reliable par-
   Mechanical Turk is a powerful tool for researchers that         ticipants. Aside from some daily and weekly seasonalities,
has only begun to be tapped, and in this paper we offer in-        the participant availability on Mechanical Turk is fairly sta-
sights, instructions, and best practices for using this tool. In   ble (Ipeirotis, 2010a), with fluctuations in supply largely due
contrast to previous work that has demonstrated the valid-         to variability in the number of jobs available in the market.
ity of research on Mechanical Turk (Paolacci et al., 2010;            The single most important feature that Mechanical Turk
Buhrmester, Kwang, & Gosling, in press), the purpose of this       provides is access to a large, stable pool of people willing to
paper is to show how Mechanical Turk can be used for be-           participate in experiments for relatively low pay.
havioral research and to demonstrate best practices that en-
sure that researchers quickly get high quality data from their        Subject Pool Diversity. Another advantage of Mechani-
studies.                                                           cal Turk is that the workers tend to be from a very diverse
   There are two classes of researchers who may benefit from       background, spanning a wide range of age, ethnicity, socio-
this paper. First, there are many researchers who are not          economic status (SES), language, and country of origin. As
aware of Mechanical Turk and what is possible to do with           with most subject pools, the population of workers on AMT
it. In this guide, we exhibit the capabilities of Mechani-         is not representative of any one country or region. However,
cal Turk, and several possible use cases, so researchers can       the diversity on Mechanical Turk facilitates cross-cultural
decide whether this platform will aid their research agenda.       and international research (Eriksson & Simpson, 2010) at a
Second, there are researchers who are already interested in        very low cost and can broaden the validity of studies beyond
Mechanical Turk as a tool for conducting research, but may         the undergraduate population. We give detailed demograph-
not be aware of the particulars involved with and/or the best      ics of the subject pool in the Workers section.
practices for conducting research on Mechanical Turk. The             Low Cost and Built-in Payment Mechanism. One distinct
relevant information on the Mechanical Turk site can be diffi-     advantage Mechanical Turk is the low cost at which studies
cult to find, and is directed towards human computation tasks      can be conducted, which clearly compare favorably to paid
as opposed to behavioral research, so here we offer a detailed     laboratory participants. For example, Paolacci et al. (2010)
how-to guide for conducting research on Mechanical Turk.           replicated classic studies from the judgment and decision-
Why Mechanical Turk?                                               making literature at a cost of approximately $1.71/hour per
                                                                   participant, and obtained results that neatly paralleled the
   There are many advantages to online experimentation,            same studies conducted with undergraduates in a laboratory
many of which are detailed in prior work (Reips, 2000,             setting. Göritz, Wolff, and Goldstein (2008) showed that
2002). Naturally, Mechanical Turk shares many of these ad-         the hassle of using a third party payment mechanism, such
vantages, but also has some additional benefits. We highlight      as PayPal, can lower initial response rates in online experi-
three unique benefits of using Mechanical Turk as a platform       ments. Mechanical Turk skirts this issue by offering a built-in
for running online experiments: 1) subject pool access, 2)         mechanism to pay workers (both flat-rate and bonuses) that
subject pool diversity, and 3) low cost. We then discuss one       greatly reduces the difficulties of compensating individuals
of the key advantages of online experimentation that Me-           for their participation in studies.
chanical Turk shares—faster iteration between theory devel-
opment and experimentation. We discuss each of these in               Faster Theory / Experiment Cycle. One implicit goal in
turn.                                                              research is to maximize the efficiency with which one can
                                                                   go from generating hypotheses to testing them, analyzing the
  Subject Pool Access. Like other online recruitment meth-         results and updating the theory. Ideally, the limiting factor in
ods, Mechanical Turk offers access to participants for re-         this process is the time it takes to do careful science, but all

                              Electronic copy available at: http://ssrn.com/abstract=1691163
BEHAVIORAL RESEARCH ON MECHANICAL TURK                                               3
too often research is delayed because of the time it takes to      were only very slight differences between the results from
recruit participants and recover from errors in the methodol-      Mechanical Turk and participants recruited using the other
ogy. With access to a large pool of participants online, re-       methods, and qualitatively the results were identical. This is
cruitment is vastly simplified. Moreover, experiments can be       similar to Birnbaum (2000), who found internet users were
built and put on Mechanical Turk easily and rapidly, which         more logically consistent in their decisions than laboratory
further reduces the time to iterate the cycle of theory devel-     subjects.
opment and experimental execution.                                    There have also been a few studies which compare Me-
   Finally, we note that other methods of conducting behav-        chanical Turk behavioral with laboratory behavior. For ex-
ioral research may be comparable or even better than Me-           ample, the “Asian disease” problem (Tversky & Kahneman,
chanical Turk on one or more of the axes outlined above, but       1981) was also replicated by Horton et al. (2011) who also
taken as a whole, it is clear Mechanical Turk can be a useful      obtained qualitatively similar results. In the same study the
tool for many researchers.                                         authors found that workers “irrationally” cooperated in the
                                                                   one-shot Prisoner’s Dilemma game, replicating previous lab-
Validity of Worker Behavior                                        oratory studies (e.g., Cooper, DeJong, Forsythe, & Ross,
                                                                   1996). They also found, in a replication of another more
    Given the novel nature of Mechanical Turk, most of the         recent laboratory study (Shariff & Norenzayan, 2007), that
initial studies were focused on evaluating whether it could        providing a religious prime before the game increased the
effectively be used as a means of collecting valid data. At        level of cooperation. Suri and Watts (2011) replicated a pub-
first, these studies focused on whether workers on Mechani-        lic goods experiment that was conducted in the classroom
cal Turk could be used as substitutes for domain specific ex-      (Fehr & Gachter, 2000). In a public goods game the pay-
perts. For instance, Snow et al. (2008) showed that for a vari-    off to each player is structured so that the strategy for self-
ety of natural language processing takes, such as affect recog-    interested players is to contribute nothing. On the other hand
nition and word similarity, combining the output of just a few     the action that maximizes the sum of the payoffs of all play-
workers can equal the accuracy of expert labelers. Similarly,      ers, i.e. the social welfare, is to contribute the full endow-
Marge et al. (2010) compared workers’ audio transcriptions         ment. Thus, a public goods game is similar to an n-player
to domain experts, and found that after a small bias correc-       version of prisoner’s dilemma. Figure 1 compares the re-
tion, the combined outputs of the workers were of compa-           sults of the experiment conducted in a lab and on Mechanical
rable quality to the experts. Urbano, Morato, Marrero, and         Turk. The parameters for the two experiments were identi-
Martı́n (2010) crowdsourced similarity judgments on pieces         cal, except for the endowment which is easily corrected for
of music for the purposes of music information retrieval. Us-      by normalizing the contributions. From Figure 1 it is very
ing their techniques they obtained a partially ordered list of     clear that the workers exhibit very similar behavior to stu-
similarity judgments at a far cheaper cost than hiring experts     dents in the lab, and despite the difference in context and
while maintaining high agreement between the workers and           the relatively lower pay on Mechanical Turk, there were no
the experts. Alonso and Mizzaro (2009) conducted a study           significant differences.
that asked workers to rate the relevance of pairs of documents        In summary, there are numerous studies that show corre-
and topics and compared this to a gold standard given by ex-       spondence between the behavior of workers on Mechanical
perts. The output of the Turkers was similar in quality to that    Turk and behavior offline or in other online contexts. While
of the experts. Moreover, there were three cases where the         there are clearly differences between Mechanical Turk and
Turkers differed from the experts and the authors judged that      offline contexts, evidence that Mechanical Turk is a valid
the workers were correct!                                          means of collecting data is consistent and continues to ac-
    Of greater interest to behavioral researchers is whether the   cumulate.
results of studies conducted on Mechanical Turk are compa-
rable to results obtained in other online domains as well as       Organization of this Guide
offline settings. To this end, Buhrmester et al. (in press) com-
pared Mechanical Turk participants to a large Internet sam-           In the following sections we begin with a high level
ple with respect to several psychometric scales, and found         overview of Mechanical Turk, followed by an exposition of
no meaningful differences between the populations as well          methods for conducting different types of studies on Mechan-
as high test-retest reliability in the Mechanical Turk popula-     ical Turk. In the first half, we describe the basics of Mechan-
tion. Additionally, Paolacci et al. (2010) conducted replica-      ical Turk, including who uses it and why, and the general ter-
tions of standard judgment and decision-making experiments         minology associated with the platform. In the second half we
on Mechanical Turk, as well as with participants recruited         describe, at a conceptual level, how to conduct experiments
through online discussion boards and participants recruited        on Mechanical Turk. We will focus on new concepts that
from the subject pool at a large Midwestern university. The        come up in this environment that may not arise in the labo-
studies they replicated were the “Asian disease” problem           ratory or in other online settings around the issues of ethics,
to test framing effects (Tversky & Kahneman, 1981), the            privacy, and security. In this section we also discuss the on-
“Linda” problem to test the conjunction fallacy (Tversky &         line community that has sprung up around Mechanical Turk.
Kahneman, 1983), and the “physician” problem to test out-          We conclude by outlining some interesting open questions
come bias (Baron & Hershey, 1988). Quantitatively there            regarding research on Mechanical Turk. We also include an

                              Electronic copy available at: http://ssrn.com/abstract=1691163
4                                              BEHAVIORAL RESEARCH ON MECHANICAL TURK

                    10
                         Mechanical Turk vs. Lab Exps.
                     9                              Lab
                     8                              M. Turk                    0.05
Avg. Contribution

                     7
                     6                                                         0.04

                     5
                     4

                                                                     density
                     3
                                                                               0.03

                     2
                     1                                                         0.02

                     0   1 2 3 4 5 6 7 8 9 10
                                    Round                                      0.01
Figure 1. Comparison of contribution behavior in a lab-based pub-
lic goods game (Fehr & Gachter, 2000) and the same experiment
conducted on Mechanical Turk (Suri & Watts, 2010).
                                                                               0.00

                                                                                      20          40           60           80
                                                                                                       Age
appendix with engineering details required for building and
conducting experiments on Mechanical Turk, for researchers          Figure 2. Histogram (gray) and density plot (black) of reported
                                                                    ages of workers on Mechanical Turk.
and programmers who are building their experiments.

                           Mechanical Turk Basics                   roughly 32 years old, as can be seen in Figure 2. The differ-
                                                                    ent studies we compiled used different ranges when collect-
   There are two types of players on Mechanical Turk: re-
                                                                    ing information about income, so to summarize we classify
questers and workers. Requesters are the “employers,” and
                                                                    workers by the top of their declared income range, which can
the workers (also known as “Turkers” or “Providers”) are the
                                                                    be seen in Figure 3. This shows that the majority of workers
“employees”—or more accurately, the “independent contrac-
                                                                    earn roughly U.S. $30k per annum, though perhaps surpris-
tors”. The jobs offered on Mechanical Turk are referred to as
                                                                    ingly, some reported earning over $100k per year.
Human Intelligence Tasks (HITs). In this section we discuss
each of these in turn.                                                  Having multiple studies also allows us to check the inter-
                                                                    nal consistency of these self-reported demographics. Of the
Workers                                                             2896 workers, 207 (7.1%) participated in exactly two stud-
                                                                    ies, and of these 207, only one worker (0.4%) changed their
   In March of 2007, the New York Times reported that               answer on gender, age, education, or income. There were
there were more than 100,000 workers on Mechanical Turk             also 31 workers who completed more than two studies, and
in over 100 countries (Pontin, 2007). Although this interna-        of these, 3 had an inconsistency on gender, 1 had a devi-
tional diversity has been confirmed in many subsequent stud-        ation on age, and 7 had a deviation on income. Thus, we
ies (Mason & Watts, 2009; Ross, Irani, Silberman, Zaldivar,         conclude that the internal consistency of self-reported demo-
& Tomlinson, 2010; Paolacci et al., 2010), as of this writing       graphics on Mechanical Turk is high. This agrees with Rand
the majority of workers come from the United States and In-         (in press), who also found consistency in self-reported demo-
dia because Amazon only allows cash payment in U.S. dol-            graphics on Mechanical Turk, and with Voracek, Stieger, and
lars and Indian Rupees—although workers from any country            Gindl (2001) who compared the gender reported in an online
can spend their earnings on Amazon.com.                             survey (not on Mechanical Turk) conducted at the University
   Over the past three years, we have collected demograph-          of Vienna to the school’s records, and found a false response
ics for nearly 3,000 unique workers from 5 different stud-          rate below 3%.
ies (Mason & Watts, 2009; Suri & Watts, 2011; Mason &                   Given the low wages and relatively high income, one may
Watts, unpublished). We compiled these studies, and of the          wonder why people choose to work on Mechanical Turk at
2896 workers, 12.5% chose not to give their gender, and of          all. Two independent studies asked workers to indicate their
the remainder, 55% were female and 45% were male. These             reasons for doing work on Mechanical Turk. Ross et al.
demographics agree with other studies that have reported that       (2010) reported that 5% of U.S. workers and 13% of Indian
the majority of U.S. workers on Mechanical Turk are fe-             workers said “MTurk money is always necessary to make
male (Ipeirotis, 2010b; Ross et al., 2010). The median age of       basic ends meet”. Ipeirotis (2010b) asked a similar question
workers in our sample is 30 years old, and the average age is       but delved deeper into the motivations of the workers. He
BEHAVIORAL RESEARCH ON MECHANICAL TURK                                                         5
                                                                                        (2010) found faster completion times between 6am-3pm
                                                                                        GMT (which resulted in a higher proportion of Indian work-
         1000                                                                           ers). Ipeirotis also found that over half of the HIT groups
                                                                                        are completed in 12 hours or less, suggesting a large active
                                                                                        worker pool.
          800
                                                                                           To become a worker, one must create a worker account
                                                                                        on Mechanical Turk and an Amazon Payments account into
                                                                                        which earnings can be deposited. Both of these accounts
                                                                                        merely require an email address and a mailing address. Any
          600
                                                                                        worker, from anywhere in the world, can spend the money
                                                                                        they earn on Mechanical Turk on the Amazon.com website.
 count

                                                                                        As mentioned before, to be able to withdraw their earnings
          400                                                                           as cash, workers must take the additional step of linking
                                                                                        their Payments account to a verifiable U.S. or Indian bank
                                                                                        account. In addition, workers can transfer money between
                                                                                        Amazon’s Payment accounts. While having more than one
          200
                                                                                        account is against Amazon’s Terms of Service, it is possi-
                                                                                        ble, although somewhat tedious, for workers to earn money
                                                                                        using multiple accounts and transfer the earnings to one ac-
            0                                                                           count to either be spent on Amazon.com or withdrawn. Re-
                0   20000   40000   60000   80000   100000   120000   140000   160000
                                                                                        questers who use external HITS (see the Anatomy of a HIT
                                    Income (maximum)                                    section) can guard against multiple submission by the same
Figure 3. Distribution of the maximum of the income interval self-                      worker by using browser cookies and tracking IP addresses
reported by workers.                                                                    as Birnbaum (2004) suggested in the context of general on-
                                                                                        line experiments.
                                                                                           Another important policy forbids workers from using pro-
found that 12% of U.S. workers and 27% of Indian workers                                grams (“bots”) to automatically do work for them. Although
reported that “Mechanical Turk is my primary source of in-                              infringements of this policy appear to be rare (but see Mc-
come” (emphasis ours). Ipeirotis also reported that roughly                             Creadie, Macdonald, & Ounis, 2010), there are also legiti-
30% of both U.S. and Indian workers indicated they were                                 mate workers who could best be described as “spammers.”
currently unemployed or only held a part-time job. At the                               These are individuals who attempt to make as much money
other end of the spectrum, Ross and colleagues asked how                                completing HITs as they can without regard to the instruc-
important money earned on Mechanical Turk was to them:                                  tions or intentions of the requester. These individuals might
12% of U.S. workers and 10% of Indian workers indicated                                 also be hard to discriminate from bots. A favorite target for
that “MTurk money is irrelevant”, implying that the money                               these spammers are surveys, as they can be completed eas-
made through Mechanical Turk is at least relevant to the                                ily and are plentiful on Mechanical Turk. Fortunately, Me-
vast majority of workers. The modal response for both U.S.                              chanical Turk has a built-in reputation system for workers:
and Indian workers was that the money was simply nice and                               every time a requester rejects a worker’s submission it goes
maybe a way to pay for “extras”. Perhaps the best summary                               on their record. Subsequent requesters can then refuse work-
statement of why workers do tasks on Mechanical Turk is the                             ers whose rejection rate exceeds some specified threshold, or
59% of Indian workers and 69% of U.S. workers who agreed                                block specific workers who previously submitted bad work.
that “Mechanical Turk is a fruitful way to spend free time and                          We will revisit this point when we describe methods for en-
get some cash” (Ipeirotis, 2010b). What all of this suggests                            suring data quality.
is that most workers are not trying to scrape together a living
using Mechanical Turk (fewer than 8% report earning more                                Requesters
than $50/week on the site), but rather are caregivers, college
students, and the “bored-at-work” crowd spending free time                                 The requesters who put up the most HITs and groups of
to earn some extra spending money.                                                      HITs on Mechanical Turk are predominantly companies au-
                                                                                        tomating portions of their business or intermediary compa-
   The number of workers available at any given time is not                             nies that post HITs on Mechanical Turk on the behalf of other
directly knowable. However, Panos Ipeirotis has tracked                                 companies (Ipeirotis, 2010a). For example, search compa-
the number of HITs created and available every hour (and                                nies have used Mechanical Turk to verify the relevance of
recently, every minute) over the past year, and has used                                search results, online stores have used it to identify similar or
these statistics to infer the number of HITs being com-                                 identical products from different sellers, and online directo-
pleted (Ipeirotis, 2010a). With this information he has de-                             ries have used it to check the accuracy and “freshness” of list-
termined that there are slight seasonalities with respect to                            ings. In addition, since businesses may not want to or be able
time of day and day of week. Workers tend to be more                                    to interact directly with Mechanical Turk, intermediary com-
abundant between Tuesday and Saturday, and Huang et al.                                 panies have arisen such as Crowdflower (previously Dolores
6                                           BEHAVIORAL RESEARCH ON MECHANICAL TURK

Labs) and Smartsheet.com to help with the process and             The Anatomy of a HIT
guarantee results. As mentioned, Mechanical Turk is also
used by those interested in machine learning, as it provides a       All of the tasks available on Mechanical Turk are listed
fast and cheap way to get labeled data such as tagged images      together on the site in a standardized format that allows the
and spam classifications (for more market-wide statistics of      workers to easily browse, search, and choose between the
Mechanical Turk see Ipeirotis (2010a)).                           jobs being offered. An example of this is shown in Figure 4.
                                                                  Each job posted consists of many Human Intelligence Tasks
   In order to run studies on Mechanical Turk, one must sign      (HITs) of the same “HIT Type”, meaning they all have the
up as a requester. There are two or three accounts required to    same characteristics. Each HIT is displayed with the fol-
register as a requester, depending on how one plans to inter-     lowing information: the title of the HIT, the requester who
face with Mechanical Turk: a requester account, an Amazon         created the HIT, the wage being offered, the number of HITs
Payments Account, and (optionally) an Amazon Web Ser-             of this type available to be worked on, how much time the
vices (AWS) account.                                              requester has allotted for completing the HIT, and when the
   One can sign up for a requester account at https://            HIT expires. By clicking on a link for more information,
requester.mturk.com/mturk/beginsignin3 . It is ad-                the worker can also see a longer description of the HIT, key-
visable to use a unique email address for running experi-         words associated with the HIT, and what qualifications are
ments, preferably one that is associated with the researcher      required to accept the HIT. We elaborate on these qualifica-
or the research group, because workers will interact with         tions later, which restrict who can work on a HIT and some-
the researcher through this account and this email address.       times who can preview it. If the worker is qualified to pre-
Moreover, the workers will come to learn a reputation and         view the HIT, they can click on a link and see the preview,
possibly develop a relationship with this account on the ba-      which typically shows what the HIT will look like when they
sis of the jobs being offered, the money being paid, and on       work on the task (see Figure 5 for an example HIT).
occasion from direct correspondence. Similarly, we recom-            All of this information is determined by the requester
mend using a name that clearly identifies the researcher. This    when creating the HIT, including the qualifications needed
does not have to be the researcher’s actual name (although it     to preview or accept the HIT. A very common qualification
could be), but also should be sufficiently distinctive that the   requires that over 90% of the assignments a worker has com-
workers know who they are working for. For example, the           pleted have been accepted by the requesters. Another com-
requester name “University of Copenhagen” could refer to          mon type of requirement is to specify that workers must re-
many research groups, and workers might be unclear about          side in a specific country. Requesters can also design their
who is actually doing the research; the name “Perception Lab      own qualifications. For example, a requester could require
at U. Copenhagen” would be better.                                the workers to complete some practice items and correctly
   To register as a requester one must also create an             answer questions about the task as a prerequisite to working
Amazon Payments account (https://payments.amazon                  on the actual assignments. More than one of these qualifica-
.com/sdui/sdui/getstarted) with the same account de-              tions can be combined for a given HIT.
tails provided for the requester account. At this point a            Another parameter the requester can set when creating a
funding source is required, which can be either a U.S.            HIT is how many “assignments” each HIT has. A single HIT
credit card or a U.S. bank account. Finally, if one in-           can be made up of one or more assignments, and a worker
tends to interact with Mechanical Turk programatically            can only do one assignment of a HIT. For example, if the HIT
one must also create an Amazon Web Services (AWS)                 were a survey and the requester only wanted each worker to
Account at https://aws-portal.amazon.com/gp/aws/                  do the survey once, they would make one HIT with many
developer/registration/index.html. This provides                  assignments. As another example, if the task was labeling
one with the unique digital keys necessary to interact with the   images and the requester wanted three different workers to
Mechanical Turk Application Programming Interface (API)           label every image (say, for data quality purposes), the re-
(discussed in detail in the Programming Interfaces section of     quester would make as many HITs as there are images to be
the Appendix).                                                    labeled and each HIT would have 3 assignments.
                                                                     When browsing for tasks, there are several criteria the
   Although Amazon provides a built-in mechanism for              workers can use to sort the available jobs: how recently the
tracking the reputation of the workers, there is no corre-        HIT was created, the wage offered per HIT, the total number
sponding mechanism for the requesters. As a result, one           of available HITs, how much time the requester allotted to
might imagine unscrupulous requesters could refuse to pay         complete each HIT, the title (alphabetical), and how soon the
their workers irrespective of the quality of their work. In       HIT expires. Chilton, Horton, Miller, and Azenkot (2010)
such a case there are two recourses for the aggrieved work-       show that the criterion most frequently used to find HITs
ers. One recourse is to report this to Amazon. If repeated        is the “recency” of the HIT (when it was created), and this
offenses occurred, the requester would be banned. Second,         has led some to periodically add available HITs to the job in
there are websites where workers share experiences and rate       order to make it appear as though the HIT is always fresh.
requesters (see the Turker Community section for more de-         While this undoubtedly works in some cases, Chilton and
tails). Requesters that exploit workers would have an increas-
ingly difficult time getting work done because of these exter-       3
                                                                       The Mechanical Turk website can be difficult to search and nav-
nal reputation mechanisms.                                        igate, so we will provide URLs whenever possible.
BEHAVIORAL RESEARCH ON MECHANICAL TURK                                               7

                                     Figure 4. Screenshot of the Mechanical Turk marketplace.

colleagues also found an outlier group of recent HITs that         designed and set up with the required information. Once the
were rarely worked on—presumably these are the jobs that           requester has created the HIT and is ready to have it worked
are being continually refreshed but are unappealing to the         on, the requester posts the HIT to Mechanical Turk. A re-
workers.                                                           quester can post as many HITs and as many assignments as
   Perhaps surprisingly, the offered wage is not often used        he or she wants, as long as the total amount owed to the work-
for finding HITs, and Chilton and colleagues actually found        ers (plus fees to Amazon) can be covered by the balance of
a slight negative relationship at the highest wages between        the requester’s Amazon Payments account.
the probability of a HIT being worked on and the wage of-              Once the HIT has been created and posted to Mechanical
fered. This finding is reasonably explained by unscrupulous        Turk, workers can see it in the listings of HITs and choose to
requesters using high wages as a bait for naive workers—           accept the task. Each worker then does the work and submits
which is corroborated by the finding that higher paying HITs       the assignment. After the assignment is complete, requesters
are more likely to be worked on, once the top 60 highest-          review the work submitted and can accept or reject any or all
paying HITs have been excluded.                                    of the assignments. When the work is accepted, the base pay
                                                                   is taken from the requesters account and put into the workers
   Internal or External HITs. Requesters can create HITs in        account. Requesters can also at this point grant bonuses to
two different ways, as internal or external HITs. An internal      workers. Amazon charges the requesters 10% of the total
HIT uses templates offered by Amazon, in which the task and        pay granted (base pay plus bonus) as a service fee, with a
all of the data collection is done on Amazon’s servers. The        minimum of $0.005 per HIT.
advantage of these types of HITs is that they can be gener-
ated very quickly and the most one needs to know to build              If there are more HITs of the same type to work on after
them is HTML programming. The drawback is that they                the workers complete an assignment, they are offered the op-
are limited to be single-page HTML forms. In an external           portunity to work on another HIT of the same type. There
HIT the task and data are kept on the requester’s server and       is even an option to automatically accept HITs of the same
provided to the workers through a frame on the Mechanical          type after completing one HIT. Most HITs have some kind
Turk site, which has the benefit that the requester can design     of initial time cost for learning how to do the task correctly,
the HIT to do anything he or she is capable of programming.        and so it is to the advantage of workers to look for tasks with
The drawback is that one needs access to an external server        many HITs available. In fact, Chilton et al. (2010) found that
and possibly more advanced programming skills. In either           the second-most-frequently used criterion for sorting is the
case, there is no explicit cue that the workers can use to dif-    number of HITs offered, as workers look for tasks where the
ferentiate between internal and external HITs, so there is no      investment in the initial overhead will pay off with lots of
difference from the workers perspective.                           work to be done.
                                                                       The HIT will be completed and disappear from the list
  Lifecycle of HIT. The standard process for HITs on Ama-          on Mechanical Turk when either of two things occur: all of
zon’s Mechanical Turk begins with the creation of the HIT,         the assignments for the HIT have been submitted, or the HIT
8                                           BEHAVIORAL RESEARCH ON MECHANICAL TURK

                                   Figure 5. Screenshot of an example image classification HIT.

expires. As a reminder, both the number of assignments that        ted poor work or has otherwise tried to illicitly get money
make up the HIT and the expiration time are defined by the         from the requester.
requester when the HIT is created. Also, both of these val-
ues can be increased by the requester while the HIT is still       Improving HIT Efficiency
running.
                                                                      How much to Pay. One of the first questions asked by
   Reviewing Work. Requesters should try to be as fair as          new requesters on Mechanical Turk is how much to pay for
possible when judging which work to accept and reject. If a        a task. Often, researchers come with the prior expectation of
requester is viewed as unfair by the worker population, that       laboratory subjects, who typically cost somewhat more than
requester will likely have a difficult time recruiting workers     the current minimum wage. However, recent research on the
in the future. Many HITs require the workers to have an ap-        behavior of workers (Chilton et al., 2010) demonstrated that
proval rating above a specified threshold, so unfairly reject-     workers had a reservation wage (the least amount of pay for
ing work can result in workers being prevented from doing          which they would do the task) of only $1.38 per hour, with an
other work. Most importantly, whenever possible requesters         average effective hourly wage of $4.80 for workers (Ipeirotis,
should be clear in the instructions of the HIT about the crite-    2010a).
ria on which work will be accepted or rejected.                       There are very good reasons for paying more in lab ex-
   One typical criterion for rejecting a HIT is if it disagrees    periments than on Mechanical Turk. Participating in a lab-
with the majority response or is a significant outlier (Dixon,     based experiment requires aligning schedules with the exper-
1953). For example, consider a task where workers classify         imenter, travel to and from the lab, and the effort required to
a post from Twitter as spam or not spam. If four workers           participate. On Mechanical Turk, the effort to participate is
rate the post as spam and one rates it as not spam, then this      much lower since there are no travel costs, and it is always on
may be considered valid grounds for rejecting the minority         the worker’s schedule. Moreover, because so many workers
opinion. In the case of surveys and other tasks, a requester       are using AMT as a source of extra income using free time,
may reject work that is done faster than a human could have        many are willing to accept lower wages than they might oth-
possibly done the task. Requesters also have the option of         erwise. Others have argued that because of the necessity for
blocking workers from doing their HIT. This extreme mea-           redundancy in collecting data (to avoid spammers and bad
sure should only be taken if a worker has repeatedly submit-       workers), the wage that might otherwise go to a single worker
BEHAVIORAL RESEARCH ON MECHANICAL TURK                                                9
is split amongst the redundant workers 4 . We discuss some of      EST). The $0.05 version was posted on August 13, 2010,
the ethical arguments around the wages on Mechanical Turk          the $0.03 version was posted on August 27, 2010, and the
in the Ethics and Privacy section.                                 $0.01 version was posted on September 17, 2010. We held
    A concern that is often raised is that lower pay leads to      the time and day of week constant because as mentioned ear-
lower quality work. However, there is evidence that for at         lier, both have shown to have seasonality trends (Ipeirotis,
least some kinds of tasks there seems to be little to no effect    2010a). Figure 6 shows the results of this experiment. The
of wage on the quality of work obtained (Mason & Watts,            response rate for the $0.01 survey was much slower than the
2009; Marge et al., 2010). Mason and Watts (2009) used             $0.03 and $0.05 versions which had very similar response
two tasks in which they manipulated the wage earned on             rates. While this is not a completely controlled study and
Mechanical Turk, while simultaneously measuring the quan-          is just meant for illustrative purposes, Buhrmester et al. (in
tity and quality of work done. In the first study, they found      press) and Huang et al. (2010) found similar increases in
that the number of tasks completed increased with greater          completion time with greater wages. Looking across these
wages (from $0.01 to $0.10), but no difference on the quality      studies one could conclude the relationship between wage
of work. In the second study, they found a participants did        and completion time is positive but non-linear.
more tasks when they received pay than when they received
no pay per task, but saw no effect of actual wage on quantity         Attrition. Attrition is a bigger concern in online experi-
or quality of the work.                                            ments than in laboratory experiments. While it is possible
    These results are consistent with the findings from the        for subjects in the lab to simply walk out of an experiment,
survey paper of Camerer and Hogarth (1999) which shows             this happens relatively rarely, presumably because of the so-
that for most economically motivated experiments varying           cial pressure the subjects might feel to participate. In the on-
the size of the incentives has little to no effect. This survey    line setting, however, user attrition can come from a variety
paper does, however indicate that there are classes of experi-     of sources. A worker could simply open up a new browser
ments, such as those based on judgments and decisions (e.g.        window and stop paying attention to the experiment at hand,
problem solving, item recognition/recall, and clerical tasks)      or they could walk away from their computers in the middle
where the incentive scheme has an effect on performance. In        of an experiment, or a user’s web browser or entire machine
these cases, however, there is usually a change in behavior        could crash, or their internet connectivity could cut out.
going from paying zero to some low amount, and little to
                                                                      One technique for reducing attrition online experiments
no change in going from a low amount to a higher amount.
                                                                   involves asking participants how serious they are about com-
Thus the norm on Mechanical Turk of paying less than one
                                                                   pleting the experiment and dropping the data from those
would typically pay laboratory subjects should not impact
                                                                   whose seriousness is below a threshold (Musch & Klauer,
large classes of experiments.
                                                                   2002). Other techniques involve putting anything that might
    Consequently, it is often advisable to start by paying less    cause attrition, like legal text and demographic questions, at
than the expected reservation wage, and then increasing the        the beginning of the experiment. Thus participants are more
wage if the rate of work is too low. Also, one way to increase     likely to dropout during this phase than during the data gath-
the incentive to participants without drastically increasing       ering phase (see (Reips, 2002) and follow up work by (Göritz
the cost to the requester is to offer a lottery to participants.   & Stieger, 2008)). Reips (2002) also suggests using the most
This has been done in other online contexts (Göritz, 2008).       basic and widely available technology in an online experi-
It is worth noting that requesters can post HITs that pay noth-    ment to avoid attrition due to software incompatibility.
ing, though these are rare and unlikely to be worked on un-
less there is some additional motivation (e.g., benefiting a
charity). In fact, previous work has shown that offering par-                       Conducting Studies on
ticipants financial incentives increases both the response and                        Mechanical Turk
retention rates of online surveys relative to not offering any
financial incentive (Göritz, 2006; Frick, Bächtiger, & Reips,
                                                                      In the following sections we show how to conduct re-
2001).
                                                                   search on Mechanical Turk for three broad classes of stud-
                                                                   ies. Depending on the specifics of the study being conducted,
   Time to Completion. The second most often asked ques-
                                                                   experiments on Mechanical Turk can fall anywhere on the
tion is how quickly work is completed. Of course, the an-
                                                                   spectrum between laboratory experiments and field experi-
swer to the question depends greatly on many different fac-
                                                                   ments. We will see examples of experiments that could have
tors: how much the HIT pays, how long each HIT takes, how
                                                                   been done in the lab but were put on Mechanical Turk. We
many HITs are posted, how enjoyable the task is, the repu-
                                                                   will also see examples of what amount to online field exper-
tation of the requester, etc. To illustrate the effect of one of
                                                                   iments. Throughout this section we outline the general con-
these variables, the wage of the HIT, we posted three differ-
                                                                   cepts that are unique to doing experiments on Mechanical
ent six-question multiple-choice surveys. Each survey was
                                                                   Turk and elaborate on the technical details in the Appendix.
one HIT with 500 assignments. We posted the surveys on
different days so that we would not have two surveys on the
site at the same time. But we did post them on the same day           4
                                                                      http://behind-the-enemy-lines.blogspot.com/
of the week (Friday) and at the same time of day (12:45pm          2010/07/mechanical-turk-low-wages-and-market.html
10                                                           BEHAVIORAL RESEARCH ON MECHANICAL TURK

                     100
                                                                                      form where he or she defines all the values for the various
                                                                                      properties of the HIT such as the number of assignments, pay
                                                                                      rate, title and description (see the Appendix for a description
                      80                                                              of all of the parameters of a HIT). After specifying the prop-
                                                                                      erties for the HIT, the requester then creates the HTML for
                                                                                      the HIT. In the HTML the requester specifies the type of in-
 Percent Completed

                      60
                                                                        Wage          put and content for each input type (e.g., survey question),
                                                                               0.01
                                                                               0.03
                                                                                      and for multiple choice questions, the value for each choice.
                      40                                                       0.05   The results are given back to the requester in a column sep-
                                                                                      arated file (.csv). There is one row for each worker and one
                      20
                                                                                      column for each question, where the worker’s response is in
                                                                                      the corresponding cell. Requesters are allowed to preview
                                                                                      the modified template to ensure that there are no problems
                                                                                      with the layout.
                           5            10              15       20
                               Days from HIT creation
                                                                                         Aside from standard HTML, HIT templates can also in-
                                                                                      clude variables that can have different values for each HIT
Figure 6. Response rate for three different six question multiple
                                                                                      which Mechanical Turk fills in when a worker previews the
choice surveys conducted with different pay rates.
                                                                                      HIT. For example, suppose one did a simple survey template
                                                                                      that asked one question: What is your favorite ${object}?
                                                                                      Here ${object} is a variable. When designing the HIT, a re-
Surveys                                                                               quester could instantiate this variable with a variety of values
                                                                                      by uploading a .csv file with ${object} as the first column
   Surveys conducted on Mechanical Turk share the                                     and all the values in the rows below. For example, a requester
same advantages and disadvantages as any online sur-                                  could put in values of: color, restaurant, and song. If done
vey (Andrews, Nonnecke, & Preece, 2003; Couper, 2000).                                this way, three HITs would be created, one for each of these
The issues surrounding online survey methodologies have                               values. Each one of these HITs would have ${object} re-
been studied extensively, including a special issue of Public                         placed with color, restaurant, and song. Each of these HITs
Opinion Quarterly devoted exclusively to the topic (Couper                            would have the same number of assignments as specified in
& Miller, 2008). The biggest disadvantage to conducting sur-                          the HIT template.
veys online is that the population is not representative of any                          Another way to build a survey on Mechanical Turk is to
geographic area, and Mechanical Turk is not even particu-                             use an external HIT, which requires you to host the survey
larly representative of the online population. Methods have                           on your own server or use an outside service. This has the
been suggested for correcting these selection biases in sur-                          benefit of increased control over the content and aesthetics
veys generally (Heckman, 1979; Berk, 1983), and the appro-                            of the survey as well as allowing one to have multiple pages
priate way to do this on Mechanical Turk is an open question.                         in a survey and generally more control over the form of the
Thus, as with any sample whether it be online or offline, re-                         survey. We will discuss external HITs more in the next few
searchers must decide for themselves if the subject pool on                           sections.
Mechanical Turk is appropriate for their work.                                           It is also possible to integrate online survey tools such as
   However, as a tool to conduct pilot surveys, or for surveys                        SurveyMonkey and Zoomerang with Mechanical Turk. One
that do not depend on generalizability, Mechanical Turk can                           may want to do this instead of simply creating the survey
be a convenient platform to construct surveys and collect re-                         within Mechanical Turk if one has already created a long
sponses. As mentioned in the Introduction, relative to other                          survey using one of these tools and would simply like to re-
methodologies Mechanical Turk is very fast and inexpensive.                           cruit participants through Mechanical Turk. To integrate with
However, this benefit comes with a cost: the need to val-                             a pre-made survey on another site, one would create a HIT
idate the responses to filter out bots and workers who are                            that presents the worker with a unique identifier, for example
not attending to the purpose of the survey. Fortunately, val-                         by generating a random string with JavaScript, a link to the
idating responses can be managed in several relatively time-                          survey, and a submit button. In the survey, one would include
and cost-effective ways, as outlined in the Quality Assurance                         a text field for the worker to enter their unique identifier. The
section. Moreover, because workers on Mechanical Turk                                 requester would then know to only approve the HITs that
are typically paid after completing the survey, they are more                         have a survey with a matching unique identifier.
likely to finish it once they start (Göritz, 2006).
   Amazon provides a HIT template to aid in the construction                          Random Assignment
of surveys (Amazon also provides other templates which we
discuss in the HIT Templates section of the Appendix). Us-                               The cornerstone of most experimental designs is random
ing a template means that the HIT will run on an Amazon                               assignment of participants to different conditions. The key
machine. Amazon will store the data from the HIT and the                              to random assignment on Mechanical Turk is ensuring that
requester can retrieve the data at any point in the HIT’s life-                       every time the study is done, it is done by a new worker.
cycle. The HIT template gives the requester a simple web                              Although it is possible to have multiple accounts (see the
BEHAVIORAL RESEARCH ON MECHANICAL TURK                                              11
Workers section), it is against Amazon’s policy, so random        the study is built so that this mapping is checked when a
assignment to unique Worker IDs is a close approximation to       worker accepts the HIT, the experimenter can be sure that
uniquely assigning individuals to conditions. Additionally,       each worker only experiences a single condition. Another
tracking worker IP addresses and using browser cookies can        option is to simply refuse entry to workers who have already
help ensure unique workers (Reips, 2000).                         done the experiment. In this case requesters must clearly in-
   One way to do random assignment on Mechanical Turk is          dicate in the instructions that workers will only be allowed to
to use the templates and JavaScript to replace key statements,    do the experiment once.
images, etc. according to the condition assignment. The main         Mapping the Worker ID to the condition assignment does
drawback to this method is that the functionality is limited to   not, of course, rule out the possibility that the workers will
what can be accomplished with a single page of basic HTML.        discuss their condition assignments. As we discuss in the
Of course, there are still interesting studies that can be done   Turker Community section, workers are most likely to com-
this way—for instance, Solmon Asch’s original study on im-        municate about the HITs on which they worked in the online
pression formation (Asch, 1940). However, the real strength       forums focused on Mechanical Turk. It is possible that these
of conducting studies on Mechanical Turk comes from the           conversations will include information about their condition
ability to implement any study that can be conducted online       assignments, and there is no way to prevent participants from
quickly and at low cost in one place.                             communicating. This can also be an issue in general online
   Another way to do random assignment on Mechanical              experiments and in multi-session offline experiments. Me-
Turk is to create external HITs, which allows one to host         chanical Turk has the benefit that these conversations on the
any web-based content within a frame on Amazon’s Me-              forums can be monitored by the experimenter.
chanical Turk. This means that any functionality one can             When using these methods, the preview page must be de-
have with web-based experiments—including setups based            signed to be consistent with all possible condition assign-
on JavaScript, PHP, Adobe Flash, etc.—can be done on Me-          ments. For instance, Mason and Watts (2009) randomized
chanical Turk. There are three vital components to random         the pay the participants received. Because the wage of-
assignment with external HITs. First, the URL of the landing      fered per HIT is visible before the worker even previews the
page of the study must be included in the parameters for the      HIT, the different wage conditions had to be done through
external HIT so Mechanical Turk will know where the code          bonuses, and could not be revealed until after the participant
for the experiment resides. Second, the code for the experi-      had accepted the HIT.
ment must capture three variables passed to it from Amazon           Finally, for many studies it is important to calculate and
when a worker accepts the HIT: the “HITId”, “WorkerId”            report intent-to-treat effects. Imagine a laboratory study that
and “AssignmentId”. Finally, the experiment must provide          measures the effect of blaring noises on reading comprehen-
a “submit” button that sends the Assignment ID (along with        sion that finds the counter-intuitive result that the noises im-
any other data) back to Amazon (using the externalSubmit          prove comprehension. This result could be explained by the
URL, as described in the Appendix).                               fact that there was a higher drop-out rate in the “noises” con-
   For a web-based study that is being hosted on an exter-        dition, and the remainder either had superior concentration
nal server but delivered on Mechanical Turk, there are a few      or were deaf and therefore unaffected. In the context of Me-
ways to ensure participants are being assigned to only one        chanical Turk, one should be sure to keep records of how
condition. The first way is to post a single HIT with multiple    many people accepted and how many completed the HIT in
assignments. In this way, Mechanical Turk ensures that each       each condition.
assignment is completed by a different worker—each worker
will only see one HIT available. Because every run through        Synchronous Experiments
the study is done by a different person, random assignment            Many experimental designs have the property that one
can be accomplished by ensuring the study generates a ran-        subject’s actions can affect the experience and possibly the
dom condition every time a worker accepts a HIT.                  payment of another subject. Yet Mechanical Turk was de-
   While this method is relatively easy to accomplish, it can     signed for tasks that are asynchronous in nature, in which
run into problems. The first arises when one has to re-run an     the work can be split up and worked on in parallel. Thus,
experiment. There is no built-in way to ensure that a worker      it is not a priori clear how one could conduct these types
who has already completed a HIT won’t be able to return           of experiments on Mechanical Turk. In this section we de-
the next time a HIT is posted and complete it again, receiv-      scribe one way synchronous participation can be achieved,
ing a different condition assignment the second time around.      by building a subject panel, notifying the panel of upcom-
Partially this can be dealt with by careful planning and test-    ing experiments, providing a “waiting room” for queuing
ing, but some experimental designs may need to be repeated        participants, and handling attrition during the experiment.
multiple times while ensuring participants are receiving the      The methods discussed here have been used successfully by
same condition each time. A simple but more expensive way         Suri and Watts (2011) and Mason and Watts (unpublished) in
to deal with repeat workers is to allow all workers to com-       over 150 experimental sessions combined as well as by Mao,
plete the HIT multiple times and disregard subsequent sub-        Parkes, Procaccia, and Zhang (2011).
missions. A more cost-effective way is to store the mapping
between a Worker ID (passed to the site when the worker             Building the Panel. A very important part of running
accepts the HIT) and that worker’s assigned condition. If         synchronous experiments on Mechanical Turk is building
You can also read