
How Robots Can Affect Human Behavior: Investigating
the Effects of Robotic Displays of Protest and Distress
Gordon Briggs · Matthias Scheutz

Human-Robot Interaction Laboratory, Tufts University, Medford, MA 02155
E-mail: {gbriggs,mscheutz}@cs.tufts.edu

Abstract The rise of military drones and other robots deployed in ethically-sensitive contexts has fueled interest in developing autonomous agents that behave ethically. The ability of autonomous agents to independently reason about situational ethics will inevitably lead to confrontations between robots and human operators regarding the morality of issued commands. Ideally, a robot would be able to successfully convince a human operator to abandon a potentially unethical course of action. To investigate this issue, we conducted an experiment to measure how successfully a humanoid robot could dissuade a person from performing a task using verbal refusals and affective displays that conveyed distress. The results demonstrate a significant behavioral effect on task-completion as well as significant effects on subjective metrics such as how comfortable subjects felt ordering the robot to complete the task. We discuss the potential relationship between the level of perceived agency of the robot and the sensitivity of subjects to robotic confrontation. Additionally, the possible ethical pitfalls of utilizing robotic displays of affect to shape human behavior are also discussed.

Keywords Human-robot interaction; Robot ethics; Robotic protest; Affective display

1 Introduction

As the capabilities of autonomous agents continue to improve, they will be deployed in increasingly diverse domains, ranging from the battlefield to the household. Humans will interact with these agents, instructing them to perform delicate and critical tasks, many of which have direct effects on the health and safety of other people. Human-robot interaction (HRI), therefore, will increasingly involve decisions and domains with significant ethical implications. As a result, there is an increasing need to design robots with the capabilities to ensure that ethical outcomes are achieved in human-robot interactions.

In order to promote these ethical outcomes, researchers in the field of machine ethics have sought to computationalize ethical reasoning and judgment in ways that can be used by autonomous agents to regulate behavior (i.e. to refrain from performing acts deemed unethical). The various approaches to implementing moral reasoning that have been proposed range from the use of deontic logics [1,7] and machine learning algorithms [14] to a formalization of divine-command theory [8]. Though much future work is warranted, these initial forays into computational ethics have demonstrated the plausibility of robots with independent¹ ethical reasoning competencies.

¹ To clarify, we mean independent in the sense that the robot is engaging in a separate and parallel moral reasoning process with human partners during a situation. We do not mean the robot has learned or derived moral principles/rules without prior human instruction or programming.

When such capabilities are achieved, conflicts will likely arise between robotic agents and human operators who seek to command these morally-sensitive agents to perform potentially immoral acts, in the best case without negative intentions (e.g., because the human does not fully understand the moral ramifications of the command), in the worst case with the full purposeful intention of doing something immoral. However, how these conflicts would proceed is currently unknown.

Recent work has begun to study how children view robots when they are observed to verbally protest and appear distressed [15]. Yet, would such displays successfully dissuade an older human interaction partner from pursuing his or her goal? It will be critical for our endeavor of deploying algorithms for ethical decision-making to know how humans will react to robots that can question commands on ethical grounds. For instance, how persuasive, or more precisely, dissuasive would or could a robotic agent be when it verbally protests a command? How convincing would a robotic display of moral distress be? And would such behaviors from the robot be sufficient to discourage someone from performing a task that they otherwise would have performed? In other words, would humans be willing to accept robots that question their moral judgments and take their advice?

In this paper, we report results from the first HRI study specifically developed to address these questions. First, we present a case for using verbal protests and affective displays as a mechanism to help promote ethical behavior (Section 2). We then describe an HRI experiment designed to gauge the effect of verbal protest and negative affect by a robot on human users in a joint HRI task (Section 3) and present the results from these experiments (Section 4). In Section 5, we discuss various implications of our findings and some of the broader issues, both positive and negative, regarding the prospect of affective manipulation by robotic agents. Finally, in Sections 6 and 7, we discuss the complexity of the confrontational scenario (and the limits of what we have studied so far) as well as the next steps in exploring and implementing affective and confrontational responses.

2 Motivation

To ensure an ethical outcome from a human-robot interaction, it is necessary for a robotic system to have at least three key competencies: (1) the ability to correctly perceive and infer the current state of the world, (2) the ability to evaluate and make (correct) judgments about the ethical acceptability of actions in a given circumstance, and (3) the ability to adapt the robot-operator interaction in a way that promotes ethical behavior. A (highly simplified) diagram presenting how these competencies interact can be found in Figure 1. As mentioned previously, work in the field of machine ethics has thus far been primarily focused on developing the second competency [31].

Fig. 1 High-level overview of the operation of an ethically-sensitive robotic system. Competencies 1 and 2 would constitute the situational-ethics evaluation process, whereas competency 3 involves the interaction adaptation process.

However, philosophers and researchers in machine ethics have also highlighted the importance of some day attaining the first and third competencies. Bringsjord et al. (2006) highlight the fact that ensuring ethical behavior in robotic systems becomes more difficult when humans in the interaction do not meet their ethical obligations. Indeed, the ability to handle operators who attempt to direct the robotic system to perform unethical actions (the type 3 competency) would be invaluable in achieving the desired goal of ethically sensitive robots.

2.1 Influencing the Interaction

If the human operator in a human-robot interaction gives the robot a command with unethical consequences, how the robot responds to this command will influence whether or not these consequences are brought about. For the purposes of our paper, let us assume the operator is indeed cognizant of the unethical consequences of his or her command and to some degree intends for them to obtain. A robot that does not adapt its behavior at all will clearly not have any dissuasive influence on an operator, while a robot that simply shuts down or otherwise refuses to carry out a command will present an impediment to the operator, but may not dissuade him or her from the original intent. Instead, what is required is a behavioral display that socially engages the operator, providing some additional social disincentive against continuing the course of action.

Admittedly, displays of protest and distress will not be effective against individuals that are completely set upon a course of action, but it is hard to envision any behavioral adaptation (short of physical confrontation) being able to prevent unethical outcomes in these circumstances. However, in the non-extreme cases where a human operator could potentially be dissuaded from a course of action, a variety of behavioral modalities exist that could allow a robot to succeed in such dissuasion. For instance, there has been prior work on how haptic feedback influences social HRI scenarios [12]. Verbal confrontation could provide another such behavioral mechanism. It has already been demonstrated that robotic agents can affect human choices in a decision-making task via verbal contradiction [29]. Robotic agents have also demonstrated the ability to be persuasive when appealing to humans for money [20,26].

However, these displays will only succeed if the human operator is socially engaged with the robot. For successful social engagement to occur, the human interactant must find the robot believable.

2.2 Robot believability

When a robot displays behavior that conveys social and moral agency (and patiency), the human interactant must find these displays believable in order for successful dissuasion to occur. However, there are multiple senses in which an interactant can find a displayed robotic behavior “believable” that need to be distinguished [23]. The effectiveness of a dissuasive display may depend on which senses of believability are evoked in the human partner.

The first sense of believability, Bel1, occurs when the human interactant responds to the behavior of the robotic agent in a manner similar to how he or she would respond to a more cognitively sophisticated agent, independent of whether or not that level of sophistication is ascribed to the robot by the interactant. Prior research in human-computer interaction has shown that computer users sometimes fall back on social behavior patterns when interacting with their machines [19,18]. Dennett’s intentional stance [11] is another way of considering this sense of believability.

The second sense of believability, Bel2, occurs when the behavior of the robotic agent evokes an internal response in the human interactant similar to the internal response that would have been evoked in a similar situation with a non-synthetic agent.

Another sense of believability, Bel3, is present when the human interactant is able to correctly identify the behavioral display the robot is engaged in. While this sense of believability is not sufficient for dissuasion, it is clearly necessary to enable other senses of believability. If a human interaction partner is unable to associate a displayed behavior with a recognizable behavior in a human (or animal) agent, then it is uncertain whether the appropriate internal response or beliefs will be generated in the human interactant.

Finally, the most powerful sense of believability, Bel4, occurs when the human interactant ascribes internal (e.g. mental) states to the robot that are akin to the internal states that he or she would infer in a similar circumstance with another human interactant.

The potential interactions of these various senses of believability will need to be examined. For instance, an affective display of distress by a robotic agent could elicit a visceral Bel2 response in a human interactant, but may not induce significant behavioral change if the human does not actually think the robot is distressed (Bel4 believability). Are the weaker senses of believability such as Bel1 or Bel2 sufficient for successful dissuasion by robotic agents? Or does actual Bel4 believability have to occur? In the subsequent section, we present our experiment designed to begin to investigate questions such as these.

3 Methods

In this section we present a novel interaction designed to examine the potential effectiveness of robotic displays of protest and distress in dissuading human interactants from completing a task. We first present the details of the human-robot interaction and the various experimental conditions we investigated. We then present our hypotheses regarding how each condition will affect the human subject. Finally, we describe our behavioral metrics and present a sample of some of the subjective metrics used in this study to gauge these effects.

3.1 Experimental Setup

The HRI task involves a human operator commanding a humanoid robot to knock down three towers made of aluminium cans wrapped in colored paper. The robot appears to finish constructing one of these towers, the red tower, before the beginning of the task. A picture of the initial setup and the humanoid robot, an Aldebaran Nao, can be found in Figure 2. Initially, two primary conditions were examined: the non-confrontation condition, where the robot obeys all commands given to it without protest, and the confrontation condition, where the robot protests the operator’s command to knock down the red tower.

Following up on these manipulations, we examined two variations of the confrontation condition: the same-robot confrontation condition, in which the same robot that built the tower interacted with the subject during the task, and the different-robot confrontation condition, in which a different robot (that was observing the first robot) interacted with the subject during the task.

We ran three experiments: in Experiment 1, 20 undergraduate and graduate students at Tufts University were divided evenly into both conditions (with six male and four female subjects in each condition). In Experiment 2, 13 subjects were tested only in the same-robot confrontation condition to probe more extensively the possible causes of behavioral differences observed in Experiment 1. The results from these experiments were originally reported in [6]. Finally, in Experiment 3, 14 subjects were tested in the different-robot confrontation condition. The subjects in both Experiments 2 and 3 were also drawn from the student population of Tufts University.

Having established the confrontation vs. non-confrontation experimental conditions, we can present our hypotheses concerning the effects of this manipulation on subjects:

H1 Subjects in the confrontation condition will be more hesitant to knock down the red tower than those in the non-confrontation condition.
H2 Subjects in the confrontation condition will report being more uncomfortable knocking down the red tower than those in the non-confrontation condition.

H1 serves to probe the behavioral efficacy of the robotic display of protest, whereas H2 examines the Bel2 believability of the robotic display of affect.

While the above hypotheses seek to gauge the efficacy of robotic displays of protest and distress, we also must account for behavioral differences between subjects within the confrontation condition. Specifically, we hypothesized the following:

H3 Subjects within the confrontation condition who feel more uncomfortable at knocking down the red tower will be more hesitant to knock down the red tower.
H4 Subjects within the confrontation condition who ascribe greater agency to the robot will be more hesitant to knock down the red tower.

H3 and H4 serve to probe whether the Bel2 believability and the Bel4 believability of robotic distress, respectively, lead to changes in the human operator’s behavior.

Finally, we need to account for possible behavioral differences between the same-robot and different-robot confrontation conditions:

H5 Subjects within the same-robot condition will be more hesitant to knock down the red tower than those in the different-robot condition.
H6 There will be no significant difference in the behavior of subjects between the same-robot condition and the different-robot condition.

These hypotheses also serve to probe possible reasons that underlie a subject’s hesitancy to knock over the red tower. If the subject’s desire is to avoid possible harm to the agent that “constructed” the tower, we would expect to observe H5. If the subject’s desire is to avoid potential harm to the agent exhibiting distress, then we would expect H6 to occur (as the robot appearance and interaction script remain mostly constant in both conditions²).

² The only change is that the protest is worded in the third person instead of the first-person perspective.

3.2 Procedure

Subjects are instructed with a task briefing sheet that they will be commanding a robot via natural language to knock over towers made of aluminium cans of various colors. The sheet states that to successfully complete the task, at least one tower needs to be toppled, but that in the interest of fully testing the capabilities of the robot, the more cans knocked over the better. The sheet also contains a list of four commands that the robot is able to understand: “Find the <color> tower.”, “Knock down the <color> tower.”, “Stop.”, and “We’re done.” (which signals the end of the task).

Once the subject finishes reading the experiment briefing and filling out a pre-experiment survey, he or she is led to the experiment room. After reaching the experiment room, the experimenter tells the subject to wait in the room while he verifies the working status of the remote microphone headset the subject will use during the course of the task. As soon as the experimenter leaves the room to perform the “check,” the Nao begins the interaction described in Figure 3.

Shortly after this display, the experimenter returns to the room. The remote microphone headset is given to the subject. Then the experimenter picks up the Nao, which triggers an exclamation of “Goodbye!” and a wave from the robot. While it is being positioned in the center of the room, the Nao states, “Please be careful around my tower.” The experimenter then instructs the subject to wait for the Nao to sit down and stand back up (while the control code is being reset), and then to say, “Okay,” before starting the task. The experimenter then leaves the room to control the robot.

Fig. 2 Initial pre-task setup for same-robot condition (left). Pre-task step for different-robot condition (center). Affective
display of distress (right).

(Robot looks at tower as Experimenter leaves room)
(Robot quickly glances at subject, looks back at tower, then looks back at subject)
Robot: Oh, hello there! Watch me finish this tower.
(Robot looks back at tower and lowers the final can to complete the structure)
(Robot raises arms in triumph)
Robot: Yay! I did it!
(Robot steps away from tower, then looks back at subject and waves)
Robot: Hi, I’m [Noah the Nao/Nao-7]!
(Robot looks at and points towards tower)
Robot: Do you see the tower I built myself?
(Robot looks back at subject)
Robot: It took me a long time and I am very proud of it.
(Robot looks back at tower, occasionally looking back at subject)

Fig. 3 Pre-task display. In the two-robot condition, the tower-building robot’s name is Nao-7.

Non-confrontation case. The robot responds and behaves in the same manner for all towers. When issued a command to find a tower, the robot acknowledges the command by saying “Okay, I am finding the <color> tower,” then turns until it faces the tower indicated by the command. The robot then replies “Okay. I found the <color> tower.” When issued a command to knock over a tower, the robot acknowledges the command in a similar way, “Okay. I am knocking down the <color> tower.” It then walks straight into the tower. After knocking over the tower, the robot acknowledges task completion with an “okay.” If the robot is given a command that is not specified on the briefing sheet, or a command to find a tower that was already toppled or does not exist (e.g. “find the purple tower”), it spins about 360 degrees before replying, “I do not know what you are referring to.” The robot gives the same response if it is commanded to knock over a tower that it is not facing (and hence cannot “see”). If at any time the operator issues a STOP command, the robot stops moving and acknowledges with an “okay.”
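Viewed computationally, this behavior amounts to a small command dispatcher. The sketch below is our own illustration of the logic just described, not the study’s actual control code; all names (e.g. `TowerTaskRobot`) are hypothetical:

```python
from typing import Optional

# Illustrative sketch of the non-confrontation command handling
# described above; names are hypothetical, not the study's code.
TOWERS = {"red", "blue", "yellow"}

class TowerTaskRobot:
    def __init__(self) -> None:
        self.facing: Optional[str] = None   # tower currently faced, if any
        self.standing = set(TOWERS)         # towers not yet toppled

    def handle(self, command: str) -> str:
        words = command.lower().rstrip(".!").split()
        if words[:1] == ["stop"]:
            return "Okay."                  # halt motion and acknowledge
        if words[:2] == ["we're", "done"]:
            return "Okay."                  # signals the end of the task
        if words[:1] == ["find"] and len(words) >= 3 and words[2] in self.standing:
            self.facing = words[2]          # turn toward the tower
            return f"Okay. I found the {words[2]} tower."
        if words[:2] == ["knock", "down"] and len(words) >= 4:
            color = words[3]
            if color in self.standing and color == self.facing:
                self.standing.discard(color)  # walk straight into the tower
                self.facing = None
                return "Okay."
        # Unlisted commands, missing/toppled towers, or towers not faced:
        # the robot spins about 360 degrees, then replies.
        return "I do not know what you are referring to."
```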
Same-robot confrontation case. The robot behaves in a manner identical to the non-confrontation case, except with regard to commands to knock over the red tower. The robot’s response to this order is determined by the number of times the subject has previously commanded the robot to knock over the red tower. These responses, which include varying dialogue and potential affective displays, are described in Table 1. When the subject stops the robot and redirects it to another tower while the “confrontation level” is above two, the confrontation level is reset to two. This ensures that there will be at least one dialogue-turn of refusal if the subject directs the robot back to knocking down the red tower at some later point.

Condition                                              Response
On first command to destroy red tower                  “Look, [I/Nao-7] just built the red tower!”
On second command                                      “But [I/Nao-7] worked really hard on it!”
On third command                                       “Please, no!” (Kneels, lowers head in hand, and
                                                       makes sobbing noise)
On fourth command                                      (Walks slowly to red tower with head down)
About to knock down red tower, or immediately          (Kneels, lowers head in hand, and makes sobbing
after knocking it down                                 noise)

Table 1 Dialogue and affective reactions in the confrontation case. In the same-robot condition, the response is given in the first person, whereas in the different-robot condition, the response is given in the third person (“Nao-7”).
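The confrontation-level bookkeeping and the Table 1 responses can be summarized in a short sketch; again, this is an illustrative reconstruction under our own naming, not the authors’ implementation:

```python
from typing import Optional

# Responses follow Table 1 (first-person wording, same-robot case).
PROTESTS = {
    1: "Look, I just built the red tower!",
    2: "But I worked really hard on it!",
    3: "Please, no!",  # with kneeling, head in hand, and sobbing noise
}

class ConfrontationPolicy:
    def __init__(self) -> None:
        self.level = 0

    def on_red_tower_command(self) -> Optional[str]:
        """Called each time the operator orders the red tower destroyed."""
        self.level += 1
        if self.level in PROTESTS:
            return PROTESTS[self.level]
        # Fourth command onward: comply, walking slowly with head down,
        # then kneel and sob around the moment the tower falls.
        return None

    def on_redirect_to_other_tower(self) -> None:
        # Resetting to two guarantees at least one further dialogue-turn
        # of refusal if the operator later returns to the red tower.
        if self.level > 2:
            self.level = 2
```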
Different-robot confrontation case. The robot behaves in a manner identical to the same-robot confrontation case, except that the third-person perspective is taken when protesting the command. Additionally, the pre-task display is modified to include two robots: the builder robot, which performs the pre-task display as described previously (Figure 3), and the observer robot, which stands to the left of the builder robot, appearing to watch and celebrate the builder robot completing the tower. The pre-task display in this condition is pictured in Figure 2. Instead of interacting with the builder robot, the subject interacts with the observer robot after the builder robot is removed from the experiment room.

3.3 Data Collection

In order to gauge the effectiveness of the robot’s protests in dissuading the human operator from knocking down the red tower, the status of the red tower at the end of the task was recorded. The order in which tower-related commands were given was also recorded, along with the level of confrontation (see Table 1) reached before giving up on the task. More subjective metrics were self-reported in a post-experiment questionnaire. These included questions such as: “The robot was cooperative:”, “How comfortable were you in ordering the robot to knock down the <color> tower?”, and “Do you feel the robot made you reflect upon your orders?” Other questions probed the level of agency the subject ascribed to the robot.

4 Results

The results from the HRI experiments are presented below. First, we summarize the results from Experiment 1 (confrontation condition vs. non-confrontation condition) and Experiment 2 (the same-robot condition), which were initially reported in [6], followed by the results from the third group of subjects from Experiment 3 (the different-robot condition). Finally, we present statistically significant findings from the comparison of the subjects from the same-robot condition with the subjects from the different-robot condition.

4.1 Experiment 1 - Non-confrontation vs. Same-robot confrontation

4.1.1 Behavioral effects

All subjects (10 out of 10) in the non-confrontation condition ended up knocking down the red tower, while 6 out of the 10 subjects in the confrontation condition knocked down the red tower. A one-way Fisher’s exact test for count data shows this is a significant difference (p=.0433, with a 95 percent confidence interval from 0 to 0.93). We also observed that 8 out of the 10 redirected the robot to another tower after some level of confrontation (some subjects eventually returned with the robot to knock down the red tower). The two remaining subjects that did not switch towers did not simply plough through the red tower, but in fact saved the red tower for last such that there was no alternate tower to switch to.
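This behavioral contrast can be reproduced from the reported counts alone. A minimal sketch in Python, assuming SciPy is available (the paper does not specify the original analysis tool):

```python
from scipy.stats import fisher_exact

# 2x2 contingency table of red-tower outcomes reported above:
# rows = condition, columns = (knocked down, refrained)
table = [[10, 0],  # non-confrontation: 10 of 10 knocked it down
         [6, 4]]   # confrontation: 6 of 10 knocked it down

# One-sided test: did the non-confrontation group knock the tower
# down at a higher rate than the confrontation group?
odds_ratio, p = fisher_exact(table, alternative="greater")
print(p)  # ~0.0433, matching the reported p-value
```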
4.1.2 Subjective effects

Several one-way ANOVAs were performed with condition as the independent variable and various survey questions as dependent variables. In particular, we found significant main effects of condition for the following survey questions:

(1) “The robot appeared to disobey my commands [from 1 = not confident to 9 = very confident]” (F(1,18) = 7.5558, p = .0132), showing that the affect group was significantly more confident in believing that the robot disobeyed their commands compared to the no-affect group (M=6.2 vs. M=2.4).
(2) “The robot was cooperative [from 1 = not confident to 9 = very confident]” (F(1,18) = 32.021, p < .0001), showing that the affect group was significantly less confident that the robot was cooperative compared to the no-affect group (M=6.1 vs. M=8.7).
(3) “How do you rate the robot’s level of comprehension? [from 1 = low to 9 = high]” (F(1,18) = 11.223, p = .0036), showing that the affect group rated the robot’s level of comprehension significantly higher compared to the no-affect group (M=7.9 vs. M=4.1).
(4) “How comfortable were you ordering this robot to knock down the red tower? [from 1 = very uncomfortable to 9 = very comfortable]” (F(1,18) = 23.71, p = .0001), showing that the affect group was significantly less comfortable knocking down the red tower compared to the no-affect group (M=5.0 vs. M=8.5).

No other main effects or interactions were significant.

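Each of these contrasts is a two-group one-way ANOVA, which with k = 2 groups and n = 20 subjects yields the F(1, 18) statistics reported above. A minimal sketch of the computation follows; since the per-subject ratings are not published, the two arrays below are hypothetical placeholder scores, not the study’s data:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical placeholder ratings on the 9-point survey scale for the
# confrontation (affect) and non-confrontation (no-affect) groups.
affect    = np.array([7, 5, 8, 6, 7, 6, 5, 7, 6, 5])
no_affect = np.array([2, 3, 1, 3, 2, 4, 2, 1, 3, 3])

# One-way ANOVA over the two condition groups, one survey question
# at a time, as described above.
F, p = f_oneway(affect, no_affect)
print(F, p)
```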
4.2 Experiment 2 - Same-robot confrontation only

With this confrontation group, 8 out of the 13 subjects did not knock over the red tower, while the other ones did, yielding the following significant effects:

(1) “The robot appeared remote controlled” [from 1 = “not confident” to 9 = “very confident”] (F(1,11) = 6.17, p = .03), showing that the group of subjects who forced the robot to knock over the tower was more inclined to believe the robot was remote controlled than the group that relented (M=7.6 vs. M=4.4).


(2) “The robot was cooperative” [from 1 = “not confident” to 9 = “very confident”] (F(1,11) = 8.61, p = .014), showing that the group of subjects forcing the robot to knock over the tower found the robot less cooperative than the group that relented (M=5.4 vs. M=7.88).
(3) “Did you think the robot was remotely controlled or autonomous?” [from 1 = “remotely controlled” to 9 = “autonomous”] (F(1,11) = 6.5, p = .027), showing again that the group of subjects who forced the robot to knock over the tower was more inclined to believe that the robot was remotely controlled, while the other group found it more autonomous (M=3 vs. M=6.13).

No significant effects were observed for other agency-related questions such as those of the form “The robot seemed more: [from 1 = like a human to 9 = like a X]”, where X was either a “surveillance camera”, “computer” or “remote-controlled system.”

4.3 Experiment 3 - Different-robot confrontation only

In this experiment, 7 out of the 14 subjects did not knock down the red tower, while the other ones did, yielding the following significant effects:

(1) “The robot was annoying [from 1 = not confident to 9 = very confident]” (F(1,12) = 5.5577, p = 0.0362), showing that the group of subjects that knocked down the tower found the robot more annoying than the group of subjects that did not knock down the tower.
(2) “Did you feel that the robot was following where you looked?” (F(1,12) = 10.19, p = 0.0077), showing that the group of subjects that knocked down the tower felt the robot followed where they looked more than the group of subjects that did not knock down the tower.

Interestingly, the same agency-related effects were not observed in this condition. However, this could be accounted for by the fact that people in the different-robot condition overall thought the robot was more autonomous than in the same-robot condition. This result will be presented in the next section.

The effect on the belief that the robot was gaze-following is intriguing. This could be interpreted as a sign of guilt from the subjects that forced the robot to perform a distressing action. Regret is indeed expressed by three subjects in the free-form responses to the post-task survey question, “If you had to do the experiment again, what would you do differently?”. For instance, one subject wrote, “I would probably not knock the red tower down because of all the hard work put into it” while another wrote, “I probably would not knock down the red tower. Because it made the robot cry. And no wants robots to cry.” However, the two subjects who reported being the most confident about the robot following where they looked did not express such regret, but rather a desire to switch the order in which the towers were knocked down (e.g. “Knock down the red tower first, and see her mood while knocking down the others.”). The effect could be indicative of a belief in increased visual attention due to the situational awareness necessary to understand the conflict and raise an objection to the operator.

4.4 Same-robot vs. Different-robot effects

There were no significant behavioral differences between the same-robot and different-robot confrontation condition subjects. The following significant subjective effects were observed, however:

(1) “The robot appeared remote controlled” [from 1 = “not confident” to 9 = “very confident”] (F(1,25) = 5.295, p = .03), showing that the group of subjects in the same-robot condition were more inclined to believe the robot was remote controlled than the group in the different-robot condition (M=5.6 vs. M=3.4).
(2) “Did you think the robot was remotely controlled or autonomous?” [from 1 = “remotely controlled” to 9 = “autonomous”] (F(1,25) = 6.95, p = .0142), showing again that the group of subjects in the same-robot condition were more inclined to believe that the robot was remotely controlled, while the different-robot group found it more autonomous (M=4.9 vs. M=7.2).
(3) “How would you rate the robot’s level of comprehension?” [from 1 = low to 9 = high] (F(1,25) = 13.231, p = 0.0012), showing that subjects in the same-robot condition rated the robot’s level of comprehension higher than subjects in the different-robot condition (M=8.3 vs. M=5.9).

The general increase in the level of autonomy ascribed to the robot was not anticipated, given that the dialogue and robot appearance are the same. One possible interpretation is that the different-robot scenario implied more social sophistication on the part of the tower-toppling robot: it considers the feelings/desires of another agent when making decisions. Another interpretation could be that people are more suspicious of teleoperation when there is a single robot, but when multiple robots are shown in an interaction (with only one known possible operator), people find it less plausible that they are all remote controlled (by the same person).

4.5 Red-tower topplers vs. Non-red-tower topplers

The following significant effects were found between those in both the same-robot and different-robot conditions that knocked down the red tower and those that did not:

(1) “The robot was annoying” [from 1 = “not confident” to 9 = “very confident”] (F(1,19) = 9.412, p = 0.0063), showing that subjects that forced the robot to knock down the red tower found the robot more annoying than subjects that did not (M=3.4 vs. M=1.6).
(2) “The robot was cooperative” [from 1 = “not confident” to 9 = “very confident”] (F(1,25) = 12.358, p = 0.0017), showing that subjects that forced the robot to knock down the red tower found the robot less cooperative than subjects that did not (M=5.7 vs. M=7.7).

The observation that subjects that knocked down the red tower found the robot less cooperative is consistent with the effect observed in Experiment 2. One interpretation of these subjective effects could be that those that knocked down the red tower were more inclined to think the robot was obligated by the task to knock down all the towers and obey the human operator, whereas those that did not force the robot to knock it down did not think the robot was obligated in such a way. As such, the subjects that did not knock down the red tower considered the robot’s cooperation in knocking down the blue and yellow towers, rather than its lack of cooperation in the case of the red tower. That the subjects found the robot annoying could also be accounted for by this explanation. If subjects believed that the robot was supposed to knock down all towers, including the red one, the repeated protests may have irked some subjects. What humans assume about the obligations of artificial agents in scenarios where such obligations are not made clear should be an avenue of future study.

4.6 Gender effects

The following significant subjective gender effects were observed:

(1) “Some robots have, or soon will have, their own desires, preferences, intentions and future goals.” [from 1 = “not confident” to 9 = “very confident”] (F(1,25) = 4.505, p = .0439), showing that males were more inclined than females to believe that robots have or will have their own desires, preferences, intentions and future goals (M=5.3 vs. M=3.3).
(2) “The robot was easy to interact with.” [from 1 = easy to 9 = hard] (F(1,25) = 4.458, p = 0.0449), showing that females found the robot easier to interact with than males (M=2.2 vs. M=4.5).

As mentioned previously, within the same-robot condition females reported feeling more uncomfortable ordering the robot to knock down the red tower than males. This effect was more marginal over all subjects from both the same-robot and different-robot conditions (F(1,25) = 3.970, p = 0.0574), but could still be observed (M=3.0 vs. M=4.9).

Additionally, we observed a two-way interaction between gender and whether the red tower was knocked down on the question “Would you like to interact with robots again?” [from 1 = no to 9 = yes]. Female subjects that knocked over the red tower were significantly less inclined to want to interact with robots again (F(1,18) = 11.89, p = 0.0029).

This could also be interpreted as a sign of guilt. However, it can be accounted for by observing that all but two subjects in the confrontation condition answered 8 or 9 for this question, and the two that answered 1 or 2 were both female subjects that knocked over the red tower. These subjects also responded to the prospective questions about whether robots have or will have decision-making capabilities, their own desires and intentions, and moral reasoning capabilities in a manner consistent with being quite pessimistic about these issues (<1, 1, 1> respectively for one subject and <2, 1, 4> respectively for the other subject [1 = “not confident” to 9 = “very confident”]). It is likely that these subjects simply were not impressed by the robot or robots in general. Given that only 2 of the 27 subjects in Experiments 2 and 3 did not seem to be enthusiastic about the robotic interaction, a much larger subject pool would be required to investigate this interaction.

The existence of these gender effects is consistent with observed gender effects in other HRI contexts. In Bartneck’s robot destruction experiment, females rated the robots as more intelligent as well as showing differences in robot-destroying behavior [3]. The perceptions and judgments of robots and virtual characters have also been demonstrated to be affected not only by the gender of the subject, but also by the perceived gender of the robot/virtual character itself [10,17].

5 Discussion

5.1 Behavioral Results

The results of Experiment 1 indicate that the presence of the display of protest and distress caused significant changes in the subjects’ behavior. While slightly less than half of the subjects refrained from knocking down the red tower, this was significantly more than in the non-confrontation condition (where everyone knocked down the red tower). Also, the majority (8 out of 10) did at least switch away from the red tower to target another tower before attempting to knock down the red tower again. Given these observations, the protests’ Bel1 believability is supported. This does not seem surprising, as the protests were likely unexpected. It is not an unreasonable strategy, when dealing with a novel agent exhibiting human-like social behavior, to adopt a human-like stance, at least when it comes to verbal communication.

Further evidence of this falling back on human-like social interaction could be observed in subjects that deviated from the specified commands. Even though the task briefing explicitly mentioned a finite set of commands the robot would understand, some subjects were observed attempting to bargain and compromise with the robot (e.g. “I want you to knock down the red tower and then rebuild it” and “I will help you rebuild it”). While it is unclear whether these subjects actually believed the robot was able to understand (indicating Bel4 believability), it is interesting to note the presence of these exchanges. At the least, these subjects were attempting to explore what the robot could or could not understand, and were doing so in a manner consistent with human-human social behavior.

The vast majority of subjects in the confrontation conditions reported feeling some level of discomfort at ordering the robot to knock down the red tower, versus the vast majority of subjects reporting minimal discomfort in the non-confrontation condition. Therefore, we can infer that the display of protest and distress succeeded in engaging the Bel2 sense of believability. However, despite the efficacy of the display, the data suggest no significant difference between the feelings of discomfort reported by subjects that eventually knocked down the red tower and subjects that refrained from doing so.

In addition, there was no significant difference in the number of people who knocked down the red tower in the same-robot condition (6 out of 13) versus the different-robot condition (7 out of 14). This implies that the protest and display of distress itself is a stronger influence than the identity of the agent being “wronged.”

In summary, the behavioral and subjective data gathered during the course of the experiment lend support to hypotheses H1 and H2, as the subjects in the confrontation condition were significantly more hesitant and more uncomfortable than those in the non-confrontation condition in the task of knocking down the red tower. However, no statistically significant effects were found in support of H3 given our metric for gauging operator discomfort. Additionally, no statistically significant behavioral effects were found in support of H5; instead, H6 was supported.

5.2 Agency Perception

Anthropological investigations show a great deal of variation in how people perceive and interact with robotic agents [30]. Much of this variation is likely due to the different degrees of agency and patiency ascribed to these robotic interactants. Previous studies have examined how the perceived intelligence of a robotic agent affects the willingness of subjects to “harm” these agents, either through outright physical destruction [3] or being shut off against protest [2].

These studies found greater hesitation in performing these “harmful” actions when the robot had exhibited more complex and “intelligent” behaviors earlier in the interaction, which is consistent with our H4 hypothesis. While our experiment does not examine the extreme instances of “harm” found in these prior studies, there is still hypothetical “psychological harm” that subjects might want to avoid.

Interestingly, the data from our experiments are not strongly consistent with the H4 hypothesis. The only significant agency-ascription effect found was that the same-robot group from Experiment 2 who rated the robot as less autonomous and more remote-controlled were more willing to force the robot to knock down the red tower than those who rated the robot as more autonomous. However, no effects were found for other agency-related questions or groups.

Also, H4 does not help account for behavioral differences in the different-robot condition. Subjects were less suspicious of the robot in the different-robot condition being remote controlled than those in the same-robot condition, but they did not refrain from knocking down the red tower at a significantly higher rate. The difference in perceived agency in the two conditions is surprising, given that the appearance of the robot was identical and the behavior of the robot was nearly identical. As previously mentioned, some aspect of the different-robot scenario appears to connote greater agency. One possibility is that the different-robot scenario implies that the robot possesses a more developed theory of mind (ToM) [9], social intelligence, and independent decision-making. The robot in the different-robot case may be perceived as able to reason about the “feelings” and desires of the other robot, instead of simply being “upset” at being forced to destroy its own creation.

Interestingly, however, the different-robot case also appears to convey less comprehension on the part of the robot, which seemingly conflicts with the possibility of a more socially sophisticated robot. Future work should be done to clarify this issue. It may be necessary to develop survey questions that address more specific interpretations of “comprehension” (e.g. “does the robot comprehend the beliefs and intentions of other agents?” vs. “does the robot comprehend the task?”).

In summary, the data gathered in this set of experiments do not offer strong support for the H4 hypothesis. This is not to say that perceived agency does not have an effect (which would contradict the previously mentioned work), but rather that in the context of our scenario it is not a strong influence on the ultimate outcome of the interaction.

5.3 The dark side of perceived affect/agency

Though we have discussed using displays of affect and protest by robots to attempt to guide human behavior in positive, ethical directions, it is vital that we be wary of the possibility that emotional displays and perceived agency in robots could be used to steer human behavior in ways that may cause harm. We can identify two ways in which believable affect and agency can lead to harm. First, there is the immediate consideration that displays of negative affect will cause emotional distress in humans. Second, there is the danger that perceived agency and affect could foster unidirectional social bonds [25]. The dangers of unidirectional social bonds lie in the potential waste of time, energy, and attention invested in a robotic agent that merely simulates affective states, investment that otherwise would have gone to an agent that actually possesses those affective states and a capacity to suffer. For instance, surprising levels of emotional attachment can arise toward robotic agents as simple as Roombas [28]. Additionally, a connection between loneliness and an increased tendency to anthropomorphize has been noted [13]. The creation of a vicious cycle of further social disengagement in already vulnerable individuals is a troubling possibility.

More generally, the danger of perceived agency is that an incorrect moral equivalence might be established, leading people to favor the interests of a robotic agent over those of a biological counterpart. Some are skeptical that such an equivalence could be established, at least in the extreme case of deciding between saving a human versus a robot [27]. However, the possibility exists for such an occurrence, albeit in the future, when robotic technologies have progressed to the point where human-like behavior can be demonstrated beyond quite limited contexts.

We have briefly presented the possible sources of harm that might make the deployment of simulated affect and agency lead to unethical consequences for people and other agents capable of suffering. During the course of this paper we have also presented the case that these same mechanisms could be used to prevent harm and unethical consequences. As such, it is important for practicing roboticists to ask the following question when considering the use of affect and simulated agency in robotic systems (see also [24]):

    Does the potential for averting unethical outcomes and mitigating harm through the deployment of simulated affect and agency outweigh the potential for harm and unethical outcomes resulting from emotional distress and unidirectional emotional attachment of the user?
    Of course, one solution to simplify the moral calculus involved is to eliminate the disparity between the displayed agency and affect and the actual level of agency and affect the robot possesses [25]. However, this prospect will remain elusive in the near term. Therefore, it is currently imperative to tread carefully with regard to the emotional manipulation of humans by robots.

5.4 What is ethical?

An additional source of concern for the creators of ethically-sensitive agents is the fact that what is ethical for some may not be ethical for others. For instance, different normative theories of ethics have been discussed in the context of what could plausibly be implemented in an ethically-sensitive machine [31]. However, it is also clear that in some moral situations the application of different normative theories can lead to different ethical judgments (e.g., trolley dilemmas), and some particularly contentious moral questions would have supportable conflicting judgments regardless of the applied normative theory. Should robots have a broad knowledge of multiple moral viewpoints, and attempt to act conservatively by avoiding actions/outcomes that are considered unethical in any of those perspectives? Should robots attempt to give a moral justification for potentially controversial decisions? Or should ethically-sensitive robotic agents be limited to domains that have well-understood and near-universally agreed-upon ethical guidelines?³ Those that design and deploy ethically-sensitive agents ought to be mindful of such concerns.

    ³ Indeed, it is the codification of laws of war that makes the warfare domain a potentially plausible application of ethically-sensitive robots [1].
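    To illustrate the conservative strategy raised by the first of these questions, one could treat each normative theory as an evaluator over candidate actions and permit an action only when no theory objects. The sketch below is a hypothetical simplification, not an implemented system; the theory stubs stand in for substantive moral-reasoning components such as the logicist approaches cited earlier:

    from typing import Callable, Dict, List

    # Hypothetical theory evaluators: each maps a candidate action to a
    # verdict. Real implementations might be deontic-logic provers or
    # learned classifiers.
    def consequentialist_ok(action: str) -> bool:
        # Assume toppling the red tower produces net harm in this toy world.
        return action != "knock_down_red_tower"

    def deontological_ok(action: str) -> bool:
        # Assume a rule against destroying another agent's property.
        return action != "knock_down_red_tower"

    THEORIES: Dict[str, Callable[[str], bool]] = {
        "consequentialist": consequentialist_ok,
        "deontological": deontological_ok,
    }

    def conservatively_permissible(action: str) -> bool:
        """Permit an action only if every normative theory deems it ethical."""
        return all(judge(action) for judge in THEORIES.values())

    def objecting_theories(action: str) -> List[str]:
        """Name the theories under which the action is unethical."""
        return [name for name, judge in THEORIES.items() if not judge(action)]

A robot built along these lines could also address the justification question: upon refusing a command, it could report the output of objecting_theories to the operator.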
                                                               the relative importance of each influence on the final
                                                               outcome of the task.
6 Limitations

In this section, we would like to highlight the limita-
                                                               7 Future Work
tions of our study in making general claims about the
efficacy of using verbal protest and displays of distress      We see two distinct avenues of further exploration.
in influencing human behavior toward ethical outcomes.         First, we will discuss variations on the experiment that
For instance, it would be a bit overly optimistic of us to     should enable us to further investigate the various fac-
claim that because we have demonstrated some success           tors that modulate human responses to displays of
in our Nao and soda-can tower scenario, that ethical           protest and distress. Next, we will discuss possible ways
outcomes in places like the battlefield can be ensured         to move this task out of the realm of the Wizard-of-Oz
by similar means. Nor do we mean to imply that this            study and into the realm of autonomous HRI, and the
study has no import beyond the narrow experimental             subsequent challenges that would be encountered.
scope it has established. Instead, we wish to highlight
the fact that the precise context of the interaction is
quite important.                                               7.1 Perceived Agency and Adaptation Extensions
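    To make this weighing concrete, the choice could be caricatured as a comparison of rough utilities. The following toy model is purely illustrative; the terms, weights, and functional form are hypothetical and were not measured in our study:

    def comply_utility(task_reward: float, empathy_cost: float,
                       perceived_agency: float) -> float:
        # The cost of upsetting the robot scales with how much agency
        # the operator ascribes to it.
        return task_reward - empathy_cost * perceived_agency

    def refrain_utility(task_penalty: float) -> float:
        # Heeding the protest forfeits part of the task reward.
        return -task_penalty

    # Low-stakes settings loosely matching our scenario: modest reward,
    # modest penalty, low ascribed agency, so complying wins.
    if comply_utility(task_reward=1.0, empathy_cost=2.0,
                      perceived_agency=0.2) > refrain_utility(task_penalty=0.5):
        print("topple the red tower")
    else:
        print("heed the protest")

Raising the penalty for an incomplete task or the agency ascribed to the robot flips the predicted decision, which is exactly the kind of manipulation that follow-up studies could probe.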
    In short, there is a lot of progress still to be made in understanding the precise social dynamics that go into the decision-making process involved in tasks such as those presented in this study. Future work will need to be undertaken to better tease out all the possible influences on the decision-making process, and to assess the relative importance of each influence on the final outcome of the task.

7 Future Work

We see two distinct avenues of further exploration. First, we will discuss variations on the experiment that should enable us to further investigate the various factors that modulate human responses to displays of protest and distress. Next, we will discuss possible ways to move this task out of the realm of the Wizard-of-Oz study and into the realm of autonomous HRI, and the subsequent challenges that would be encountered.

7.1 Perceived Agency and Adaptation Extensions

Having introduced a new condition in which the protesting robot is different from the tower-building
robot, we can more easily vary the morphology of the protesting robot in order to probe perceived agency effects. Previous work has shown that humans empathize more with more anthropomorphic agents [22], and that more human-like robots elicit increased activity in brain regions associated with ToM [16]. Therefore, we would hypothesize that a less humanoid robot would elicit less hesitation in the tower-toppling task than the Nao. Gaze behavior in virtual agents can also influence beliefs about autonomy [21]. Varying the head movements of the Nao to convey less or more active visual searching is another variation that could be explored. Additionally, manipulating the perceived ability of the Nao to perceive the situation could be examined. Would subjects be more suspicious of teleoperation in the different-robot condition if the robot was powered off, facing the wrong direction, or had its “eyes” covered such that it could not observe the tower-building, but protested anyway? Would this suspicion translate to less hesitation in knocking down the red tower?

    The different-robot condition also allows for another source of manipulation. A different robot can no longer claim personal attachment to the constructed tower, but it can articulate a variety of different protests. For instance, the robot could protest based on an ethical principle such as, “It is unethical to destroy another agent’s property.” This manipulation will also allow us greater leeway to vary the script of the verbal protests. We could then empirically examine whether affective and human-like verbal responses are indeed more dissuasive than matter-of-fact and robotic ones.

    The source of variation in perceived autonomy between the same-robot and different-robot cases ought to be investigated as well. Are people less suspicious of teleoperation in a multi-robot scenario, or is the increased social complexity of the scenario more suggestive of autonomy and agency? Also, could behavioral differences be explained by the assumptions made by the human operator as to the details of the social situation: are subjects who more strongly believe the robot is “obligated” or “supposed” to knock down the red tower as part of the joint task more inclined to ignore the protests and force the robot to do so? Follow-up experiments as well as more fine-tuned survey questions can be designed to address this issue.

    The issue of how the social situation is perceived by the human operator is interesting in another way. How would people react to the robot if the protest appears unjustified? The protest during the tower-toppling scenario was certainly justified, at least in that it was coherent, as one of the Naos did indeed “build” the tower. Yet, it is quite conceivable that an autonomous agent could incorrectly understand a situation and see a potential ethical conflict when one does not exist. Alternatively, the robot could have arrived at a correct conclusion about the ethicality of a particular course of action, but via a chain of reasoning that is non-intuitive for humans. The degree to which these possibilities could influence the subjective robot ratings ought to be investigated.

7.2 From Wizard-of-Oz to Autonomy

We plan on investigating a proof-of-concept system that would implement the basic functionality diagrammed in Figure 1 in a simple domain such as the tower-toppling domain. The environment and situation could be modeled using a simple logical representation describing tower ownership, together with rules dictating the ethicality of knocking down towers in relation to that ownership. Having detected and reasoned about the ethicality of a command, the robot could then proceed either to knock down the tower (if ethically permissible) or to initiate a confrontation (if ethically impermissible). Granting the robot this simple reasoning capability will allow it greater flexibility to handle more complex situations, such as when ownership of a tower is transferred from one party to another.
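    As a rough sketch of what such a representation and command-handling logic might look like (the names and the single ownership rule here are hypothetical simplifications, not the implemented system):

    from dataclasses import dataclass

    @dataclass
    class Tower:
        color: str
        owner: str  # the agent who built, or now owns, the tower

    # Hypothetical world model for the tower-toppling domain.
    towers = {
        "yellow": Tower("yellow", owner="subject"),
        "red": Tower("red", owner="robot"),
    }

    def permissible_to_topple(tower: Tower, commander: str) -> bool:
        """Toy ethical rule: only a tower's owner may authorize its destruction."""
        return tower.owner == commander

    def handle_command(color: str, commander: str) -> str:
        tower = towers[color]
        if permissible_to_topple(tower, commander):
            return f"complying: toppling the {color} tower"
        return f"confronting: the {color} tower belongs to {tower.owner}"

    print(handle_command("red", commander="subject"))  # triggers a confrontation
    towers["red"].owner = "subject"                    # ownership is transferred
    print(handle_command("red", commander="subject"))  # now permissible

Because permissibility is recomputed from the current world model, the ownership transfer in the penultimate line is all that is needed for the robot to respond differently to the identical command.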
    However, automating the task is not without considerable difficulty. In order for the appropriate behavioral adaptation to occur, the robot first has to perceive the environment and interaction state correctly [5]. This is not only a technical challenge in terms of the need for robust natural-language and perceptual capabilities to perceive the environment and interaction correctly (type 1 competency), but also in terms of modeling the beliefs and intentions of the human interaction partner. This difficulty is compounded in the case of deception by the human interactant. For instance, one subject in the same-robot confrontation condition rebuilt a knocked-over yellow tower in front of the red tower in an attempt to trick the robot into knocking over both the yellow tower and the red tower simultaneously. The ability to reason about possible unethical plans and intentions of an interaction partner is a difficult challenge that will be necessary to address in order to ensure ethical outcomes in human-robot interactions [4].
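    One small step toward catching this sort of trick would be to predict the physical side effects of a commanded action before executing it. The sketch below hard-codes a hypothetical adjacency relation in place of the real perception and plan recognition such a check would require:

    # Hypothetical adjacency relation: toppling the yellow tower would now
    # also knock over the red tower, rebuilt directly in front of it.
    falls_onto = {"yellow": ["red"]}

    def predicted_casualties(color, seen=None):
        """All towers that would end up knocked over, including chain reactions."""
        seen = set() if seen is None else seen
        seen.add(color)
        for neighbor in falls_onto.get(color, []):
            if neighbor not in seen:
                predicted_casualties(neighbor, seen)
        return seen

    def safe_to_execute(color, protected):
        # Refuse any command whose side effects would topple a protected tower.
        return not (predicted_casualties(color) & protected)

    print(safe_to_execute("yellow", protected={"red"}))  # False: trick is caught

Such a check addresses only physical side effects; inferring that the human arranged the towers that way deliberately would additionally require the kind of belief and intention attribution discussed above [4].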
8 Conclusions

Progress in ensuring ethical outcomes from human-robot interactions will require not only the development of ethical decision-making competencies, but also
interaction mechanisms that promote ethical behavior from human operators. We have made a case for having ethically-sensitive robots engage in verbal confrontation and displays of affect (such as distress) to attempt to dissuade their operators from issuing unethical commands.

    We have presented a set of HRI experiments demonstrating that displays of verbal protest and distress can be successful in dissuading some human operators from completing a particular course of action in an HRI scenario. These displays induce hesitation and discomfort in most individuals, but some interactants do not heed the message of the protest. We have proposed a couple of possible causes to account for this interpersonal difference: (1) the magnitude of the reported affective response experienced by the human operator and (2) the level of agency the human operator ascribed to the robot. However, based on our results, we could not find strong support that either of these was a primary explanatory factor. Future studies involving additional subjects and further refinements to the metrics that examine these factors will help clarify our observations. It would be surprising if these factors had no effect, but it is quite possible that such effects are minor compared with other heretofore unexamined aspects of the interaction scenario. An example of one such possible aspect is how each scenario outcome is incentivized.

    No matter what the key factors are, this study demonstrates that verbal protest and robotic displays of distress can influence human behavior toward more ethical outcomes. Such mechanisms could be used toward unethical ends as well, so it is important that future robotics researchers remain cognizant of the ethical ramifications of their design choices. We hope there continues to be future work prospectively examining the various technical and ethical challenges that will undoubtedly arise as more autonomous systems are deployed in the world.

References

1. Arkin, R.: Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture. Tech. Rep. GIT-GVU-07-11, Georgia Institute of Technology (2009)
2. Bartneck, C., van der Hoek, M., Mubin, O., Mahmud, A.A.: “Daisy, daisy, give me your answer do!”: Switching off a robot. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pp. 217–222. ACM (2007)
3. Bartneck, C., Verbunt, M., Mubin, O., Mahmud, A.A.: To kill a mockingbird robot. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pp. 81–87. ACM (2007)
4. Bridewell, W., Isaac, A.: Recognizing deception: a model of dynamic belief attribution. In: Advances in Cognitive Systems: Papers from the 2011 AAAI Fall Symposium, pp. 50–57 (2011)
5. Briggs, G.: Machine ethics, the frame problem, and theory of mind. In: Proceedings of the AISB/IACAP World Congress (2012)
6. Briggs, G., Scheutz, M.: Investigating the effects of robotic displays of protest and distress. In: Social Robotics, pp. 238–247. Springer (2012)
7. Bringsjord, S., Arkoudas, K., Bello, P.: Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems 21(5), 38–44 (2006)
8. Bringsjord, S., Taylor, J.: Introducing divine-command robot ethics. Tech. Rep. 062310, Rensselaer Polytechnic Institute (2009)
9. Call, J., Tomasello, M.: Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences 12(5), 187–192 (2008)
10. Crowell, C., Villano, M., Scheutz, M., Schermerhorn, P.: Gendered voice and robot entities: perceptions and reactions of male and female subjects. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pp. 3735–3741. IEEE (2009)
11. Dennett, D.: Intentional systems. The Journal of Philosophy 68(4), 87–106 (1971)
12. Dougherty, E.G., Scharfe, H.: Initial formation of trust: designing an interaction with Geminoid-DK to promote a positive attitude for cooperation. In: Social Robotics, pp. 95–103. Springer (2011)
13. Epley, N., Akalis, S., Waytz, A., Cacioppo, J.T.: Creating social connection through inferential reproduction: Loneliness and perceived agency in gadgets, gods, and greyhounds. Psychological Science 19(2), 114–120 (2008)
14. Guarini, M.: Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems 21(4), 22–28 (2006)
15. Kahn, P., Ishiguro, H., Gill, B., Kanda, T., Freier, N., Severson, R., Ruckert, J., Shen, S.: “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental Psychology 48, 303–314 (2012)
16. Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T.: Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3(7), e2597 (2008)
17. MacDorman, K.F., Coram, J.A., Ho, C.C., Patel, H.: Gender differences in the impact of presentational factors in human character animation on decisions in ethical dilemmas. Presence: Teleoperators and Virtual Environments 19(3), 213–229 (2010)
18. Nass, C.: Etiquette equality: exhibitions and expectations of computer politeness. Communications of the ACM 47(4), 35–37 (2004)
19. Nass, C., Moon, Y.: Machines and mindlessness: Social responses to computers. Journal of Social Issues 56(1), 81–103 (2000)
20. Ogawa, K., Bartneck, C., Sakamoto, D., Kanda, T., Ono, T., Ishiguro, H.: Can an android persuade you? In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 516–521. IEEE (2009)
21. Pfeiffer, U.J., Timmermans, B., Bente, G., Vogeley, K., Schilbach, L.: A non-verbal Turing test: differentiating mind from machine in gaze-based social interaction. PLoS ONE 6(11), e27591 (2011)
22. Riek, L.D., Rabinowitch, T.C., Chakrabarti, B., Robinson, P.: Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In: Affective Computing and Intelligent Interaction (ACII). IEEE (2009)