NEGLIGENCE AND AI’S HUMAN USERS
                                    Andrew D. Selbst*
          Negligence law is often asked to adapt to new technologies. So it is with artificial
intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice,
data security, and driving in semi-autonomous vehicles, this Article argues that AI poses
serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and
statistically-derived code in between a human decisionmaker and the consequences of that
decision, AI disrupts our typical understanding of responsibility for choices gone wrong.
The Article argues that AI’s unique nature introduces four complications into negligence:
1) unforeseeability of specific errors that AI will make; 2) capacity limitations when
humans interact with AI; 3) introducing AI-specific software vulnerabilities into decisions
not previously mediated by software; and 4) distributional concerns based on AI’s
statistical nature and potential for bias.
          Tort scholars have mostly overlooked these challenges. This is understandable
because they have been focused on autonomous robots, especially autonomous vehicles,
which can easily kill, maim, or injure people. But this focus has neglected to consider the
full range of what AI is. Outside of robots, AI technologies are not autonomous. Rather,
they are primarily decision-assistance tools that aim to improve on the inefficiency,
arbitrariness, and bias of human decisions. By focusing on a technology that eliminates
users, tort scholars have concerned themselves with product liability and innovation, and as
a result, have missed the implications for negligence law, the governing regime when harm
comes from users of AI.
          The Article also situates these observations in broader themes of negligence law:
the relationship between bounded rationality and foreseeability, the need to update
reasonableness conceptions based on new technology, and the difficulties of merging
statistical facts with individual determinations, such as fault. This analysis suggests that
though there might be a way to create systems of regulatory support to allow negligence law
to operate as intended, an approach to oversight that is not based in individual fault is
likely to be more fruitful.

    *  Postdoctoral Scholar, Data & Society Research Institute; Visiting Fellow, Yale
Information Society Project. For extraordinarily helpful insights and comments on earlier
drafts, I would like to thank Jack Balkin, Rabia Belt, Kiel Brennan-Marquez, Ryan Calo,
Rebecca Crootof, James Grimmelmann, Jill Horwitz, Sonia Katyal, Christina Mulligan,
Nicholson Price, Rebecca Wexler, and participants at The Yale Information Society
Project Ideas Lunch and Fellows Writing Workshop, as well as the 2018 Privacy Law
Scholars’ Conference.

Introduction
I. Torts and the Creation of AI
     A. Autonomous Vehicles, Product Liability, and Innovation
     B. Sidelining Drivers
II. How AI Challenges Negligence
     A. Unforeseeability of AI Errors
          1. AI Errors May Be Unforeseeable
          2. A New Kind of Foreseeability
     B. Human Limitations Interacting with Computers
     C. AI-Specific Software Vulnerabilities
     D. Unevenly Distributed Injuries
III. Lessons for Negligence Law
     A. Negligence, Bounded Rationality, and AI
     B. Updates to Reasonableness Come with Familiarity and Access
     C. Statistical Facts and Individual Responsibility
Conclusion

                                               INTRODUCTION

         As with any new technology, widespread adoption of artificial
intelligence (AI) will lead to injuries. Medical AI will recommend improper
treatment, robo-advisers will wipe out someone’s bank account, and
autonomous robots will kill or maim. And just as with any new technology,
negligence law will be called on to adapt and respond to the new threat.1
But AI is different. With most new technologies, we gain familiarity with the
technology over time, eventually creating a sense of what constitutes
reasonable care, a collective intuition on which negligence law can rely as it
adapts. AI, however, poses challenges for negligence law that may delay or
prevent this adaptation.
         The large and growing body of scholarship on AI and tort law has

    1 See Mark F. Grady, Why Are People Negligent? Technology, Nondurable Precautions, and the

Medical Malpractice Explosion, 82 NW. U. L. REV. 293 (1988) (“Negligence law is
fundamentally a creature of technology; really, it is the common law's response to
technology.”)

mostly overlooked negligence.2 There is a good reason for this. Tort law is
most centrally concerned with physical injury, and prior research has
focused on robots. Robots are essentially embodied AI3—a large, heavy,
moving form of AI that can cause severe physical harm.
Moreover, one of the most exciting and doctrinally interesting types of robot
in development is the autonomous vehicle, which, if it becomes
commonplace, will save countless lives. Prior research has therefore focused
heavily on autonomous vehicles, with two central themes. The first is that by
automating the driving task, liability for car accidents will move away from
negligence on the driver’s part toward product liability for the
manufacturer. The second is a concern that the prospect of tort damages
may impede the innovation needed to get autonomous vehicles on the road.
Both of these concerns relate specifically to autonomous vehicles and
examine product liability rather than negligence.
        The story of AI does not stop there, however. What this prior
research has failed to appreciate is that autonomous robots are a narrow
subset of AI technologies. More common is what I refer to as “decision-
assistance” AI: technology that operates by making a recommendation to a
user. These technologies can be seen as an entirely different category.
Instead of seeking to replicate human capabilities, they seek to go beyond
human capabilities by recognizing and modeling patterns too complex for
humans to process and making decisions in ways humans would not
recognize.4 And instead of operating with the push of a button, they require
human decisionmakers perpetually in the loop. Despite also being based on
machine learning techniques, these user-centered technologies differ in
fundamental ways that change the legal analysis.
         Recognizing that decision-assistance technologies require users in
order to have effect draws the conversation back from product liability to negligence,

    2  One notable exception is William D. Smart, Cindy M. Grimm & Woodrow Hartzog,
An Education Theory of Fault for Autonomous Systems, (draft available at
http://www.werobot2017.com/wp-content/uploads/2017/03/Smart-Grimm-Hartzog-
Education-We-Robot.pdf) (arguing for a new theory of fault for autonomous systems that
spans current notions of negligence and product liability).
     3 See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 533–45

(2015).
     4 Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87

FORDHAM L. REV. 1085, 1089–99 (2018).

and that is where this Article makes its contribution. The Article identifies
four new challenges to negligence law posed by AI decision-assistance
technologies. Filling this analytical gap is urgent because unlike autonomous
vehicles, decision assistance AI is rapidly expanding to every facet of our
lives. Some of the uses are not regulated by tort law, such as employment,5
lending,6 retail,7 policing,8 and agriculture,9 but other uses occur where
negligence law (or a negligence analogue) plays a role, such as medicine,
finance, and data security.10 Even in the driving context, no fully
autonomous vehicle is on the market, and when drivers of the semi-
autonomous vehicles that do exist injure people, the injury will be subject to
negligence analysis.
         There are two other reasons to go beyond autonomous vehicles and
consider the implications of AI as a tool to be used by a person. First, most
of the injuries by autonomous vehicles will be covered by insurance, making
the liability issue more of a professional curiosity than a practical concern, at
least from the perspective of injured parties seeking to be made whole.11

    5  Rudina Seseri, How AI Is Changing the Game for Recruiting, FORBES (Jan. 28, 2018),
https://www.forbes.com/sites/valleyvoices/2018/01/29/how-ai-is-changing-the-game-
for-recruiting/
     6 Breana Patel, What Role Can Machine Learning and AI Play in Banking and Lending?,

FORBES (Oct. 5, 2018),
https://www.forbes.com/sites/forbesfinancecouncil/2018/10/05/what-role-can-machine-
learning-and-ai-play-in-banking-and-lending/.
     7 See generally ALEXANDRA MATEESCU & MADELEINE CLARE ELISH, AI IN CONTEXT:

THE LABOR OF INTEGRATING NEW TECHNOLOGIES (forthcoming 2019)
     8 Matt Burgess, AI Is Invading UK Policing, But There’s Little Proof It’s Useful, WIRED (Sept.

21, 2018), https://www.wired.co.uk/article/police-artificial-intelligence-rusi-report.
     9 MATEESCU & ELISH, supra note 7, at __.
     10 See infra Part II.A.
     11 See Kyle D. Logue, The Deterrence Case for Comprehensive Automaker Enterprise Liability

(working paper), at *3-4 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3279812
(arguing that a comprehensive automaker enterprise liability system should replace tort
law’s “ex post auto-crash deterrence” function once autonomous vehicles are on the road).
Mark Geistfeld disagrees, arguing that insurance cannot solve tort liability problems. Mark
A. Geistfeld, A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and
Federal Safety Regulation, 105 CAL. L. REV. 1611, 1618 (2017) (“[W]hen there is a
fundamental disagreement about the underlying liability rules, the uncertainty is systemic
and cannot be eliminated by the pooling of individual risks within an insurance scheme.”);
see also Bryan Choi, Crashworthy Code, __ WASH. L. REV. __,

Second, focusing on AI as an agent of harm with no user perpetuates
popular misconceptions that treat AI as capable of self-determination rather
than as a tool to aid decisionmaking.12 I agree with scholars such as Jack
Balkin and Ryan Calo that treating today’s AI as something with agency is
not useful or edifying.13 It evokes visions of robots and the AI apocalypse
more appropriate to science fiction than legal journals, distracting from the
real law and policy problems of today’s technology.14
        This Article proceeds in three parts. Part I reviews the research on
torts and AI so far. It reveals two general themes: that autonomous vehicles
will drive liability regimes away from negligence toward product liability,
and that the uncertainty of tort damages might interfere with the innovation
necessary to get autonomous vehicles on the road. The Part describes the
major debates in the research so far, and explains why AI’s impact on
negligence law has not yet been addressed.
        Part II turns to negligence. First, it explains why the AI in
autonomous vehicles is a special case of AI that can be easily overseen by
humans. Then, drawing on examples such as medical AI, robo-advisers, AI
for data security, and safety drivers in semi-autonomous vehicles, it argues
that AI creates four challenges for negligence: 1) Epistemic: AI often aims to

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3230829 (calling the invocation of
insurance in these debates a deus ex machina). Geistfeld’s concern is whether uncertainty in
tort liability will prevent innovation, and his observation may be correct because it relates
to the total costs that manufacturers pay. But from an individual plaintiff’s perspective, no
matter what the insurance premium is, the very idea of insurance is to balance out the luck
of being the one involved in an accident against the many who are not. Thus, from the
plaintiff’s perspective, with insurance she will get paid by someone, rendering the liability
issue less important.
     12 See Oscar Schwartz, 'The Discourse Is Unhinged': How the Media Gets AI Alarmingly Wrong,

THE GUARDIAN (July 25, 2018),
https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-
media-bots-wrong; see also
     13 Jack M. Balkin, The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217,

1223 (2017) (discussing the “homunculus fallacy”: “the belief that there is a little person
inside the program who is making it work—who has good intentions or bad intentions, and
who makes the program do good or bad things”); Ryan Calo, Artificial Intelligence Policy: A
Primer and Roadmap, 51 U.C. DAVIS L. REV. 399, 430–31 (2017) (discussing mental models of
AI).
     14 See Calo, supra note 13, at 431–35 (critiquing concerns about the AI apocalypse).

go beyond human comprehension, making error detection impossible in the
moment; 2) Capacity-based: The average person will have limited physical
and mental abilities that will be exposed by interaction with machines, with
potentially harmful results; 3) Adversarial: AI introduces operational
security concerns into decisions that were not previously mediated by
software; and 4) Distributional: By substituting statistical reasoning for
individualized reasoning, and by changing its output based on prospective plaintiffs, AI
creates new openings for bias to enter individual tort cases. The Part ends
with a note on why broad concerns about foreseeability due to AI’s
unpredictability are misplaced.
         Part III situates these observations about AI in the larger structure
and operation of negligence law. Foreseeability doctrine is how negligence
grapples with bounded rationality. Because AI tries to do the same thing,
the new kinds of errors end up with a version of foreseeability that is the rule
rather than the exception. The capacity-based and adversarial problems
look like more familiar problems of new technology, where time and
experience will help elucidate a new reasonableness standard. Even so, trade
secrecy and the non-decomposability of AI will likely prevent standards
from developing fast enough without outside intervention. And finally,
though the distributional problem is truly new to tort law, it is representative
of a larger difficulty of negotiating statistical facts in an area of individual
responsibility. This discussion will draw on prior work in algorithmic
discrimination, where this is a familiar problem, demonstrating that any
attempt to solve AI problems with individual fault may be difficult.
         Negligence law is fundamentally concerned with how people make
decisions: whether they exercise proper care or create unreasonable
risks. A tight nexus between human decisions and outcomes is therefore
fundamental to its operation. Users of AI, however, seek to replace or
augment human decision processes with inscrutable, unintuitive,
statistically-derived, and often secret code. The technology is designed to
interfere with human decisions; it is sold on the premise that human
decisionmaking is not to be trusted. Because AI targets human
decisionmaking directly, negligence law appears particularly unsuited to
addressing its potential harms, in a way that is not shared by earlier
technologies. Law will need other methods to address the injuries that will
inevitably occur.

                           I. TORTS AND THE CREATION OF AI

        A large and growing body of scholarship is being written on AI and
tort law. Most of this work is about autonomous robots, especially vehicles.
This makes sense. Tort law is most centrally concerned with physical injury,
and as robots are large, heavy, moving physical objects, they have the
capacity to cause severe physical harm. This scholarship has two central
themes. The first is that due to automation, liability for injuries will move
away from negligence toward product liability. The second is a concern that
the prospect of tort damages may hamper the innovation needed to get
autonomous vehicles on the road. Both of these concerns relate specifically
to autonomous vehicles, focusing on AI’s creation, rather than its use. This
Part briefly reviews the tort and AI literature.

                  A. Autonomous Vehicles, Product Liability, and Innovation

        Human error is responsible for the vast majority of car accidents.15
As a result, the ability of autonomous vehicles to separate humans from
driving responsibilities is an extremely important achievement. This safety
enhancement is seen by tort scholars as the primary purpose of automating
driving. As a result, there is broad consensus that autonomous vehicles are
likely to change the liability calculus by shifting liability to the
manufacturers.16 For some scholars, this is central to the argument, and
for others writing later, it is just a premise.17
        Given this consensus, one focus of scholarship is on the interesting

     15 NAT’L HIGHWAY TRAFFIC SAFETY ADMIN., FEDERAL AUTOMATED VEHICLES

POLICY: ACCELERATING THE NEXT REVOLUTION IN ROADWAY SAFETY 5 (2016),
http://www.safetyresearch.net/Library/Federal_Automated_Vehicles_Policy.pdf
(“[Ninety-four] percent of crashes can be tied to a human choice or error.”)
     16 DOROTHY J. GLANCY ET AL., TRANSPORTATION RESEARCH BOARD, A LOOK AT

THE LEGAL ENVIRONMENT FOR DRIVERLESS VEHICLES 35 (2016),
http://onlinepubs.trb.org/onlinepubs/nchrp/nchrp_lrd_069.pdf; Geistfeld, supra note 11,
at 1619.
     17 Bryant Walker Smith, Automated Driving and Product Liability, 2017 MICH. ST. L. REV.

1, 6 (explaining that while it is “pure fantasy,” assuming “100% automated driving across
all vehicles and all trips” is useful for his argument); Curtis E.A. Karnow, The Application of
Traditional Tort Theory to Embodied Machine Intelligence, in ROBOT LAW 51, 57-58 (Ryan Calo,
A. Michael Froomkin & Ian Kerr, eds. 2016).

product liability question: how to decide whether certain accidents amount
to defects. Briefly reviewing product liability, manufacturers and sellers of
products can be found liable for one of three kinds of product defects:
manufacturing defects, design defects, and failures to warn. Manufacturing
defects are errors in production—instances where the product differs from
the blueprint.18 The exploding soda or champagne bottle is the canonical
example.19 Manufacturing defects lead to strict liability for the
manufacturer. Design defects are instead judged by one of two balancing
tests—the risk-utility test recommended by the Restatement (Third) of
Torts20 or the consumer expectations test still used in some jurisdictions.21
The risk-utility test holds a product defective when a “reasonable alternative
design” exists, the omission of which “renders the product not reasonably
safe.”22 Functionally, the test looks at a design change that a plaintiff can
point to and asks if the costs outweigh the benefits.23 The consumer
expectations test defines a defect as a condition that is “dangerous to an
extent beyond that which would be contemplated by the ordinary
consumer.”24 Both aim to address the tradeoff between safety and the cost
necessary to find every possible imperfection, and the differences between
them may be overstated.25 Failures to warn employ a similar risk-utility test
that asks if the missing warning renders the product unreasonably unsafe.26
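          As footnote 23 suggests, the risk-utility comparison tracks the cost-benefit logic of breach in negligence. The following is only an illustrative gloss in Hand-formula terms, not language drawn from the Restatement; the symbols are conventional shorthand assumed for this sketch rather than anything the Restatement itself defines:

```latex
% Illustrative gloss only, not Restatement language.
% B = burden (cost) of adopting the proposed reasonable alternative design
% P = probability of the injuries that the alternative design would prevent
% L = magnitude (severity) of those injuries
% On this rendering, omitting the alternative design leaves the product
% "not reasonably safe" when the burden of adopting it is less than the
% expected harm it would have avoided:
\[
  B < P \cdot L
\]
```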
         One of the questions for autonomous vehicles is how to classify a
defect.27 A consequence of classifying an error that leads to a car crash as a
manufacturing defect, design defect, or failure to warn is stark: either strict
liability for the former or a reasonableness or cost-benefit analysis for the

    18  Restatement (Third) of Torts: Prod. Liab. § 2 [hereinafter Restatement, Product
Liability].
    19 Id. § 2 cmt c.
    20 Id. § 2.
    21 Restatement (Second) of Torts § 402a.
    22 Restatement, Product Liability § 2.
    23 See id. § 2 cmt. d (likening the cost-benefit analysis to breach in negligence); David G.

Owen, Design Defects, 73 MO. L. REV. 291, 315 (2008); Stephen G. Gilles, The Invisible Hand
Formula, 80 Va. L. Rev. 1015, 1047 (1994).
    24 Restatement (Second) of Torts § 402a cmt i.
    25 See MARK A. GEISTFELD, PRODUCT LIABILITY LAW 69–116 (2012).
    26 Restatement, Product Liability § 2.
    27 F. Patrick Hubbard, “Sophisticated Robots”: Balancing Liability, Regulation, and Innovation,

66 FLA. L. REV. 1803, 1854 (2014).

latter. Most scholars argue that autonomous vehicle crashes should be
analyzed as design defects. Some attribute this to the incompleteness of a
self-learning AI when it comes off the line. As Curtis Karnow argues,
autonomous vehicles conform to design at the time of delivery, and the
particular changes that result from self-learning from the environment are
unforeseeable to the manufacturer, rendering strict liability inappropriate.28
Others note that software in general copies with fidelity, so bugs should
always be considered design defects.29 Manufacturing defects still apply to
the hardware in the car, but that does not change with the introduction of
autonomous vehicles specifically.30 Warning defects are not central to the
discussion, because most of the analysis removes the driver—the very person
who is supposed to receive the warning.
         A second concern in this research is that a crash may not be the
result of a design defect at all. To find a design defect, the plaintiff would be
required to present evidence that the accident was proximately caused
by a decision that the AI made that should have been anticipated and
tested for.31 Such a showing seems quite difficult, both conceptually and as a
matter of proof.32 Autonomous vehicles will face sudden unexpected
changes: children or pets darting out into the street, drivers who break
traffic laws or stop very suddenly, or other drivers misapprehending what
the automated vehicle itself will do and reacting badly.33 Each of these will

    28  Karnow, supra note 17, at 69.
    29  Geistfeld, supra note 11, at 1633; Hubbard, supra note 27, at 1854; see also Frances E.
Zollers et al., No More Soft Landings for Software: Liability for Defects in an Industry That Has Come
of Age, 21 SANTA CLARA COMPUTER & HIGH TECH. L.J. 745, 749 (2005) (“Software can
only fail for one reason: faulty design.”)
     30 See Geistfeld, supra note 11, at 1633.
     31 Note that failing to avoid the reasonably avoidable crashes is essentially the

definition of a design defect for an autonomous vehicle. For this reason, the proximate
cause question and defect question are essentially the same. See David A. Fischer, Products
Liability-Proximate Cause, Intervening Cause, and Duty, 52 MO. L. REV. 547, 559–60 (1987)
(arguing that whether a product was defective is often the flipped question of whether there
was an intervening cause for the injury).
     32 Kyle Graham, Of Frightened Horses and Autonomous Vehicles: Tort Law and Its Assimilation

of Innovations, 52 SANTA CLARA L. REV. 1241, 1270 (2012) (arguing that plaintiffs may be
prevented from recovering because doing so would require expensive and searching review
of the code).
     33 See generally Harry Surden & Mary-Anne Williams, Technological Opacity, Predictability,

be unique in some way—the timing, the angle the child runs in at—such
that the machine cannot possibly train on all of them. Yet the machine will
be asked to dynamically handle all these scenarios. Some scholars have
argued that the manufacturer will often lose the cost-benefit argument,
when, in hindsight, the cost of testing just one more scenario is marginal,
and the damage that results is loss of life and limb.34 But as they have also
noted, the test addresses what the programmer could reasonably have
known to test for before the crash.35 It seems unreasonable to rely on
hindsight to declare that out of the infinitely many possible fact patterns, the
one that happened to lead to a crash should have been specifically
anticipated.36 That would be functionally no different than strict liability for
any crash caused by the car, and it seems unlikely that a court would require
that.37 The reason that this is more difficult conceptually than a defect in a
typical product is that the purpose of this product is to anticipate and

and Self-Driving Cars, 38 CARDOZO L. REV. 121 (2016) (discussing the difficulties that arise
because we lack a “theory of mind” about autonomous vehicles).
      34 See Gary E. Marchant & Rachel A. Lindor, The Coming Collision between Autonomous

Vehicles and the Liability System 52 SANTA CLARA L. REV. 1321, 1344 (2012); see also Hubbard,
supra note 27, at 1854 (“[B]ecause there is literally evidence of an alternative design for
coding, there is a good argument that the plaintiff has satisfied the burden of showing a
reasonable alternative design and that using the alternative design would be less expensive
than the injury costs avoided by its use.”).
      35 Hubbard, supra note 27, at 1854–55.
      36 See id. (“[W]ith more than 100 million lines of software code in a modern

automobile, it is unclear whether plaintiffs should be able to rely solely on the existence of
the error and of a way to fix the error available at the time of trial but not necessarily
reasonably available at the time of sale. Arguably, expert testimony of reasonably attainable
error elimination at the time of design and sale should also be required.”); see also Smart,
Grimm & Hartzog, supra note 2 (manuscript at 3).
      37 Mark Geistfeld has argued that if the crash is due to a bug in the code, the

manufacturer could be liable under the malfunction doctrine, Geistfeld, supra note 11, at
1634, which applies to “situations in which a product fails to perform its manifestly
intended function.” Restatement, Product Liability § 3 cmt. b. But it is mathematically
impossible to test for every bug in a computer model, see Deven R. Desai & Joshua A.
Kroll, Trust but Verify: A Guide to Algorithms and the Law, 31 HARV. J.L. & TECH. 1, 31 (2017)
(discussing the halting problem), so unless we want strict liability for bugs, it is unclear how
the malfunction doctrine should apply to software. Ryan J. Duplechin, The Emerging
Intersection of Products Liability, Cybersecurity, and Autonomous Vehicles, 85 TENN. L. REV. 803, 825
(2018).

respond to unknown scenarios, so there is no stable sense of what the AI
working properly looks like.
         Despite being well understood, then, the liability issue is
unresolvable in the abstract.38 This has led scholars to propose a number of
solutions, including strict liability,39 no-fault insurance,40 respondeat superior
applied to autonomous robots,41 new legislation delineating fault,42 finding
vehicles not defective where aggregate data shows that a car is at least twice
as safe as human drivers,43 and reinvigorating crashworthiness doctrine.44 A
minority argue that the law will work as it is.45 The proper response to the
uncertainty surrounding liability is the chief debate in the literature on tort
and AI.
         A second theme in this work is innovation. Because autonomous
vehicles are seen as a product that will save lives, there is great concern
about whether uncertain tort liability will hinder innovation. Many of the
articles have called for legal modifications to protect manufacturers,46 while
others are more optimistic about the present balance between tort and
innovation, concluding that traditional tort law adapt adequately to protect
industry.47 As Bryant Walker Smith has noted, the literature often treats the

    38   Cf. Smith, supra note 17, at 32 (“The abstract question of ‘who is responsible in a
crash’ is as unhelpful as it is popular.”).
     39 Sophia H. Duffy & Jamie Patrick Hopkins, Sit, Stay, Drive: The Future of Autonomous Car

Liability, 16 SMU SCI. & TECH. L. REV. 453, 471–73 (2013); David C. Vladeck, Machines
Without Principals: Liability Rules and Artificial Intelligence, 89 WASH. L. REV. 117, 146 (2014).
     40 Kevin Funkhouser, Paving the Road Ahead: Autonomous Vehicles, Products Liability, and the

Need for A New Approach, 2013 UTAH L. REV. 437, 458–62 (2013).
     41 SAMIR CHOPRA & LAURENCE F. WHITE, A LEGAL THEORY FOR AUTONOMOUS

ARTIFICIAL AGENTS 119–91 (2011).
     42 Jeffrey K. Gurney, Sue My Car Not Me: Products Liability and Accidents Involving

Autonomous Vehicles, 2013 U. ILL. J.L. TECH. & POL’Y 247, 276–77.
     43 Geistfeld, supra note 11, at 1653.
     44 Choi, supra note 11, at *31–49.
     45 Hubbard, supra note 27, at 1865–66; Smith, supra note 17, at 2.
     46 Ryan Abbott, The Reasonable Computer, 86 GEO. WASH. L. REV. 1, 44–45 (2018); Gurney,

supra note 42, at 277; Funkhouser, supra note 40, at 458–62; Marchant & Lindor, supra note
34, at 1339–40.
     47 Geistfeld, supra note 11; Graham, supra note 32, at 1270; Hubbard, supra note 27, at

1865–66 (arguing that proposals to fundamentally change how liability works with respect
to autonomous vehicles—in favor of either plaintiffs or defendants—inappropriately
assume that something is wrong with the current balance); Smith, supra note 17, at 2.

question of liability as “an obstacle to be removed, the object of
consternation rather than contemplation.”48

                                    B. Sidelining Drivers

         The research discussed above is concerned primarily—almost
exclusively—with fully automated vehicles. But there is an important
difference between partly and fully automated vehicles. Today’s
“autonomous” vehicles all include a human driver—usually called a “safety
driver”49—to potentially perform a range of driving tasks. The National
Highway Traffic Safety Administration (NHTSA) has adopted a
classification system based on different levels of autonomy.50 At Level 0, the
car is fully manual, though it may include “intermittent warning systems like
blind-spot detection.”51 Level 1 includes a single automated aspect,
including steering and acceleration, so these are familiar technologies that
include “parking assist, which only controls steering, or adaptive cruise
control (ACC) that only adjusts speed.”52 Level 2 includes systems that
combine steering and acceleration. As Jonathan Ramsey has explained,
“[u]nder all of these level definitions, the driver is still charged with
monitoring the environment.”53
         Level 3 is the first level that can be called automated driving in any
meaningful sense. Level 3 vehicles monitor the entire environment and “can
make informed decisions for themselves such as overtaking slower moving
vehicles. However, unlike the higher rated autonomous vehicles, human

     48 Smith, supra note 17, at 2.
     49  Dana Hull, Mark Bergen & Gabrielle Coppola, Uber Crash Highlights Odd Job:
Autonomous       Vehicle    Safety  Driver,  BLOOMBERG           (Mar.       23,     2018),
https://www.bloomberg.com/news/articles/2018-03-23/uber-crash-highlights-odd-job-
autonomous-vehicle-safety-driver.
     50 NAT’L HIGHWAY TRAFFIC SAFETY ADMIN., FEDERAL AUTOMATED VEHICLES

POLICY: ACCELERATING THE NEXT REVOLUTION IN ROADWAY SAFETY 9 (2016),
https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20
PDF.pdf.
     51 Jonathan Ramsey, The Way We Talk About Autonomy Is a Lie, and That's Dangerous, THE

DRIVE (Mar. 8, 2017), http://www.thedrive.com/tech/7324/the-way-we-talk-about-
autonomy-is-a-lie-and-thats-dangerous.
     52 Id.
     53 Id.

override is required when the machine is unable to execute the task at hand
or the system fails.”54 At Levels 4 and 5, no driver input is required. Level 4
vehicles can intervene if something goes wrong and self-correct. The only
limitation of Level 4 is that it only applies in certain situations, like
highways. That restriction is lifted in Level 5, where a vehicle is expected to
be able to do everything a human can now, including, for example, off-
roading.55 There are currently no Level 4 or 5 cars on the market.56
        While most of the scholarship recognizes that there is a difference
between partly and fully autonomous vehicles, the manufacturers are the
central characters, and the drivers barely register.57 Because the articles are
about the move toward product liability or the concerns about innovation,
the focus on the creation of AI makes complete sense. But tort claims arising
from the use rather than creation of AI will be subject to negligence analysis
rather than product liability, and require their own analysis.
        Drivers do sometimes appear in the discussion of product liability.
Where warning defects are discussed, drivers must be, because that is who
the warnings are for.58 Drivers also make appearances as the yardstick to
measure how much liability should move to the manufacturers—whether
the manufacturer should be wholly or only partially responsible—while
assuming that the actual negligence analysis remains unchanged.59 Even
Jeffrey Gurney focuses entirely on product liability, while making four
versions of drivers (“distracted,” “diminished capabilities,” “disabled,” and

    54   Jonathan Dyble, Understanding SAE Automated Driving – Levels 0 To 5 Explained,
GIGABIT (Apr. 23, 2018).
     55 Id.
     56 Ramsey, supra note 51.
     57 See, e.g. Marchant & Lindor, supra note 34, at 1326 (“Autonomous vehicles are likely

to change the dynamics of who may be held liable. In considering these changes, it is first
necessary to distinguish partial autonomous vehicles from completely autonomous
vehicles.”)
     58 Gurney, supra note 42, at 264.
     59 See, e.g., Marchant & Lindor, supra note 34, at 1326 (“These partial autonomous

systems will shift some, but not all, of the responsibility for accident avoidance from the
driver to the vehicle, presumably reducing the risk of accidents (since that is the very
purpose of the system).”); Sophia H. Duffy & Jamie Patrick Hopkins, Sit, Stay, Drive: The
Future of Autonomous Car Liability, 16 SMU SCI. & TECH. L. REV. 453, 457 (2013) (“Driver
liability is relatively straightforward and requires little explanation: driver is liable for his
own actions in causing an accident, such as negligent or reckless operation of the vehicle.”);

“attentive”) the centerpiece of his argument.60 Just as often, an article will
eliminate the driver entirely, discussing AI as something deserving of agency
or personhood,61 or proposing changes to doctrine to apply negligence or
ascribe reasonableness to a computer.62
         Where negligence for driver-caused injuries is acknowledged, little
analysis attends it. Gary Marchant and Rachel Lindor note that if the user
ignores the manual’s warnings about limiting the vehicle’s use in certain
weather, or the driver fails to switch on or off autonomous mode
appropriately, he may be found negligent, but then they argue that most of
the time, the driver is “unlikely to be a factor.”63 Ignacio Cofone refers to
negligent supervision as a possibility, treating AI as a child.64 Hubbard
briefly notes that reasonable use of a sophisticated robot may require special
skill.65 He argues that “in order to satisfy the standard of reasonable care,
users of driverless cars would need to use the skills necessary to operate the
car reasonably, by, for example, knowing when the driving system was
malfunctioning and, to some extent, how to respond to the malfunction.”66
This is the most detailed analysis of negligent driving in an automated
vehicle that exists so far. Part II.A will explain why it falls short.
         The one area where negligence for AI use has been discussed in
limited fashion is the medical AI context.67 Medical AI is the other most
common form of AI that can result in physical injuries. But so far, the

     60  Gurney, supra note 42, at 257–71.
     61  CHOPRA & WHITE, supra note 41, at 153–93; Lawrence B. Solum, Legal Personhood for
Artificial Intelligences, 70 N.C. L. REV. 1231, 1232 (1992).
      62 Abbott, supra note 46, at 22–24 (2018); Karni Chagal-Feferkorn, The Reasonable

Algorithm, 2018 U. ILL. J.L. TECH. & POL’Y 111.
      63 Marchant & Lindor, supra note 34, at 1327
      64 Ignacio N. Cofone, Servers and Waiters: What Matters in the Law of A.I., 21 STAN. TECH.

L. REV. 167, 191 (2018)
      65 Hubbard, supra note 27, at 1861.
      66 Id.
     67 W. Nicholson Price II, Medical Malpractice and Black-Box Medicine, in BIG DATA,
HEALTH LAW, AND BIOETHICS 295, 300–01 (I. Glenn Cohen et al., eds. 2018); A. Michael
Froomkin, Ian Kerr & Joëlle Pineau, When AIs Outperform Doctors: The Dangers of a Tort-Induced
Over-Reliance on Machine Learning and What (Not) to Do About It (working paper),
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3114347 (manuscript at 15); Jeffrey
M. Senger & Patrick O’Leary, Big Data and Human Medical Judgment, in BIG DATA, HEALTH
LAW, AND BIOETHICS supra.

analysis treats the issue as one of malpractice specifically, rather than
negligence more generally. In the next Part, drawing on examples from the
medical context as well as a few others, I examine challenges that the use of
AI generally poses for negligence law.

                         II. HOW AI CHALLENGES NEGLIGENCE

        Outside of the realm of autonomous vehicles, AI today is most
commonly seen as a tool to help people make decisions. Most of its uses—in
employment, credit, criminal justice—if regulated at all, are not regulated
by tort law. But AI is reaching into every aspect of society, and it should not
be surprising that it has also entered several domains that are regulated at
least partially by negligence, including medical malpractice,68 data
security,69 negligent investment advice,70 as well as car accidents in partially-
autonomous vehicles.71
        It is therefore important to understand how tort law interacts with
the use of AI, not just its creation. Whether one believes that tort law is
principally concerned with efficiently and fairly allocating the costs of
accidents,72 compensating individual people for their losses,73 or correcting

    68  See Price, supra note 67.
    69   See William McGeveran, The Duty of Data Security, 102 MINN. L. REV. __
(forthcoming 2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3198872;
Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114
COLUM. L. REV. 583, 643 (2014).
     70 See generally Seth E. Lipner & Lisa A. Catalano, The Tort of Giving Negligent Investment

Advice, 39 U. MEM. L. REV. 663, 668 (2009).
     71 Hull, Bergen & Coppola, supra note 49.
     72 See generally GUIDO CALABRESI, THE COSTS OF ACCIDENTS (1970); STEVEN

SHAVELL, ECONOMIC ANALYSIS OF ACCIDENT LAW (1987); WILLIAM M. LANDES &
RICHARD A. POSNER, THE ECONOMIC STRUCTURE OF TORT LAW (1987); Guido
Calabresi & A. Douglas Melamed, Property Rules, Liability Rules, and Inalienability: One View of
The Cathedral, 85 HARV. L. REV. 1089 (1972).
     73 See generally JULES L. COLEMAN, RISKS AND WRONGS (1992); ERNEST J. WEINRIB,

THE IDEA OF PRIVATE LAW (1995); Richard A. Epstein, A Theory of Strict Liability, 2 J.
LEGAL STUD. 151 (1973); George P. Fletcher, Fairness and Utility in Tort Theory, 85 HARV. L.
REV. 537 (1972); Stephen R. Perry, The Moral Foundations of Tort Law, 77 IOWA L. REV. 449
(1992).

legal wrongs,74 the nexus between the decisions people make, the results of
those decisions, and the consequences is foundational.75 While a decision-
assistance technology cannot harm people directly, it can significantly
interfere with decision processes. AI inserts into decision-making a layer of
complex, often inscrutable, computation that substitutes statistical
decisionmaking for more individualized reasoning and often discovers
unintuitive relationships on which to base the decisions.76 The troubling
question for negligence law, then, is how the insertion of AI changes the
decisionmaking process, and whether those changes fundamentally alter the
ability of tort law to achieve its regulatory goals.
         In this Part, I explore the consequences of thinking about how users
interact with AI, and the challenges the interface poses for the negligence
regime that regulates these situations. I identify four challenges new to AI.
The first challenge is epistemic: AI often aims to go beyond human
comprehension, making error detection impossible in the moment, and the
specific errors unforeseeable. The second challenge is to human capacity:
The average person will have limited physical and mental abilities that will
be exposed by interaction with machines, with potentially harmful results.
The third challenge is adversarial: AI introduces operational security
concerns into decisions that were not previously mediated by software.
Software vulnerabilities are something that negligence doctrine has never
addressed well, and AI expands their reach. The fourth challenge is
distributional: By substituting statistical reasoning for individualized reasoning, and by
changing its output based on prospective plaintiffs, AI creates new openings
for bias to enter individual tort cases.

                              A. Unforeseeability of AI Errors

       Negligence is defined as the failure to exercise reasonable care
toward another, where that failure results in injury to that other.77 Thus, the

     74John C.P. Goldberg & Benjamin C. Zipursky, Torts as Wrongs, 88 TEX. L. REV. 917,
918 (2010).
    75 See e.g., ERNEST WEINRIB, THE IDEA OF PRIVATE LAW 73–81 (2d. ed. 2012).
    76 Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87

FORDHAM L. REV. 1085, 1089–99 (2018).
    77 JOHN C.P. GOLDBERG, ANTHONY SEBOK & BENJAMIN C. ZIPURSKY, TORT LAW:

RESPONSIBILITY AND REDRESS 47 (2d. ed. 2008).

ability of a user of technology to determine what constitutes reasonable care is
central to negligence liability. This is why Hubbard wrote that “in order to
satisfy the standard of reasonable care, users of driverless cars would need to
use the skills necessary to operate the car reasonably, by, for example,
knowing when the driving system was malfunctioning. . . .”78 But this may
not be possible for AI decision-assistance systems.
         Decision-assistance systems often try to surpass human understanding,
rather than recreate it. This is an important distinction between what can be
thought of as two categories of AI. If the user of the AI system cannot
reasonably determine in real time whether the AI is making an error, it is
unclear how the user can satisfy a duty of care in the operation of the AI.
No matter how high the threshold of care is in the breach calculus, it may
simply be impossible for the AI user to know which side of the line he is on.
In many of these applications, expecting such knowledge is unreasonable, as
it may be impossible.79
         This problem can be described as a version of unforeseeability, in this
case, a claim that specific AI errors are unforeseeable. Foreseeability is a
central component of all legal liability. It is a basic principle of tort law that “a

    78  Hubbard, supra note 27, at 1861.
    79   Though product liability is not the focus of this Article, it is worth noting that this
distinction is also—and perhaps more obviously—important for product testing. See, e.g.,
Price, Black-Box Medicine, supra note 95, at 440. It affects the ability of manufacturers to
claim that they took reasonable measures to ensure safety, which is an essential component.
See Restatement, Products Liability § 2 cmts. m–n. While this is true of all new technologies
to an extent, AI does present some challenges over and above traditional technologies.
Whereas with normal machines, one can take them apart and test the parts, and one can
examine the mechanical diagrams to understand how the machine should work, AI is
rarely decomposable. See Lipton, supra note 90, at 98–99 (discussing simulatability,
decomposability, and transparency). AI’s results are often otherwise uninterpretable, or
based on nonintuitive relationships that are difficult for humans to evaluate normatively. See
generally Selbst & Barocas, supra note 4, at 1117–29. Though testing is challenging, however,
there is a lively area of research on interpretability and/or explainability within the field of
computer science, id at 1109–17, practitioners are thinking through risk analyses where
explanation is not possible, see https://fpf.org/wp-content/uploads/2018/06/Beyond-
Explainability.pdf, and product liability has encountered products before for which the
makers do not completely understand how they work, so this issue may come to a
resolution. See David G. Owen, Bending Nature, Bending Law, 62 FLA. L. REV. 569, 574–80
(2010). The most commonly cited instance of this is drugs, see id. at 574; see also Lars Noah,
This Is Your Products Liability Restatement on Drugs, 74 BROOK. L. REV. 839, 842 (2009).

defendant is responsible for and only for such harm as he could reasonably
have foreseen and prevented.”80 Though the actual doctrine is “a vexing,
crisscrossed morass” that is impossible to pin down,81 it is still conceptually
central to the moral underpinnings of tort.82

     1. AI Errors May Be Unforeseeable

        AI systems can be divided into two groups, based on two different high-
level goals. One is to find hidden patterns in order to predict relationships
that humans cannot predict in an unaided fashion. The other is to replicate
human capabilities, but faster, more reliably, and machine-readably. The
tasks of autonomous vehicles are an example of the latter. The primary AI
in autonomous vehicles is a machine vision system.83 While it is often
supplemented by a broader range of signals than the visual spectrum,
potentially including LIDAR, radar, or ultrasonic sensors,84 it
fundamentally seeks to replicate the function of human vision systems. If a
machine vision system is shown a picture of a dog, a bus, or a crosswalk, it
will either correctly identify the dog, bus, or crosswalk or it will not. A
human can check the machine because “dog,” “bus,” and “crosswalk” are
categories that humans understand innately and can differentiate from a
background image easily.85 (This is why Google’s reCAPTCHA service
presents so many pictures of items on roads; we are collectively training

     80 H.L.A. HART & TONY HONORÉ, CAUSATION IN THE LAW 255 (2d ed. 1985).
     81 W. Jonathan Cardi, Purging Foreseeability: The New Vision of Duty and Judicial Power in the
Proposed Restatement (Third) of Torts, 58 VAND. L. REV. 739, 740 (2005).
     82 David G. Owen, Figuring Foreseeability, 44 WAKE FOREST L. REV. 1277, 1277–78

(2009).
     83 Benjamin Ranft & Christoph Stiller, The Role of Machine Vision for Intelligent Vehicles, 1

IEEE TRANSACTIONS ON INTELLIGENT VEHICLES 8 (2016).
     84 Id. at 8.
     85 Cf. Selbst & Barocas, supra note 4, at 1124 (discussing that the example of an AI

differentiating between wolves and huskies based on snow in the background is seen as
absurd precisely because “snow,” “wolf,” and “husky” are legible categories to humans,
and the presence of snow is known not to be a property of a wolf (citing Marco Tulio
Ribeiro et al., “Why Should I Trust You?” Explaining the Predictions of Any Classifier, in
PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON
KNOWLEDGE DISCOVERY AND DATA MINING 1135 (2016)).

Google’s self-driving AI.86) The same goes for the act of driving. The
machine is attempting to replicate a human activity—driving—and does so
by avoiding the same kinds of objects that humans are attempting to avoid,
but doing it better. If the car hits something, it is clearly an error to anyone
watching.
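
         To make the "human can check the machine" point concrete, here is a minimal sketch (not drawn from the sources cited above) using the scikit-learn library; its small bundled handwritten-digit images stand in for the dogs, buses, and crosswalks discussed in the text, and the particular model choice is an assumption made only for illustration.

```python
# Minimal illustrative sketch: an image classifier whose errors a human can
# verify, because the categories (here, the digits 0-9) are legible to people.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
predictions = model.predict(X_test)

# When the model is wrong, anyone looking at the image can see the true label,
# just as anyone watching can see that a car has hit something.
for pixels, predicted, actual in zip(X_test, predictions, y_test):
    if predicted != actual:
        print(f"model said {predicted}; a human looking at the image sees {actual}")
        break
```

The point is not the particular model but the legibility of the categories: because a person knows what an "8" (or a bus) looks like, the machine's mistake is apparent the moment it is made.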
        This is not a universal property of machine learning models.
Machine intelligence is fundamentally alien,87 and often, the entire purpose
of an AI system is to learn to do or see things in ways humans cannot. A
basic, but common example that illustrates AI’s strangeness is spam filtering.
AI systems learn by example. By showing a computer many examples of a
phenomenon, it can learn the characteristics of different examples that are
labeled as corresponding to different outcomes. It would be nearly
impossible for a person to try to write out rules for word choice, tone,
grammar errors and other properties that constitute “spam,” but by flagging
every spam email we see (and by presuming all others are not spam), we
provide labels to a machine so that it can find these patterns that predict
likely spam.88 These rules may not be a perfect definition, and people may
not even agree on the total set of rules that would be such a perfect
definition, but with enough data, the machine can create a good
approximation. But, as sociologist Jenna Burrell has pointed out, whereas
humans would likely categorize spam in terms of topics—“the phishing
scam, the Nigerian 419 email, the Viagra sales pitch”—computers use a
“bag of words” approach based on the appearance of certain words with
certain frequencies gleaned by seeing millions upon millions of labeled

     86  Michael Lotkowski, You Are Building a Self Driving AI Without Even Knowing About It,
HACKERNOON (Feb. 27, 2017), https://hackernoon.com/you-are-building-a-self-driving-
ai-without-even-knowing-about-it-62fadbfa5fdf.
      87 See, e.g., Ed Felten, Guardians, Job Stealers, Bureaucrats, or Robot Overlords, 2018 Grafstein

Lecture @ 36:30, https://youtu.be/DuQLeZ9Fr4U?t=2177 (“[M]achine mistakes and
human mistakes are just very different, and it’s indicative of differences in how machines
versus people think. So AI errors won’t be like human errors. . . .”); see also
https://law.duke.edu/sites/default/files/centers/cip/ai-in-admin-state_felten_slides.pdf
(slide 14); David Weinberger, Our Machines Now Have Knowledge We’ll Never Understand, WIRED
(April 18, 2017) (“[T]he issue is not simply that we cannot fathom them, the way a lay
person can’t fathom a string theorist’s ideas. Rather, it’s that the nature of computer-based
justification is not at all like human justification. It is alien.”)
      88 Jenna Burrell, How the Machine “Thinks”: Understanding Opacity in Machine Learning

Algorithms, BIG DATA & SOC. 1, 7–9, Jan.–Jun. 2016.

examples of spam.89
         Importantly, even if humans could write down a large list of rules to
define spam, this is not the way we would approach the problem. As a
result, even understanding the automatically generated rules, or the reasons
they look the way they do, is difficult. This is a phenomenon that
computer scientists have for decades referred to as the “interpretability”
problem.90 Asking for an explanation of how the system works will often
invite a reply of “that’s what the data says,” or a breakdown of which words
with which frequencies contribute to the end result.91 But as Burrell puts it,
this is “at best incomplete and at worst false reassurance,”92 because it does
not really tell us anything actionable.
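
         As a concrete illustration of the bag-of-words approach, the following is a minimal, invented sketch using the scikit-learn library rather than code drawn from Burrell or any deployed filter; the handful of example emails and their labels are made up, whereas a real filter would be trained on millions of flagged messages.

```python
# Invented toy example of a bag-of-words spam classifier; a real filter would
# be trained on millions of user-flagged messages, not four sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize now, click here",          # spam
    "cheap pills, limited time offer, act now",  # spam
    "lunch tomorrow to discuss the draft?",      # not spam
    "attached are my comments on Part II",       # not spam
]
labels = [1, 1, 0, 0]  # 1 = flagged as spam by users, 0 = everything else

# "Bag of words": each email becomes a vector of word counts, discarding
# topic, order, and meaning.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# The model learns a weight for each word from the labeled examples.
model = LogisticRegression()
model.fit(X, labels)

# The only available "explanation" is the breakdown of word weights --
# which words, at which frequencies, push a message toward "spam."
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1], reverse=True)
print(weights[:5])  # the words weighted most heavily toward spam
print(model.predict(vectorizer.transform(["free prize, act now"])))  # likely [1]
```

The printout of word weights is precisely the kind of answer Burrell describes: accurate so far as it goes, but not a justification that resembles how a person would explain the same judgment.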
         With this background, consider AI in three contexts: medicine,
finance, and data security. In medicine, AI is increasingly being used to
predict things that even well-trained humans (doctors) cannot.93 Early uses
of AI in medicine were aimed at identifying high- and low-risk patients in
different contexts.94 But now, people are developing AI tools that will
diagnose patients or recommend treatment.95 These tools are predicted to

     89  Id. at 9.
     90   See Zachary C. Lipton, The Mythos of Model Interpretability, PROC. 2016 ICML
WORKSHOP ON HUMAN INTERPRETABILITY IN MACHINE LEARNING 96, 98–99 (discussing
simulatability, decomposability, and transparency).
      91 Burrell, supra note 88, at 9.
      92 Id.
      93 See W. Nicholson Price II, Artificial Intelligence in Health Care Applications and Legal Issues,

ABA SCITECH LAW., Fall 2017, at 10, 10; Katie Chockley & Ezekiel Emanuel, The End of
Radiology? Three Threats to the Future Practice of Radiology, 13 J. AM. COLL. RADIOLOGY 1415
(2016) (discussing ML advances in radiology);
https://www.technologyreview.com/s/604271/deep-learning-is-a-black-box-but-health-
care-wont-mind/.
      94 I. Glenn Cohen et al., The Legal and Ethical Concerns that Arise from Using Complex

Predictive Analytics in Health Care, HEALTH AFF. July 2014, at 1139, 1140; Rich Caruana et
al., Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission, in
Proceedings of the 21th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining 1721 (2015).
      95 Jane R. Bambauer, Dr. Robot, 51 U.C. DAVIS L. REV. 383, 387 (2017); W.

Nicholson Price II, Regulating Black-Box Medicine, 116 MICH. L. REV. 421, 425–26 (2017)
[hereinafter Price, Regulating]; W. Nicholson Price II, Black-Box Medicine, 28 HARV. J.L. &
TECH. 419, 426 (2015); [hereinafter Price, Black-Box Medicine];

become better than doctors as a general matter.96 Like the finance example below,
medical diagnostic and treatment tools seek to find and take advantage of
patterns that humans would not otherwise recognize. Of course, there is
great risk here; a misdiagnosis or mistreatment can be fatal, and at least in
the case of IBM’s Watson, it took just 14 months to get from extreme hype
to extreme disappointment.97
        Moving to finance, a robo-advisor is an automated or semi-
automated service that offers advice about investments, insurance, or
credit.98 Most robo-advisors aim to help people without large sums of
money automatically build an investment portfolio and automatically
rebalance it as needed.99 Additionally, it is well-known that most people who
actively trade in the stock market lose money because the stock market is so
inherently unpredictable and humans trade emotionally. This seems like a
use good case for AI.100 The model is well-tested—machine learning
techniques to predict markets have been around since at least the early
2000s,101 and are now used by the majority of hedge funds.102 But of course,

       96   Froomkin, Kerr & Pineau, supra note 67, at 15; Senger & O’Leary, supra note 67, at
291.
     97 Compare Mallory Locklear, IBM’s Watson Is Really Good at Creating Cancer Treatment Plans,
ENGADGET (June 1, 2017), https://www.engadget.com/2017/06/01/ibm-watson-cancer-
treatment-plans/, with Angela Chen, IBM’s Watson Gave Unsafe Recommendations for Treating
Cancer, THE VERGE (July 26, 2018),
https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-
science.
     98 Tom Baker & Benedict Dellaert, Regulating Robo Advice Across the Financial Services

Industry, 103 IOWA L. REV. 713, 719–20 (2018).
     99 U.S. SEC. & EXCH. COMM’N, DIV. OF INV. MGMT., GUIDANCE UPDATE: NO. 2017-

02 (2017).
     100 Ayn de Jesus, Robo-advisors and Artificial Intelligence – Comparing 5 Current Apps, TECH

EMERGENCE (July 6, 2018), https://www.techemergence.com/robo-advisors-artificial-
intelligence-comparing-5-current-apps/.
     101 See Vatsal H. Shah, Machine Learning Techniques for Stock Prediction, FOUNDATIONS OF

MACHINE LEARNING, Spring 2007, 1; Paul D. Yoo, Maria H. Kim & Tony Jan, Machine
Learning Techniques and Use of Event Information for Stock Market Prediction: A Survey and Evaluation,
PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTATIONAL
INTELLIGENCE FOR MODELING, CONTROL AND AUTOMATION (CIMCA 2005), at 835.
     102 Amy Whyte, More Hedge Funds Using AI, Machine Learning, THE INSTITUTIONAL

INVESTOR (July 18, 2018),
https://www.institutionalinvestor.com/article/b194hm1kjbvd37/More-Hedge-Funds-