AI Regulation: What You Need to Know to Stay Ahead of the Curve
By Peter J. Schildkraut1
Table of Contents

Overview
What Is AI?
    Machine Learning
Why Regulate AI?
    Accuracy and Bias
    Power
    Market Failures
How Is AI Regulated?—Application of Familiar Regulatory Regimes
    Generally Applicable Law
    Express Regulation of AI
How Will AI Be Regulated?—Development of New Regulatory Regimes
    US Steps Toward Regulating AI
        The Trump Administration
        What Will Happen Under the Biden Administration?
    European Steps Toward Regulating AI
        The European Commission
        The European Parliament
        The United Kingdom
How Can You Keep Your Company Ahead of the Curve?
    Risk Assessment and Mitigation
        Assessment
        Mitigation Through Explanation
        Mitigating Bias
        Other Considerations: Trade-Offs and Outsourcing
    Oversight
        Process and Structure
        Explainability for Oversight
        Documentation
            Requirements of Regulations and Standards
            Defensive Document Retention
Conclusion
Endnotes

                    An abridged version of this report was previously published in Bloomberg Law.
Artificial intelligence (AI) is all around us. AI powers Alexa, Google Assistant, Siri, and other digital assistants. AI makes sense of our natural language searches to deliver (we hope) the optimal results. When we chat with a company representative on a website, we often are chatting with an AI system (at least at first). AI has defeated the (human) world champions of chess and Go.2 AI is advancing diagnostic medicine, driving cars and making all types of risk assessments. AI even enables the predictive coding that has made document review more efficient. Yet, if you're like one chief legal officer I know, AI remains on your list of things you need to learn about.

Now is the time! Right or wrong, there are growing calls for more government oversight of technology. As AI becomes more common, more powerful, and more influential in our societies and our economies, it is catching the attention of legislators and regulators. When a prominent tech CEO like Google's Sundar Pichai publicly proclaims "there is no question in my mind that artificial intelligence needs to be regulated," the questions are when and how—not whether—AI will be regulated.3

Indeed, certain aspects of AI already are regulated, and the pace of regulatory developments is accelerating. What do you need to know—and what steps can your company take—to stay ahead of this curve?

What Is AI?

Before plunging into the present and future of AI regulation, let's review what AI is and how the leading type works. There are many different definitions of AI, but experts broadly conceive of two versions, narrow (or weak) and general (or strong). All existing AI is narrow, meaning that it can perform one particular function. General AI (also termed "artificial general intelligence" (AGI)) can perform any task and adapt to any situation. AGI would be as flexible as human intelligence and, theoretically, could improve itself until it far surpasses human capabilities. For now, AGI remains in the realm of science fiction, and authorities disagree on whether AGI is even possible. While serious people and organizations do ponder how to regulate AGI4—in case someone creates it—current regulatory initiatives focus on narrow AI.

Machine Learning

One type of AI, machine learning, has enabled the recent explosion of AI applications. "Machine learning systems learn from past data by identifying patterns and correlations within it."5 Whereas traditional software, and some other types of AI, run particular inputs through a preprogrammed model or a set of rules and reach a defined result (akin to 2+2=4), a machine learning system builds its own model (the patterns and correlations) from the data it is trained upon. The system then can apply the model to make predictions about new data. Algorithms are "now probabilistic. We are not asking computers to produce a defined result every time, but to produce an undefined result based on general rules. In other words, we are asking computers to make a guess."6

To take an example from legal practice, in a technology-assisted document review, lawyers will code a small sample of the document collection as responsive or not responsive. The machine learning system will identify patterns and correlations distinguishing the sample documents that were coded "responsive" from those coded "not responsive." It then can predict whether any new document is responsive and measure the model's confidence in its prediction. For validation, the lawyers will review the predictions for another sample of documents, and the system will refine its model with the lawyers' corrections. The process will iterate until the lawyers are satisfied with the model's accuracy. At that point, the lawyers can use the system to code the entire document collection for responsiveness with whatever human quality control they desire.

The quality of the training data set matters greatly. The machine learning system assumes the accuracy of what it is told about the training data. In the document review example, if, when the lawyers train the system, they incorrectly code every email written by a salesperson as responsive, they will bias the model towards predicting that every sales team email in the collection is responsive. Note that I did not say they will train the model to identify every sales team email as responsive. The miscoded training data will increase the probability that the model will predict any given sales team email is responsive, but they will not make this a certainty. Other things about an email might overcome the bias. For instance, the lawyers may have coded every email about medical appointments as nonresponsive. As a result, the model might nevertheless predict that an email about a medical appointment is nonresponsive even if it comes from a salesperson.
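To make the document-review example concrete, below is a minimal sketch of how such a responsiveness classifier might be assembled with common open-source tools. It assumes the Python scikit-learn library; the sample documents and labels are invented stand-ins for a lawyer-coded seed set, and a real technology-assisted review platform would layer iterative validation rounds and quality control on top of this core loop.

    # Minimal sketch of a responsiveness classifier, assuming scikit-learn.
    # The documents and labels below are hypothetical stand-ins for a
    # lawyer-coded training sample (1 = responsive, 0 = not responsive).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    sample_docs = [
        "Q3 pricing proposal for the Acme supply contract",
        "Lunch menu for the office holiday party",
        "Follow-up on the Acme indemnification clause",
        "Reminder about Tuesday's dental appointment",
    ]
    labels = [1, 0, 1, 0]

    # The model is built from patterns in the coded sample,
    # not from preprogrammed rules.
    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(sample_docs), labels)

    # Predict a new document's responsiveness, with the model's confidence.
    new_doc = ["Revised Acme pricing terms attached"]
    probability = model.predict_proba(vectorizer.transform(new_doc))[0, 1]
    print(f"P(responsive) = {probability:.2f}")

In the iterative workflow described above, each round of lawyer corrections would simply be appended to the coded sample and the model refit, until the validation results satisfy the review team.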
Why Regulate AI?

Several characteristics of AI drive the calls for regulation.

Accuracy and Bias

AI predictions are sometimes inaccurate, which can injure both individuals and society. Poorly performing AI might underestimate a person's fitness for a job or creditworthiness. AI could crash a car by misperceiving environmental conditions or misjudging what another vehicle will do. In short, AI could harm individuals in all the ways that humans and their creations already do (and probably some novel ways too).

Policymakers may leave redress to the courts,7 but there may be gaps in existing law that legislators decide to fill with new causes of action. Governments also may turn to regulation as a prophylactic to supplement the tort system, as many countries have done in areas such as food and consumer product safety.

The pressure may be even greater to regulate AI applications that might cause societal harm. For instance, AI can discriminate against members of historically disadvantaged groups. Imagine a human resources AI application trained to identify the best job candidates by finding those most similar to previous hires. Free from the implicit biases we all carry, it should be completely objective in selecting the best candidates for that company, right? But imagine further that the applicant pool whose resumes comprised the training set was predominantly male. Actually, you don't have to imagine. Using this method, Amazon's Edinburgh office developed a machine learning system for hiring decisions that "effectively taught itself that male candidates were preferable."8

Facial recognition technology also raises social-justice concerns. A study of 189 facial recognition systems found minorities were falsely named much more frequently than whites and women more often than men.9 Privacy questions aside, using facial recognition to identify criminal suspects makes these racial differences particularly troubling because falsely identified individuals may be surveilled, searched or even arrested.10

Technology's promise is to help us escape the biases all people have. Instead, however, these examples show how AI can wind up reinforcing our collective biases. What is going wrong? First, like all of us, algorithm creators have cultural blind spots, which can cause them to miss opportunities to correct their algorithms' disparate impacts. Second, AI forms its predictions from data sets programmers or users provide. As a result, AI predictions are only as good as the data the AI is trained upon. Training data, in turn, reflect the society from which they are collected, biases included. To prevent societal biases from infecting AI predictions, developers and operators—and their attorneys and other advisors—must recognize them and determine how to adjust the training data.

Even when AI makes accurate and unbiased predictions, however, the results can be troubling. Several years ago, researchers found that Facebook was less likely to display ads for science, technology, engineering, and math jobs to women. Women were interested in the jobs, and the employers were interested in hiring women. But "the workings of the ad market discriminated. Because younger women are valuable as a demographic on Facebook, showing ads to them is more expensive. So, when you place an ad on Facebook, the algorithms naturally place ads where their return per placement is highest. If men and women are equally likely to click on STEM job ads, then it is better to place ads where they are cheap: with men."11 (A short arithmetic illustration of this dynamic appears after the Power discussion below.)

Power

In a 2018 MIT-Harvard class on The Ethics and Governance of Artificial Intelligence, Joi Ito relates being told that "machines will be able to win at any game against humans pretty soon." Ito then observes, "A lot of things are games. Markets are like games. Voting can be like games. War can be like games. So, if you could imagine a tool that could win at any game, who controlled it and how it is controlled has a lot of bearing on where the world goes."12 It is easy to see why the public might demand regulation of this power.
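Returning to the STEM-ad example: the skew requires no discriminatory intent, only an optimizer maximizing clicks per dollar. The sketch below works through that arithmetic; the prices and click-through rates are invented for illustration.

    # Hypothetical illustration of how click-per-dollar optimization can
    # skew ad delivery. All numbers are invented for this sketch.
    budget = 1000.0                                       # advertiser's spend
    cost_per_impression = {"men": 0.005, "women": 0.008}  # women cost more to reach
    click_rate = {"men": 0.02, "women": 0.02}             # equal interest in the ad

    for group in cost_per_impression:
        clicks_per_dollar = click_rate[group] / cost_per_impression[group]
        print(f"{group}: {clicks_per_dollar:.1f} expected clicks per dollar")
    # men: 4.0 clicks per dollar; women: 2.5 clicks per dollar.
    # A click-maximizing allocator therefore spends the entire budget on men,
    # even though both groups were equally likely to click.
    print(f"{budget / cost_per_impression['men']:.0f} impressions, all to men")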
Market Failures

For all its power, though, AI cannot transcend market forces and market failures (even if it might be able to win market "games"). There will be cases when AI performs accurately in a socially desirable arena, yet a market outcome may not be desirable.

Consider self-driving cars. Aside from relieving drivers of the drudgery of daily commutes, freeing time for more pleasant or productive activity, a major selling point for vehicle autonomy is safety. The underlying AI won't get tired or distracted or suffer from other human frailties that cause accidents. But how should an autonomous vehicle be programmed to pass cyclists in the face of oncoming traffic? The vehicle's occupants will be safer if the vehicle travels closer to the cyclist and further from oncoming traffic. The cyclist will be safer if the car moves closer to the oncoming traffic and further from the cyclist. Nobody wants to buy an autonomous vehicle programmed to protect others at its occupants' peril, but everyone wants other people's autonomous vehicles to be programmed to minimize total traffic casualties.13 This is a classic collective action problem in which regulation can improve the market outcome.

How Is AI Regulated?—Application of Familiar Regulatory Regimes

Widespread regulation is coming because AI sometimes produces inaccurate or biased predictions, can have great power when accurate and remains vulnerable to market failures. (I use "regulation" expansively to include statutory constraints enforced through litigation, not just agency-centered processes.) What will this regulation look like?

Some regulation of AI will look very familiar, as AI already is regulated in certain economic sectors and activities.

Generally Applicable Law

"AI did it" is, by and large, not an affirmative defense. If something is unlawful for a human or non-AI technology, it probably is illegal for AI. For instance:

●   Title VII of the US Civil Rights Act of 1964 (as amended) prohibits employment practices with "a disparate impact on the basis of race, color, religion, sex, or national origin" unless "the challenged practice is job related for the position in question and consistent with business necessity."14 There is no carve-out for AI.

●   Likewise, the US Equal Credit Opportunity Act (ECOA),15 which also does not mention AI, "prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance. If, for example, a company made credit decisions [using AI] based on consumers' Zip Codes, resulting in a 'disparate impact' on particular ethnic groups, … that practice [could be challenged] under ECOA."16

●   Antidiscrimination regimes under US state law17 and in other countries18 similarly apply to AI.

●   The US Fair Credit Reporting Act (FCRA) requires certain disclosures to potential employees, tenants, borrowers, and others regarding credit or background checks and further disclosures if the report will lead to an adverse action.19 Credit and background checks that rely on AI are just as regulated as those that don't. And, to comply with FCRA (or to defend against an ECOA disparate-impact challenge), a company using AI in covered decisions may need to explain what data the AI model considered and how the AI model used the data to arrive at its output.20

●   The US Securities and Exchange Commission has enforced the Investment Advisers Act of 194021 against so-called "robo-advisers," which offer automated portfolio management services. In twin 2018 proceedings, the SEC found that robo-advisers had made false statements about investment products and published misleading advertising.22 The agency also has warned robo-advisers to consider their compliance with the Investment Company Act of 194023 and Rule 3a-424 under that statute.25

●   There should be little doubt that the US Food & Drug Administration will enforce its good manufacturing practices regulations26 on AI-controlled pharmaceutical production processes.

●   And, of course, claims about AI applications must not deceive, lest they run afoul of Section 5 of the US Federal Trade Commission Act,27 state consumer protection statutes and similar laws in other countries. Indeed, one of the FTC's commissioners wants to crack down on "marketers of algorithm-based products or services [that] represent that they can use the technology in unsubstantiated ways" under the agency's "deception authority."28
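The disparate-impact theories in the Title VII and ECOA bullets above lend themselves to self-testing. As one illustration, the sketch below applies the four-fifths rule of thumb long used by US agencies to screen employment selection rates; it is a screening heuristic, not the legal test under any particular statute, and the outcome data are hypothetical.

    # Screening model outcomes for potential disparate impact with the
    # four-fifths rule of thumb. The (group, selected) outcomes are hypothetical.
    from collections import defaultdict

    outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
                ("B", True), ("B", False), ("B", False), ("B", False)]

    tallies = defaultdict(lambda: [0, 0])          # group -> [selected, total]
    for group, selected in outcomes:
        tallies[group][0] += int(selected)
        tallies[group][1] += 1

    rates = {group: sel / total for group, (sel, total) in tallies.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best                        # compare to top group's rate
        flag = "review for adverse impact" if ratio < 0.8 else "ok"
        print(f"group {group}: selected {rate:.0%}, ratio {ratio:.2f} ({flag})")

As discussed under Mitigating Bias below, FTC Commissioner Slaughter has signaled that this kind of self-testing is a strong sign of good-faith compliance efforts.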
The longer broad regulation takes to arrive, the more we should expect government enforcers to apply their existing powers in new ways. Just as the FTC has used its Section 5 authority in the absence of a federal privacy statute,29 that agency is turning its attention to AI abuses.30

Express Regulation of AI

In the EU and the United Kingdom, the General Data Protection Regulation, which reaches far beyond AI, restricts the development and use of AI in connection with individuals and their personal data.31 For instance, Article 6 specifies when and how "personal data" may be "processed,"32 which encompasses pretty much any way one might use data with AI.33 Articles 13-14 require "controllers" to provide clear and simple explanations about using personal data in "profiling" or other automated decision-making, including discussions of the AI's logic and the significance and anticipated consequences of the AI output for the individuals.34 Article 22 establishes an individual's right not to be subject to fully automated decisions with significant effects on that person. To waive that right, the individual must be able to have a human hear one's appeal of the automated decision.35 And Article 16's right to rectification of incorrect data and Article 17's right to be forgotten may require retraining or even deleting an AI model that incorporates personal data by design.36

US states, too, are adopting privacy and other laws expressly regulating AI.

●   As amended by the California Privacy Rights Act of 2020, the California Consumer Privacy Act of 2018 contains requirements like the GDPR's (although the restrictions on automatic decision-making are left to be fleshed out by regulations).37

●   The new Virginia privacy statute does as well.38

●   Another California law targets intentionally deceitful use of chatbots masquerading as real people in certain commercial transactions or to influence voting.39

●   Depending on the technology involved, the Illinois Biometric Information Privacy Act may regulate private entities' use of facial recognition technology.40

●   Illinois also has the Artificial Intelligence Video Interview Act, under which employers must notify job applicants when they use AI to vet video interviews, explain how the AI evaluates applicants, and obtain applicants' consent to AI screening.41

How Will AI Be Regulated?—Development of New Regulatory Regimes

Because existing rules don't address all concerns about AI, policymakers worldwide are considering the creation of new regulatory regimes. There appears to be a loose consensus among at least the advanced democracies that the degree of regulation should be tied to an AI application's risk. For instance, a music-recommendation algorithm has much lower stakes than AI used for screening job candidates or loan applicants, diagnosing disease, or operating an autonomous vehicle.42 The Organization for Economic Cooperation and Development (OECD) and the G20 have adopted principles embodying this high-level consensus.43 (See box.)
OECD/G20 Principles for Responsible Stewardship of Trustworthy AI

●   AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

●   AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.

●   There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

●   AI systems must function in a robust, secure and safe way throughout their lifecycles, and potential risks should be continually assessed and managed.

●   Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
But the international consensus seems likely to fray—at least somewhat—in deciding the degree of regulation warranted by particular risks. The United States has a culture of permissionless innovation, waiting for harms to emerge and be well-understood before regulating. European governments tend to be quicker to invoke the precautionary principle: innovations should be regulated until they are proven safe. Yet, even in the United States, some warn against repeating what they see as the mistake of not regulating the internet until it was too late to prevent various harms.44

US Steps Toward Regulating AI

The Trump Administration

The Trump Administration sought to "promote a light-touch regulatory approach" to AI.45 Last year, OMB published guidance on regulating AI consistent with this light-touch approach.46 According to OMB:

●   "The appropriate regulatory or non-regulatory response to privacy and other risks must necessarily depend on the nature of the risk presented and the tools available to mitigate those risks."47 In particular, "[i]t is not necessary to mitigate every foreseeable risk. … [A] risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits."48 This comparison should consider "whether implementing AI will change the type of errors created … and … the degree of risk tolerated in other existing systems."49

●   Instead of "[r]igid, design-based regulations … prescrib[ing] the technical specifications of AI applications," agencies should consider sector-specific policy guidance or frameworks, pilot programs and experiments to foster creative prophylactic approaches and to learn what works, and the use of voluntary consensus standards and frameworks.50

●   Agencies should evaluate training data quality, assess protection of privacy and cybersecurity, promote nondiscrimination and general fairness in AI application outcomes (both absolutely and compared to existing processes), and consider what constitutes adequate disclosure and transparency about using AI.51

The US Department of Transportation illustrated the Trump Administration's approach. As US DOT grappled with the challenges of autonomous vehicles, it prioritized voluntary, consensus-based technical standards, which can evolve more easily in tandem with technology than regulations.52 However, when standards will not suffice and regulation is necessary, "US DOT will seek rules that are as nonprescriptive and performance-based as possible," focusing on outcomes instead of specifying features, designs or other inputs.53

What Will Happen Under the Biden Administration?

It remains to be seen whether the Biden Administration will have an equally light touch. In recent decades, Democrats (like Republicans) largely have supported permissionless innovation to encourage new technologies. For example, when home satellite television services were launching, both parties generally agreed to exempt them from many regulatory burdens borne by their established cable television competitors. Similarly, Democrats and Republicans alike mostly took a laissez-faire approach to the internet for the first quarter century of the commercial World Wide Web.

However, members of both parties have begun pressing for greater regulation of at least some internet companies and applications, arguing they have amassed too much economic, social and political power. Shaped in part by this experience, and spurred by concerns that AI and other algorithms are—under a cloak of technological objectivity—reinforcing structural biases in US society, various Democrats have called for regulation of AI even at this early stage. For example, US Senators Cory Booker and Ron Wyden and US Representative Yvette Clarke have introduced the Algorithmic Accountability Act.54 According to its sponsors, this legislation would:

●   Authorize FTC regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply both to new and existing systems.

●   Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy[,] and security.

●   Compel companies to evaluate how their information systems protect the privacy and security of consumers' personal information.
●   Mandate correction of problems companies discover during the impact assessments.55

The Consumer Online Privacy Rights Act introduced by Senators Cantwell, Schatz, Klobuchar, and Markey similarly would require algorithmic decision-making impact assessments.56 With an administration that is less hostile to regulation, Democratic control of the Senate, and greater public awareness of and sensitivity to systemic discrimination, such legislation may gain traction and even become law.

European Steps Toward Regulating AI

The European Commission

While the United States has yet to embrace broad regulation of AI, Europe is plowing ahead. To supplement the GDPR, the EU is moving towards additional legislation to regulate high-risk AI applications—and is likely to be more prescriptive in regulating than the United States will be.

In April 2021, the European Commission released proposed legislation regulating AI.57 (The Commission simultaneously proposed updated legislation regulating machinery, which, among other changes, would address the integration of AI into machinery, consistent with the AI proposal.)58 The proposed AI legislation classifies AI systems as high-risk or not based upon intended use. If adopted, the legislation would require all high-risk AI systems to meet standards for compliance programs and risk management; human oversight; documentation, disclosure and explainability; robustness, accuracy and cybersecurity; and record retention. High-risk systems would have to demonstrate compliance through conformity assessments before introduction into the European market. Some AI systems—high-risk or not—would have to meet transparency standards. Certain uses of AI would be prohibited altogether. Most current uses of AI would remain unregulated. (See box summarizing selected proposals.)59 In the transportation sector, high-risk AI safety components, products or systems covered by eight existing legislative acts would be exempt from the proposal although those acts would be amended to take the proposal's requirements for high-risk systems into account.60 Interested parties may submit comments during the consultation period.61 The European Parliament and Council of the European Union then will consider the Commission's proposal in light of those comments.
Selected European Commission Proposals

Prohibited Uses

●   Harmful distortion of human behavior through subliminal techniques or exploitation of age or disability.
●   Real-time remote biometric identification systems in public places for law enforcement (limited exceptions).
●   Governmental social scoring of individuals leading to discriminatory treatment across contexts or disproportionate to behavior.

High-Risk Uses

●   Many types of safety components and products.
●   Public assistance determinations.
●   Law enforcement use for individual risk assessments, credibility determinations, emotion detection, identification of "deep fakes," evidentiary reliability evaluations, predictive policing, profiling of individuals, and crime analytics.
●   Remote biometric identification and categorization of people.
●   Evaluation of creditworthiness and credit scoring (limited exception).
●   Immigration determinations.
●   Admission, assignment and assessment of students.
●   Emergency services dispatch.
●   Judicial decision-making.
●   Recruitment and other employment decisions.
●   Other uses the Commission later designates as high-risk.
Selected European Commission Proposals (Cont.)

Requirements for High-Risk AI Systems

Compliance (Providers)

●   Quality management system (compliance program), including regularly updated prescribed risk management system reflecting state of the art.
●   Pre-market conformity assessment (certifications valid for up to five years absent substantial modifications) and post-market monitoring with immediate correction and reporting requirements upon reason to consider system noncompliant or risky to health, safety or fundamental rights.
●   Registration in EU database.

Compliance (Others)

●   Third party that sells or installs under own brand, alters intended purpose, substantially modifies system, or incorporates system into product treated as provider.
●   Importers and distributors, among other obligations, must verify upstream compliance, not degrade compliance and report if system risky to health, safety or fundamental rights; distributors have correction duty upon reason to consider system noncompliant.
●   Professional users must operate consistent with provider instructions, monitor operations, input only relevant data, and assess data protection impact.

Human Oversight—Humans with necessary competence, training and authority must oversee operation and be able to:

●   Stop operation.
●   Disregard, override or reverse the output.

Documentation, Disclosure and Explainability—To enable users to understand and control operation and to facilitate governmental oversight, providers must supply:

●   Concise, complete, correct, clear, relevant, accessible, and comprehensible instructions describing:
    ◦   Characteristics, capabilities and limitations of performance, including foreseeable unintended outcomes and other risks.
    ◦   Human oversight and related technical safety measures.
    ◦   Expected lifetime and necessary maintenance and care.
●   Detailed prescription of continuously updated technical documentation covering (in part):
    ◦   Descriptions of system and development process, including compliance trade-offs.
    ◦   Monitoring, functioning and control, including system's risks and mitigation.
    ◦   Provider's risk management system.

Robustness, Accuracy and Cybersecurity—High-risk AI systems must:

●   Perform consistently at appropriate levels throughout their lifecycles, notwithstanding attempts at manipulation of training or operation or unauthorized alteration.
●   Meet training, validation and testing data quality requirements.

Penalties for Violations—up to greater of:

●   €30 million.
●   Six percent of global annual revenue.

Retention of Records and Data

●   Automatic logging of operations, ensuring traceability.
●   Ten years:
    ◦   Technical documentation.
    ◦   Documentation of quality management system.
    ◦   Certain conformity records.

Requirements for Certain AI Systems

Transparency—For high-risk and low-risk systems, if applicable:

●   System must inform people they are interacting with an AI system unless obvious.
●   Individuals exposed to emotion recognition or (unless for law enforcement) biometric categorization system must be notified.
●   "Deep fakes" must be identified (qualified law enforcement and expressive, artistic and scientific freedom exceptions).
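The automatic-logging and record-retention items above reward building traceability into an AI system from the start rather than retrofitting it. The following is a minimal sketch of one way to do that in Python; the field names, file format and example values are illustrative assumptions, not anything the proposal prescribes.

    # Illustrative sketch: append-only logging of each AI decision with enough
    # context to reconstruct it later. Field names and layout are assumptions
    # for this example, not requirements drawn from the EU proposal.
    import json
    import time
    import uuid

    def log_prediction(model_version, inputs, output, logfile="predictions.jsonl"):
        """Append one traceable prediction record as a JSON line."""
        record = {
            "record_id": str(uuid.uuid4()),   # unique handle for later audits
            "timestamp": time.time(),         # when the decision was made
            "model_version": model_version,   # which model produced it
            "inputs": inputs,                 # what the model saw
            "output": output,                 # what the model decided
        }
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return record["record_id"]

    # Example: recording a hypothetical credit-scoring decision.
    rid = log_prediction("credit-model-1.3",
                         {"applicant_id": "A-17", "income_band": "C"},
                         "refer to human reviewer")
    print("logged", rid)

Append-only records like these address the automatic-logging item; the technical documentation and quality-management records carry their own ten-year retention periods.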
The European Parliament

Previously, the European Parliament had adopted its own recommendations for proposed legislation.62 Parliament would not ban specific AI practices as the Commission proposed.63 On the other hand, Parliament defined "high risk" somewhat more expansively,64 and it suggested requiring high-risk AI not to "interfere in elections or contribute to the dissemination of disinformation" or cause certain other social harms.65 Parliament also proposed that all conformity assessments be performed by approved third parties while the Commission would allow providers of certain types of high-risk AI to assess themselves.66 Moreover, Parliament included an individual right to redress for violations and whistleblower protections,67 which the Commission did not. Any of these proposals could find their way into the final legislation that ultimately is adopted.

Parliament also recommended separate legislation amending the civil liability regime for AI systems.68 An earlier Commission report had indicated such changes might be necessary,69 so its proposed legislation may be forthcoming.

For enforcement and other AI governance tasks, both the Commission and Parliament would look to a mix of agencies. Sectoral authorities (e.g., medical device regulators) at both the EU and member state levels would continue to implement their mandates.70 New national authorities would fill gaps, coordinated by a new European Artificial Intelligence Board.71 Parliament also suggests new national supervisory authorities to be coordinated by the Commission or other Union-level bodies.72 How this mix of authorities gels (or not) will significantly influence how burdensome the contemplated regulatory regime becomes.

Whatever legislation ultimately emerges from the European Union's deliberations, exporters of AI-enabled products and services into the EU should expect to be covered.73 The Commission's proposed legislation would even reach providers and professional users of AI systems outside the EU if the system's output is used inside the EU.74 US and other non-European companies need to monitor the progress of this legislation and plan for compliance.

The United Kingdom

Increasing the complexity of the task, the post-Brexit UK may be forging its own path on AI regulation. The Information Commissioner's Office (ICO, the UK's data protection agency) advises developers and users of AI on their obligations under the GDPR and other legislation. When warranted, the ICO can impose significant monetary penalties for violations.75 Moreover, the UK Department for Digital, Culture, Media & Sport (DCMS) formed the Centre for Data Ethics and Innovation (CDEI) to "investigate and advise on how we govern the use of data and data-enabled technologies, including Artificial Intelligence."76 CDEI's mandate continues to be somewhat fluid, and the UK government promised—but has yet to propose—legislation to establish the agency more formally.77 Nevertheless, it seems CDEI will not itself be an AI regulator. That task will remain with the ICO and sector-specific agencies. In its advisory capacity, CDEI has stated "[a]t this stage," it does not perceive "a need for a new specialized regulator or primary legislation," at least not "to address algorithmic bias."78 However, CDEI is calling for clarification of how various statutes and regulations apply to algorithmic bias.79

How Can You Keep Your Company Ahead of the Curve?

Generally applicable regulations cover use of AI in already-regulated activities. The United States, the United Kingdom, the European Union, and other jurisdictions80 are—or may be—moving toward broader regulation of AI. We are beginning to see, albeit less clearly in some places, what will emerge from their deliberations. While many uncertainties remain, it is possible—indeed, imperative—for your company to get ahead of the curve.

Your company's employees probably haven't contemplated most of the issues surrounding AI regulation. The trade association CompTIA found that only 14 percent of IT and business professionals associate "ethical problem" with AI.81 If your personnel are similar, they simply aren't considering the legal and reputational risks from AI.

These risks require attention, however. Working from probabilities, not certainty, your company's AI will make mistakes. A big enough error will attract scrutiny from regulators, the media and the plaintiffs' bar. By being proactive, you can help your company avoid unnecessary risk.
Moreover, retrofitting regulatory compliance may be difficult, if not impossible. For example, some types of AI incorporate their training data into the model itself. However, the GDPR permits people to withdraw their consent to continuing storage of their data.82 If your company is subject to the GDPR (or another law with a similar provision) and training data are incorporated into your AI model, your company's designers should make excising data from the model upon request as easy as possible.83 Building ethical AI principles into the development, procurement and use of AI now can reduce the chance of having to scrap your company's efforts and start over when future regulations come into force.

So, where do you begin?

Risk Assessment and Mitigation

To start, you should audit your company's AI projects for compliance with applicable privacy laws like the GDPR and CPRA, if you aren't doing so already. Machine learning relies upon copious amounts of data. And the collection, storage and processing of personal data often implicates these statutes. For instance, the GDPR requires a data protection impact assessment if your company's AI systems process personal data for decisions with significant effects on individuals.84

Next, you ought to make AI risk assessment an ongoing part of your company's development, procurement and use of AI. From a 30,000-foot view, the US National Institute of Standards and Technology (NIST) suggests considering characteristics including "accuracy, reliability, resiliency, objectivity, security, explainability, safety, and accountability."85 In one of many other variants, the Institute of Electrical and Electronics Engineers (IEEE) offers eight general principles for ethically aligned design of what it terms "autonomous and intelligent systems": human rights, well-being, "data agency" (i.e., control of one's own data), effectiveness, transparency, accountability, awareness of misuse, and competence.86

Assessment

As a practical matter, you'll want a checklist to explore the relevant issues, but "[k]eep in mind that [any] assessment list will never be exhaustive. Ensuring trustworthy AI is not about ticking boxes, but about continuously identifying requirements, evaluating solutions and ensuring improved outcomes throughout the AI system's lifecycle, and involving stakeholders therein."87 One leading list, the European Commission's High-Level Expert Group on Artificial Intelligence's The Assessment List for Trustworthy AI (ALTAI), covers a broad range of topics.88 Organizations like the IEEE are developing standards for AI that will inform future assessment lists.89 Of course, any list must be adapted for your company generally and specifically for each AI system your company develops, procures or uses.

Before proceeding deeply into an assessment, though, you should consider what harms can result from the use (or misuse) of the AI. If the worst harm is trivial (recall the music-recommendation algorithm), your company may want to conduct a thorough review to improve its product. But, from a compliance perspective, an extensive assessment would waste resources. Conversely, where the AI might harm human health, safety or welfare, careful risk assessment and mitigation become a prudent investment.

Mitigation Through Explanation

Mitigating AI risk involves many dimensions. Explainability is a good place to start. Explaining an adverse decision enables an effective "appeal" if an AI prediction doesn't make sense or acceptance of the outcome if it does. Either way, the affected party will be less inclined to complain to a regulator.90 In addition, regulators are likely to require explainability in various instances.91

But not all AI predictions can be explained so that humans can connect the dots. If a machine-learning algorithm can predict cancer more accurately than a radiologist through data patterns that are imperceptible to humans, I'll choose the more accurate prediction over the explainable one.92 Even when the AI's output eludes human understanding, though, your company probably can provide other types of explanation instead. The ICO and The Alan Turing Institute identify six main varieties:

●   Rationale explanation: "[A]n accessible and nontechnical" version of the reasons behind a decision.

●   Responsibility explanation: Who developed, managed and operated the AI system and how to obtain a human review of a decision.

●   Data explanation: What data went into the model and how they were used to make a particular decision.

●   Fairness explanation: What design and implementation processes ensure the AI's decisions "are generally unbiased and fair" and that a specific "individual has been treated equitably."
●   Safety and performance explanation: How designers and operators “maximise[d] the accuracy, reliability, security[,] and robustness” of the AI system’s decisions and operations.

●   Impact explanation: The consideration and monitoring of the effects “an AI system and its decisions has or may have on an individual, and on wider society.”93

Not every type of explanation will be achievable for every AI system, but any system should have at least one available.

Mitigating Bias

Bias should be another focus for risk mitigation.94 Training data should be free from bias (in both the neutral statistical sense and the damaging human sense), but that can be hard to achieve. Data will tend to reflect society’s biases. Unfortunately, in detecting and combating pernicious bias, “there is no simple metric to measure fairness that a software engineer can apply. … Fairness is a human, not a mathematical, determination, grounded in shared ethical beliefs.”95 Broad diversity of perspectives (both in lived experience and professional training) on the teams developing an algorithm and overseeing a model’s training can eliminate blind spots in programming and curating the training data. A Brookings Institution paper suggests use of a “bias impact statement” to mitigate harmful bias.96 And FTC Commissioner Slaughter has warned, “As an enforcer, I will see self-testing [for unlawful credit discrimination] as a strong sign of good-faith efforts at legal compliance, and I will see a lack of self-testing as indifference to alarming credit disparities.”97
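What self-testing can look like in practice may be easiest to see in a minimal sketch. The Python example below uses hypothetical data and group labels; the 0.8 threshold reflects the traditional “four-fifths rule” used in US disparate-impact screening. It computes each group’s selection rate relative to the most-favored group: a low ratio flags a disparity worth investigating, but it is a screening statistic, not a legal conclusion.

    from collections import defaultdict

    def selection_rates(decisions):
        # Per-group approval rate from (group, approved) pairs.
        totals = defaultdict(int)
        approvals = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    def adverse_impact_ratios(decisions):
        # Each group's selection rate relative to the most-favored group.
        # Ratios below 0.8 are a traditional red flag calling for closer
        # statistical and legal review.
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical audit extract: (demographic group, loan approved?).
    audit_sample = ([("A", True)] * 80 + [("A", False)] * 20
                    + [("B", True)] * 55 + [("B", False)] * 45)

    for group, ratio in sorted(adverse_impact_ratios(audit_sample).items()):
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")

Running audits of this kind periodically, and documenting them, is one way to generate the evidence of good-faith self-testing that Commissioner Slaughter describes.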
Moreover, even if the data are representative and an otherwise good fit for training an AI model at one point in time, society continues to evolve. This evolution can degrade the model’s performance over time (so-called “concept/model drift”). Your company should monitor potential drift in the AI systems it develops and uses, retraining its models on fresh data when necessary.98
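One lightweight form such monitoring can take is sketched below, assuming only that a reference sample of model scores is retained from validation time. The sketch uses the population stability index, a statistic long used in credit modeling to compare the distribution of recent production scores against a reference; the thresholds and sample data are illustrative, and a real monitoring regime would track accuracy and fairness metrics as well.

    import math

    def population_stability_index(expected, actual, bins=10):
        # Compare the binned distribution of recent scores ("actual")
        # against a reference sample ("expected"). Common rule of thumb:
        # below 0.1 stable; 0.1-0.25 watch; above 0.25 investigate.
        lo = min(min(expected), min(actual))
        hi = max(max(expected), max(actual))
        width = (hi - lo) / bins or 1.0

        def shares(sample):
            counts = [0] * bins
            for x in sample:
                counts[min(int((x - lo) / width), bins - 1)] += 1
            # A tiny floor keeps the logarithm defined for empty bins.
            return [max(c / len(sample), 1e-6) for c in counts]

        e, a = shares(expected), shares(actual)
        return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

    # Hypothetical scores: validation-time reference vs. recent production.
    reference = [i / 100 for i in range(100)]                # roughly uniform
    recent = [min(1.0, i / 100 + 0.15) for i in range(100)]  # shifted upward

    psi = population_stability_index(reference, recent)
    if psi > 0.25:
        print(f"PSI = {psi:.3f}: significant shift; investigate and consider retraining")
    else:
        print(f"PSI = {psi:.3f}: score distribution looks stable")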
Other Considerations: Trade-Offs and Outsourcing

Risk assessment and mitigation will involve trade-offs. Accuracy and explainability may clash (as in the black-box radiology/cancer prediction example). Accuracy and fairness may be in tension (hypothetically, for instance, predictions that consider a person’s race or gender might be more accurate, at least in some sense, but might also be unfair). Different aspects of fairness may conflict. When no acceptable trade-off can be found, should your company still release or employ the application?99

Finally, if your company outsources the design or development of an AI system, that may not relieve it from liability for assessing and mitigating risk.100 In procurement contracts, be sure to “include contractual terms specifying what categories of information will be accessible to what categories of individuals [both inside your company and your customers] who may seek information about the design, operation, and results of the A/IS.”101 And be sure your vendor’s risk assessment and mitigation processes are as rigorous as your own.

Oversight

Process and Structure

The novelty, complexity and often opacity of AI systems may require changing how your company oversees risk assessment and mitigation. Regulators are signaling they want senior management to pay close attention to potentially risky AI applications. For example, the ICO recommends:

    Your senior management should be responsible for signing-off [on] the chosen approach to manage discrimination risk and be accountable for [AI’s] compliance with data protection law. While they are able to leverage expertise from technology leads and other internal or external subject matter experts, to be accountable[,] your senior leaders still need to have a sufficient understanding of the limitations and advantages of the different approaches. This is also true for [data protection officers] and senior staff in oversight functions, as they will be expected to provide ongoing advice and guidance on the appropriateness of any measures and safeguards put in place to mitigate discrimination risk.102

Yet, the novelty and complexity of AI may mean that your company’s existing compliance structures are not well-suited to oversee the development, procurement and use of AI. If oversight responsibilities are spread across different functions or teams, nobody may have a broad enough picture to ensure regulatory compliance. Problems may be missed if they fall into the gaps between teams or if the responsibility is shared but imperfectly coordinated. Diffusion of responsibility may also mean diffusion of expertise in a new, complicated and rapidly evolving area. You need to ask whether the current compliance structure can assure that senior management has the information needed for the accountability that regulators expect.

You (and your board) also need to consider whether the board has duties to monitor AI regulatory compliance103 and, if so, whether your board has sufficient expertise to navigate any major—and certainly any mission-critical—risks presented by AI.
Explainability for Oversight

Having the broadest possible set of explanations for each system will facilitate oversight. Even more than with risk assessment and mitigation, the explainability of AI models is important to addressing the oversight challenges they pose. If the developers, purchasers and users of an AI system can explain its operations or why they are comfortable with its predictions, you and others performing oversight will have greater confidence in the model’s accuracy and its compliance with legal requirements.104

When asking for explanations, be sure to inquire into their bases. The system’s operators may not have trained or developed the model. Furthermore, “few, if any, AI systems are built from the ground up, using components and code that are wholly the creation of the AI developers themselves.”105 An AI system may rely on a mix of open-source and proprietary components and code. The proprietary elements may blend customized and commercial off-the-shelf modules. The customized portions may have been produced inside your company or by vendors. In short, you may need to keep “peeling the onion” to arrive at well-substantiated explanations for internal or public consumption.

Documentation

Regulation of AI also should spur reassessment of your company’s document-retention policies. For one thing, regulators will demand documentation about the development and use of certain AI systems. For another, records likely will be needed to defend against liability for harms allegedly caused by AI.

Requirements of Regulations and Standards

The ICO has been particularly detailed in describing the GDPR’s documentation requirements for users of AI involving personal data. The ICO advises such users to record their risk assessment and mitigation decisions (especially regarding trade-offs among risks), lines of accountability for decisions about trade-offs, and outcomes of these decisions “to an auditable standard.”106 The ICO goes on to recommend that these records include:

●   Reviews of the risks to the individuals whose personal data is processed.

●   How trade-offs among risks and values were identified and weighed.

●   The rationale for choosing among technical approaches (if applicable).

●   Which factors were prioritized and the reasons for the final decision.

●   “[H]ow the final decision fits within your overall risk appetite.”107

In addition, the ICO explains that the GDPR requires users of AI involving personal data “to keep a record of all decisions made by an AI system … [,] includ[ing] whether an individual requested human intervention, expressed any views, contested the decision, and whether you changed the decision as a result.”108 The ICO also urges AI users to analyze this information for potential problems.109
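What such a record might contain is easiest to see in a sketch. The Python schema below is hypothetical (the field names and identifiers are illustrative, and a production schema should be designed with counsel and your data protection officer), but it captures the elements the ICO lists for each AI-assisted decision:

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class AIDecisionRecord:
        # One auditable entry per AI-assisted decision, loosely tracking
        # the elements the ICO highlights; field names are illustrative.
        decision_id: str
        model_version: str
        decision: str
        human_intervention_requested: bool = False
        views_expressed: str = ""
        decision_contested: bool = False
        decision_changed_on_review: bool = False
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = AIDecisionRecord(
        decision_id="2021-000123",          # hypothetical identifiers
        model_version="credit-risk-v4.2",
        decision="declined",
        human_intervention_requested=True,
        decision_contested=True,
    )
    print(json.dumps(asdict(record), indent=2))  # append to an auditable log

Aggregating these records, for example by tracking how often contested decisions are reversed on human review, also supports the ICO’s suggestion to analyze the information for potential problems.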
Similar documentation mandates should be anticipated in other jurisdictions and from standards-setting organizations. For instance, EU country data protection agencies are likely to interpret the GDPR as the ICO has. Moreover, for high-risk AI applications, the European Commission would mandate provider retention for ten years of extensive technical documentation of the system’s development process, design, training, testing, validation, and risk management system as well as the provider’s compliance program.110 The Commission also proposed that providers and users retain logs of such systems’ operations to ensure traceability.111 Beyond written policies for the operation of autonomous and intelligent systems,112 the IEEE proposes:

    Engineers should be required to thoroughly document the end product and related data flows, performance, limitations, and risks of A/IS. Behaviors and practices that have been prominent in the engineering processes should also be explicitly presented, as well as empirical evidence of compliance and methodology used, such as training data used in predictive systems, algorithms and components used, and results of behavior monitoring. Criteria for such documentation could be: auditability, accessibility, meaningfulness, and readability.113

Defensive Document Retention

Even if a regulator or standard does not compel documentation, your company may want to record how its AI systems were developed, trained and used. Even properly designed, trained and functioning AI systems will make mistakes. Their outputs come with a specified probability of being correct—not a certainty—which means they also have a specified probability of being incorrect.

If the stakes of an error are high enough, sooner or later, your company should expect a regulatory investigation, a lawsuit or both. The harm from the error will be obvious. Whether the res ipsa loquitur doctrine114 technically applies or not, the novelty, complexity and opacity of AI systems raise the risk that the factfinder will infer your company’s liability from the obvious harm itself.
Even more than for familiar, less complicated, more explainable technologies, your company will need to produce evidence of its due care in developing (or procuring), training and using the algorithm. Similarly, if your company’s AI system leads to prohibited disparate-impact discrimination, your company will want to demonstrate its good-faith efforts to prevent this outcome. Document-retention policies must balance these risks against the problems of increased preservation.

Conclusion

The arrival of AI (and its regulation) means your work is cut out for you. The legal landscape is changing rapidly and requires monitoring; yet, the direction in which we are heading is clear enough to define the tasks at hand. Careful risk assessments, mitigation, attention to oversight, and documentation will help your company stay ahead of the curve.

Endnotes
1. Darrel Pae, Katerina Kostaridi and Elliot S. Rosenwald provided research assistance.
2. Relatively unfamiliar to a Western audience, Go is a territorial game that has been played for thousands of years. Although the “rules are
   quite simple,” the “strategic and tactical possibilities of the game are endless,” which makes Go an “extraordinary” “intellectual challenge.”
   The International Go Federation, About Go (July 3, 2010), https://www.intergofed.org/about-go/about-go.html.
3. Sundar Pichai, Why Google Thinks We Need to Regulate AI, Fin. Times (Jan. 19, 2020), https://www.ft.com/content/3467659a-386d-11ea-
   ac3c-f68c10993b04.
4. See, e.g., Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014); Max Tegmark, Life 3.0: Being Human in the Age of Artificial
   Intelligence (2017); Future of Life Inst., Benefits & Risks of Artificial Intelligence, https://futureoflife.org/background/benefits-risks-of-artificial-
   intelligence/ (last visited Mar. 3, 2021).
5. The Committee on Standards in Public Life, Artificial Intelligence and Public Standards § 1.1, at 12 (2020), https://assets.publishing.
   service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF (Artificial
   Intelligence and Public Standards).
6. CompTIA, Emerging Business Opportunities in AI 2 (May 2019), https://www.comptia.org/content/research/emerging-business-
   opportunities-in-ai (Emerging Business Opportunities in AI).
7. See, e.g., Holbrook v. Prodomax Automation Ltd., No. 1:17-cv-219, 2020 WL 6498908 (W.D. Mich. Nov. 5, 2020) (denying summary
   judgment in suit for wrongful death of manufacturing plant worker killed by robot); Batchelar v. Interactive Brokers, LLC, 422 F. Supp. 3d
   502 (D. Conn. 2019) (holding that broker-dealer owed a duty of care in design and use of algorithm for automatically liquidating customer’s
   positions upon determination of margin deficiency in brokerage account); Nilsson v. Gen. Motors LLC, No. 4:18-cv-00471-JSW (N.D. Cal.
   dismissed June 26, 2018) (settled claim that car in self-driving mode negligently changed lanes, striking motorcyclist).
8. Maya Oppenheim, Amazon Scraps “Sexist AI” Recruitment Tool, Independent (Oct. 11, 2018), https://www.independent.co.uk/life-style/
   gadgets-and-tech/amazon-ai-sexist-recruitment-tool-algorithm-a8579161.html.
9. Nat’l Inst. of Standards & Tech., Press Release, NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software (Dec. 19,
   2019), https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software; Patrick Grother
   et al., Nat’l Inst. of Standards & Tech., Interagency or Internal Report 8280, Face Recognition Vendor Test (FRVT): Part 3: Demographic
   Effects (2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.
10. See Kashmir Hill, Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match, NY Times (updated Jan. 6, 2021), https://www.
    nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html (reporting that the three people known to have been falsely
    arrested due to incorrect facial recognition matches are all Black men).
11. Ajay Agrawal et al., Prediction Machines: The Simple Economics of Artificial Intelligence 196 (2018) (citing Anja Lambrecht and Catherine
    Tucker, Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads (paper
    presented at NBER Summer Institute, July 2017)).
12. Joi Ito, Opening Event Part 1, https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/ (0:09:13-0:09:52).
13. See Jean-François Bonnefon et al., The Social Dilemma of Autonomous Vehicles, 352 Science 1573 (2016); Joi Ito & Jonathan Zittrain,
    Class 1: Autonomy, System Design, Agency, and Liability, https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-
    intelligence/ (~0:12:15).
14. 42 USC § 2000e-2(k)(1)(A)(i).
15. 15 USC §§ 1691-1691f.
16. Andrew Smith, Bureau of Consumer Prot., FTC, Using Artificial Intelligence and Algorithms, Business Blog (Apr. 8, 2020, 9:58 AM), https://
    www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms (Smith Business Blog Post). The FTC also
    might be able to use the unfairness prong of Section 5 of the FTC Act, 15 USC § 45(n), to attack algorithmic discrimination against protected
    classes. Rebecca Kelly Slaughter, Comm’r, FTC, Remarks at the UCLA School of Law: Algorithms and Economic Justice at 13-14 (Jan. 24,
    2020) (Slaughter Speech).
17. See, e.g., NY Dep’t of Fin. Servs., Ins. Circular Letter No. 1 (Jan. 18, 2019), https://www.dfs.ny.gov/industry_guidance/circular_letters/
    cl2019_01 (“[I]nsurers’ use of external data sources in underwriting has the strong potential to mask” prohibited bias.).
