The technical perspective on ethics: An overview and critique

Momin M. Malik
Senior Data Scientist – AI Ethics, Mayo Clinic
Fellow, Institute in Critical Quantitative, Computational, & Mixed Methodologies
Instructor, University of Pennsylvania School of Social Policy & Practice

Tuesday, March 29, 2022
The Center for Digital Ethics & Policy Annual International Symposium ‘22
Goals and outline

• With what lens do “technical” people approach ethics?
• What does this lens involve?
• Where does this lens come from?
• Where does it break down, and how?

Slides: https://MominMalik.com/cdep2022.pdf
Caveats

• We should not take any statements at face value as evidence of what the authors actually think: I myself frequently engage in strategic framing
  – Instead, we should take them as evidence of what sort of framings are deemed acceptable (and note that these phrasings are what passed peer review)
  – (One example I use, Corbett-Davies & Goel, does this explicitly, taking a turn halfway through the paper from math towards the limits of abstraction)
• Some of the framings I identify are already out of vogue; certainly, I raise issues when I am a reviewer
• My own perspective: hybrid, but primarily technical
The technical perspective
The background

• Zemel et al., 2013: “Information systems are becoming increasingly reliant on statistical inference and learning to render all sorts of decisions, including the setting of insurance rates, the allocation of police, the targeting of advertising, the issuing of bank loans, the provision of health care, and the admission of students.”
• Feldman et al., 2015: “Today, algorithms are being used to make decisions both large and small in almost all aspects of our lives, whether they involve mundane tasks like recommendations for buying goods, predictions of credit rating prior to approving a housing loan, or even life-altering decisions like sentencing guidelines after conviction.”
• Corbett-Davies & Goel, 2018: “In banking, criminal justice, medicine, and beyond, consequential decisions are often informed by statistical risk assessments that quantify the likely consequences of potential courses of action.”
The problem

• 2013: “This growing use of automated decision-making has sparked heated debate among philosophers, policy-makers, and lawyers. Critics have voiced concerns with bias and discrimination in decision systems that rely on statistical inference and learning.” [No citations]
• 2015: “How do we know if these algorithms are biased, involve illegal discrimination, or are unfair? These concerns have generated calls, by governments and NGOs alike, for research into these issues [17, 23].”
• 2018: “As the influence and scope of these risk assessments increase, academics, policymakers, and journalists have raised concerns that the statistical models from which they are derived might inadvertently encode human biases (Angwin et al., 2016; O’Neil, 2016).”
(Current language; 2021 FAccT)

• Singh et al.: “Deployment of machine learning algorithms to aid consequential decisions, such as in medicine, criminal justice, and employment, require revisiting the dominant paradigms of training and testing such algorithms.”
• Ron et al.: “Algorithmic decision making plays a fundamental role in many facets of our lives; criminal justice [10, 11, 29], banking [3, 18, 32, 40], online-advertisement [28, 30], hiring [1, 2, 4, 7], and college admission [5, 26, 36] are just a few examples. With the abundance of applications in which algorithms operate, concerns about their ethics, fairness, and privacy have emerged.”
• Black & Frederickson: “Deep networks are becoming the go-to choice for challenging classification tasks due to their remarkable performance on many high-profile problems: they are used everywhere from recommendation systems [15] to medical research [8, 21], and increasingly in even more sensitive contexts, such as hiring [46], loan decisions [5, 51], and criminal justice [25]. Their continued rise in adoption has led to growing concerns about the tendency of these models to discriminate against certain individuals [4, 10, 13, 44], or otherwise produce outcomes that are seen as unfair.”
(Current language; 2021 FAccT)

• Nanda et al.: “Automated decision-making systems that are driven by data are being used in a variety of different real-world applications. In many cases, these systems make decisions on data points that represent humans (e.g., targeted ads [44, 53], personalized recommendations [3, 50], hiring [47, 48], credit scoring [31], or recidivism prediction [9]). In such scenarios, there is often concern regarding the fairness of outcomes of the systems [2, 18].”
• Taskeen et al.: “Nowadays, machine learning algorithms can uncover complex patterns in the data to produce an exceptional performance that can match, or even surpass, that of humans… Algorithms are conceived and function following strict rules of logic and algebra; it is hence natural to expect that machine learning algorithms deliver objective predictions and recommendations. Unfortunately, in-depth investigations reveal the excruciating reality that state-of-the-art algorithmic assistance is far from being free of biases.”
Comments

• Some version of a view: Machine learning has so much promise! But this promise comes with a flip side of unintended harms and consequences that no one could have imagined, so we need to address it with the same tools we use to develop machine learning
• Even if not citing successes, these take the application of machine learning as a given, or as inevitable
• None acknowledge (for example) the possibility of refusal, or that sometimes this might be a better way forward
Vision of the future (Morozov, 2013)

“If Silicon Valley had a designated futurist, her bright vision of the near future… would go something like this: Humanity, equipped with powerful self-tracking devices, finally conquers obesity, insomnia, and global warming as everyone eats less, sleeps better, and emits more appropriately. The fallibility of human memory is conquered too, as the very same tracking devices record and store everything we do. Car keys, faces, factoids: we will never forget them again…”
Vision of the future (Morozov, 2013)

“Politics, finally under the constant and far-reaching gaze of the electorate, is freed from all the sleazy corruption, backroom deals, and inefficient horse trading. Parties are disaggregated and replaced by Groupon-like political campaigns, where users come together—once—to weigh in on issues of direct and immediate relevance to their lives, only to disband shortly afterward. Now that every word—nay, sound—ever uttered by politicians is recorded and stored for posterity, hypocrisy has become obsolete as well. Lobbyists of all stripes have gone extinct as the wealth of data about politicians—their schedules, lunch menus, travel expenses—are posted online for everyone to review…”
Vision of the future (Morozov, 2013)

“Crime is a distant memory, while courts are overstaffed and underworked. Both physical and virtual environments—walls, pavements, doors, log-in screens—have become ‘smart.’ That is, they have integrated the plethora of data generated by the self-tracking devices and social-networking services so that now they can predict and prevent criminal behavior simply by analyzing their users. And as users don’t even have the chance to commit crimes, prisons are no longer needed either. A triumph of humanism, courtesy of Silicon Valley.”
The approach

[This slide overlays excerpts of formal fairness definitions from the literature; the recoverable content is as follows.]

• Turn fairness into a measurable property. Berk et al. (2017) enumerate seven such fairness definitions, built from common statistics that can be computed from the two-by-two confusion matrix tabulating the joint distribution of decisions d(x) and outcomes y: false positive rate, false negative rate, precision, recall, and the area under the ROC curve (AUC), a popular measure among criminologists and practitioners (Skeem and Lowenkamp, 2016). For example, with confusion-matrix cells a, b, c, d, sensitivity (the true positive rate) = d / (b + d), and specificity (the true negative rate) = a / (a + c).
• Formalize the setting (Corbett-Davies & Goel): x can be partitioned into protected and unprotected features, x = (xp, xu), where the protected features might indicate an individual’s race or gender; a decision rule d: ℝ^p → {0, 1}, where d(x) = 1 means that action a is taken; and a target of prediction y ∈ {0, 1} (e.g., in the pretrial setting, set yi = 1 for those defendants who would have committed a violent crime if released, and yi = 0 otherwise).
• Impose parity constraints: if a parity constraint is met (e.g., the proportion of decisions that are positive is equal across groups, as in the legal notion of disparate impact), then the two groups are held to be treated fairly.
• Eliminate correlations with protected attributes (Zemel et al.), or add a regularization term to the training objective that quantifies the degree of bias or discrimination (Calders and Verwer, 2010; Kamishima et al., 2011), training algorithms to maximize accuracy while minimizing discrimination.
• Measure residual information: e.g., a mutual-information measure between the decision and the protected attribute, under which an algorithm A is fair with respect to S on D if the measure is small.
                                                                                              respect
                                                                                               2017;
                                                                                                                the
                                                                                                             atoway
                                                                                                           Edwards
                                                                                                                  the
                                                                                                                       machine
                                                                                                                   known        tocan
                                                                                                                         classification
                                                                                                                       that and a)
                                                                                                                                        learning
                                                                                                                                      the   be
                                                                                                                                     Storkey,
                                                                                                                                              decision    community
                                                                                                                                                  decisions:
                                                                                                                                                 optimized
                                                                                                                                                      2015;
                                                                                                                                                              maker, usingwho
                                                                                                                                                                   Feldman
                                                                                                                                                                                (Agarwal
                                                                                                                                                                                      at the&time
                                                                                                                                                                                standard
                                                                                                                                                                                     et  al.,random
                                                                                                                                                                                                2015;
                                                                                                                                                                                                     Calders,
                                                                                                                                                                                                           of the2009),
                                                                                                                                                                                                          Hardt
                                                                                                                                                                                                                      decision
                                                                                                                                                                                                                     etremove
                                                                                                                                                                                                                         al.,
                                                                                                                                                                                                                              wherehas they ”massage” the t
                                                                                                                                                                                                                                2016;
                                              access                                                                X visible
                                                                                        predictors in1 machine learning and          features        x.
                                                                                                                                                      X      Third,
                                                                                                                                              1 b) can be related to LR+ .we     define          data    labels   to
                                                                                                                                                                                                           variables     X    and  theY discrimination with t
outline                                       Kamiran         et    al.,  2013;    Pedreshi      et = al.,    2008;    ZafarM  n,ket  =al.,drawn
                                                                                                                                               2015,randomly2017;
                                                                                                                                                               Mn,kZemel )          etthe
                                                                                                                                                                                        al.,population
                                                                                                                                                                                               2013).      Formally,      parity     in
                                sensitivity   that    take
                                                   d/(b + d)  on     values   X    =  x   and
                                                                                        The     Y
                                                                                               standard   y
                                                                                                        |Xalsofor
                                                                                                              +    an
                                                                                                               notions  individual
                                                                                                                            of  accuracy         of  a    classifier     fail from
                                                                                                                                                                                to   do  the     possible     changes.
                                                                                                                                                                                                               of   interestThe     initial
                                                                                                                                                                                                                                (e.g.,      step involves rank
              LR+ (C, X ) =                   the
                                              = proportion               of positive second
                                                                                         decisions,          0 | known                     |X0 |
                                                                                                                                as demographic                 parity        (Feldman         ettraining
                                                                                                                                                                                                   al., 2015),     means based that on the posterior proba
                                                                                                    (as          n2X0+ earlier)
                                                                                                           discussed                      and    using
                                                                                                                                                    n2X0LR           directly       fails to                  examples
                              1 specificity   the    population          of  defendants       for   whom        pretrial        decisions         must        be    made).          We   use    X     and    X     to   denote     the
                                              In c/    ( a + c)                                                                                                  +                                 p            u
                                                   particular,         we include satisfyin thisthe   definition
                                                                                                          first     Xany measure
                                                                                                            1 constraint.                     1 that   Xcan be computed from                            the two-by-two           con-
                                              projections of x onto its protected                         and     unprotected              components.               Fourth,
                                                                                                                                                                          )          we define   of positive
                                                                                                                                                                                                     the truelabels         obtained from a Naive-Baye
                                                                                                                                                                                                                    risk function
TheWe    can now restate the 80% rule in fusion
      technical                               terms ofmatrixa data set.tabulating theThe      joint
                                                                                                  error   Pr(d(X)
                                                                                                        distribution
                                                                                                            measure
                                                                                                              +           =
                                                                                                                          weM  1
                                                                                                                               n w
                                                                                                                                of|
                                                                                                                                seek X=    ) =
                                                                                                                                     decisions   Pr(d(X)
                                                                                                                                        pturns outd(x)    to   M
                                                                                                                                                              be    =
                                                                                                                                                                   and
                                                                                                                                                                   the
                                                                                                                                                                    n w1), outcomes
                                                                                                                                                                         balanced      error  y  for   a   group.      Berk     et (2)
                                                                                                                                                                                                                                    al.
perspective                                   r(x) = Pr(Y = 1 | X = x). Finally,                        |X0 | we +      note that|X            0 |
                                                                                                                                             many         risk assessment algorithms,            fier trained
                                                                                                                                                                                                            instead on the     original dataset. They the
                                                                                                                                                                                                                         of simply
     D EFINITION 3.3 (D ISPARATE I MPACT      (2017)       enumerate
                                                    ). A data a              seven       rate
                                                                                       such   BER   .
                                                                                               statistics,         including
                                                                                                                 n2X                  false
                                                                   set has dis- a or a , produce a risk score s(x) that may be viewed as the
                                                                                                                        0                       positive
                                                                                                                                                    n2X     0    rate,      false     negative       rate,   precision,
                                                                                                                                                                                                      set of highest-ranked
                                                                                                                                                                                                                               recall,
                                              outputting
                                              and    parity      of decision
                                                                      false positive       rates
                                                                                              1 means that                                        Xinclude the area underan                           approximation            of thenegatively-labeled item
 parate impact if                             and     the proportion                 0
                                                                                of decisions       that 1are X      positive. +       We  1 also                                                   the ROC curve (AUC),
                                              true    risk    r(x).     In  reality,    s(x) D EFINITION
                                                                                                may     only  +  4.1
                                                                                                                  be      (BER).
                                                                                                                            y
                                                                                                                       looselyn  =       Let
                                                                                                                                      related  f  :  Y
                                                                                                                                                    to     !
                                                                                                                                                           y
                                                                                                                                                           the
                                                                                                                                                             n .Xtruebe  a   predictor
                                                                                                                                                                          risk,      and   of
                                                                                                                                                                                            s(x) themayprotected
                                                                                                                                                                                                            not   even set  and
                                                                                                                                                                                                                          lie       change their labels. The
                                                                                                                                                                                                                               in and
                                                                                                                                                                                                                                   the
                                              a popular measure among Xcriminologists       from    Y.  |X
                                                                                                         The
                                                                                                                   and+ practitioners
                                                                                                             0 |balanced        error |X0rate | BER examining
                                                                                                                                                           of  f   on
                                                                                                                                                                          the fairness
                                                                                                                                                                       distribution
                                                                                                                                                                                                of algorithms (Skeem
                                         1    interval      [0,  1]   (e.g.,   s(x)   2     Pr(d(X)
                                                                                          {1,  2, . . . ,    =
                                                                                                           10}  1   |
                                                                                                                  may Y   =    0,
                                                                                                                           representX     )  =a  Pr(d(X)
                                                                                                                                                 risk      decile).=   1   |
                                                                                                                                                                          To  Y  go=  0).
                                                                                                                                                                                      fromD      this
                                                                                                                                                                                                risk    set
                                                                                                                                                                                                      scores is   chosen
                                                                                                                                                                                                                 to           to
                                                                                                                                                                                                                      decisions,  make
                                                                                                                                                                                                                                   (3)it the proportion of
Where does                                    1.25
                         LR+ (C, X ) > = Lowenkamp,                  2016).4
                                                                                                                 n2X    0              p        n2X     0

this lens come
from,
                        •        Define a metric that has a provable relationship to the “80% rule” (Feldman et al.):
                                        t     is  common
                                                    Two            to  simply
                                                              of thepretrial
                                                                          above example,
                                                                                         over thethe
                                                                                   threshold
                                                                                        This
                                                                                                     pairscore,
                                                                                                 property
                                                                                     measures—false
                                                                                         conditioned
                                                                                                             ( X, Y )setting
                                                                                                                       is defined
                                                                                                                   follows
                                                                                                           errorpositive
                                                                                                                  of  f . In     from
                                                                                                                                 rate,
                                                                                                                               other
                                                                                                                                       as the
                                                                                                                                     d(x)
                                                                                                                                       words,
                                                                                                                                              =   (unweighted)
                                                                                                                                                  1
                                                                                                                                             the linear
                                                                                                                                             and       if   and
                                                                                                                                                      the proportiononly average
                                                                                                                                                                              if
                                                                                                                                                                   classification
                                                                                                                                                                                      class- t labels
                                                                                                                                                                                   s(x)
•  Express anti-classification, classification parity, and calibration (Corbett-Davies & Goel):

   [Screenshot excerpt, Feldman et al. (2015):]

   "It will be convenient to work with the reciprocal of LR+, which we denote by

       DI = 1 / LR+(C, X).

   This will allow us to discuss the value associated with disparate impact before the threshold is applied. [...]

   Multiple classes. Disparate impact is defined only for two classes. In general, one might imagine a multivalued class [...]

   Definition 4.2 (Predictability). X is said to be ε-predictable from Y if there exists a function f : Y → X such that BER(f(Y), X) ≤ ε. This motivates our definition of ε-fairness, as a data set that is not predictable."

   [Screenshot excerpt, Corbett-Davies & Goel:]

   "Anti-classification. The first definition we consider is anti-classification, meaning that decisions do not consider protected attributes. Formally, anti-classification requires that:

       d(x) = d(x′) for all x, x′ such that x_u = x′_u.    (1)

   Some authors have suggested stronger notions of anti-classification that aim to guard against the use of unprotected traits that are proxies for protected attributes (Bonchi et al., 2017; Grgic-Hlaca et al., 2016; Johnson et al., 2016; Qureshi et al., 2016). We will demonstrate, however, that the exclusion of any information, including features that are explicitly protected, can lead to discriminatory decisions. As a result, it is sufficient for our purposes to consider the weak version of anti-classification articulated in Eq. (1).

   Classification parity. The second definition of fairness we consider is classification parity, meaning that some given measure of classification error is equal across groups defined by the protected attributes. Such measures have received considerable attention in the machine learning community (Agarwal et al., 2018; Calders and Verwer, 2010; Chouldechova, 2017; Edwards and Storkey, 2015; Feldman et al., 2015; Hardt et al., 2016; Kamiran et al., 2013; Pedreshi et al., 2008; Zafar et al., 2015, 2017; Zemel et al., 2013). Demographic parity (Feldman et al., 2015) means that

       Pr(d(X) = 1 | X_p) = Pr(d(X) = 1),    (2)

   and parity of false positive rates means that

       Pr(d(X) = 1 | Y = 0, X_p) = Pr(d(X) = 1 | Y = 0).    (3)

   In our running pretrial example, demographic parity means that detention rates are equal across race groups; and parity of false positive rates means that among defendants who would not have gone on to commit a violent crime if released, detention rates are equal across race groups. Demographic parity is not strictly speaking a measure of "error", but we nonetheless include it under classification parity since it can be computed from a confusion matrix. We note that demographic parity is also closely related to anti-classification, since it requires that a classifier's predictions d(X) be independent of protected group membership X_p.

   Calibration. Finally, the third definition of fairness we consider is calibration, meaning that outcomes should be independent of protected attributes conditional on risk score. In the pretrial context, calibration means that among defendants with a given risk score, the proportion who would reoffend if released is the same across race groups. Formally, given risk scores s(x), calibration is satisfied when

       Pr(Y = 1 | s(X), X_p) = Pr(Y = 1 | s(X)).    (4)

   Note that if s(x) = r(x), then the risk scores trivially satisfy calibration.

   2.3 Utility functions and threshold rules [...]"
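The displayed conditions are directly checkable from data. A minimal Python sketch (toy data and function names are mine, not from either paper), treating d as binary decisions, y as outcomes, s as discrete risk scores, and group as protected-group membership:

```python
def rate(flags):
    """Fraction of 1s in a list of 0/1 flags (0.0 if the list is empty)."""
    return sum(flags) / len(flags) if flags else 0.0

def demographic_parity_gap(d, group):
    """Eq. (2): gap in positive-decision rates between groups 0 and 1."""
    return abs(rate([di for di, g in zip(d, group) if g == 0])
               - rate([di for di, g in zip(d, group) if g == 1]))

def fpr_parity_gap(d, y, group):
    """Eq. (3): gap in false positive rates, i.e. decision rates among Y = 0."""
    fpr = lambda gv: rate([di for di, yi, g in zip(d, y, group)
                           if yi == 0 and g == gv])
    return abs(fpr(0) - fpr(1))

def calibration_gap(s, y, group, score):
    """Eq. (4): gap in Pr(Y = 1) between groups, among cases at a given score."""
    r = lambda gv: rate([yi for si, yi, g in zip(s, y, group)
                         if si == score and g == gv])
    return abs(r(0) - r(1))

# Toy example: risk scores in {0, 1, 2}; the threshold rule detains (d = 1)
# only at the highest score
group = [0, 0, 0, 0, 1, 1, 1, 1]
s     = [2, 2, 0, 1, 2, 2, 0, 1]
y     = [1, 0, 0, 1, 1, 1, 0, 0]
d     = [1 if si == 2 else 0 for si in s]

print(demographic_parity_gap(d, group))       # 0.0
print(fpr_parity_gap(d, y, group))            # 0.5
print(calibration_gap(s, y, group, score=2))  # 0.5
```

Note that the toy decisions satisfy demographic parity exactly while violating both false-positive-rate parity and calibration, illustrating that the three conditions are genuinely distinct.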
The wall of the technical perspective

•  Alexandra Chouldechova (2017) showed that, when base rates differ across groups, we cannot simultaneously satisfy three specific metrics: error rate balance for false positives (equal false positive rate across groups), equal opportunity (equal false negative rate across groups), and predictive parity (equal precision [positive predictive value] across groups)
   –  (Partially what the COMPAS debate is about)
•  So now, ML moves to: rely on domain experts to determine what fairness metric we should use
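Chouldechova's impossibility follows from an identity relating the metrics to the base rate p: FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR). A small numeric sketch (the numbers are illustrative, not taken from COMPAS):

```python
def implied_fpr(p, ppv, fnr):
    """Chouldechova's (2017) identity: the false positive rate forced by a
    base rate p, positive predictive value (PPV), and false negative rate (FNR)."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two groups with different base rates but identical PPV and FNR:
ppv, fnr = 0.7, 0.3
fpr_a = implied_fpr(p=0.5, ppv=ppv, fnr=fnr)  # base rate 50%
fpr_b = implied_fpr(p=0.3, ppv=ppv, fnr=fnr)  # base rate 30%

print(round(fpr_a, 3))  # 0.3
print(round(fpr_b, 3))  # 0.129
# Holding predictive parity (equal PPV) and equal opportunity (equal FNR)
# fixed, unequal base rates force unequal false positive rates.
```

The sketch makes the trade-off concrete: the only way to equalize all three quantities is for the base rates themselves to be equal, which is outside the modeler's control.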
Landscape for fairness (Rodolfa et al.)

[Figure: the fairness tree, Figure 11.1 of Rodolfa et al. Caption excerpt: "...developed the fairness tree depicted in Figure 11.1. Although it certainly cannot provide a single 'right' answer for a given context, our..."]
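To give a flavor of how a decision aid like the fairness tree operates, here is a toy sketch; the two branch questions and the metric assignments below are my simplified paraphrase, not the actual tree:

```python
def suggest_metric(punitive, reaches_most):
    """Toy paraphrase of a fairness-tree walk. Punitive interventions make
    false positives the costly error; assistive ones make false negatives
    costly. Whether the intervention reaches most or only a few of the
    affected population shifts between rate-based and predictive-value-based
    parity metrics."""
    if punitive:
        return "FPR parity" if reaches_most else "FDR parity"
    return "FNR parity" if reaches_most else "FOR parity"

# E.g., a punitive intervention applied to a small fraction of cases:
print(suggest_metric(punitive=True, reaches_most=False))  # FDR parity
```

The point of the sketch is the structure, not the specific answers: the tree turns "which metric?" into a sequence of domain questions that technical people must hand off to domain experts.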
Where this fits in

•  This diagram is supremely useful, and can and should be a basis for auditing/formal analysis when we choose to use machine learning (or when we analyze an existing system)
•  From a technical perspective, this is maybe as far as we can go
•  But that doesn't mean that there's not a lot further to go
The idea of limits to abstraction is novel

•  Reads as a fairly straightforward STS primer for outsiders
•  But for some CS insiders, it was earth-shattering to consider the limits to abstraction
•  Still, even for many of those people, it represented an endpoint; having pointed out the limits of abstraction, we are done, and there's nothing more to do (other than get back to working on those

[Screenshot: first page of Selbst, boyd, Friedler, Venkatasubramanian, & Vertesi, "Fairness and Abstraction in Sociotechnical Systems," FAT* '19 (Atlanta, GA, 2019): the paper identifies five "traps" that fair-ML work falls into when bedrock computer science concepts like abstraction and modular design meet the societal contexts surrounding decision-making systems, drawing on studies of sociotechnical systems in Science and Technology Studies.]
conclusion                                                                           CCS CONCEPTS
                                                                                                                                                                                       bound the system of interest narrowly. They consider the machine

                   abstractions).
                                                                                                                                                                                       learning model, the inputs, and the outputs, and abstract away any
                                                                                     • Applied computing → Law, social and behavioral sciences;                                        context that surrounds this system.
                                                                                     • Computing methodologies → Machine learning;                                                        We contend that by abstracting away the social context in which
                                                                                                                                                                                       these systems will be deployed, fair-ML researchers miss the broader
                                                                                     KEYWORDS                                                                                          context, including information necessary to create fairer outcomes,

                 • I.e., could exist within the same
                                                                                     Fairness-aware Machine Learning, Sociotechnical Systems, Inter-                                   or even to understand fairness as a concept. Ultimately, this is be-
                                                                                     disciplinary                                                                                      cause while performance metrics are properties of systems in total,

References                                                                           ACM Reference Format:
                                                                                     Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubrama-
                                                                                                                                                                                       technical systems are subsystems. Fairness and justice are prop-
                                                                                                                                                                                       erties of social and legal systems like employment and criminal
                                                                                                                                                                                       justice, not properties of the technical tools within. To treat fairness

                   assumptions of inevitability of using
                                                                                     nian, and Janet Vertesi. 2019. Fairness and Abstraction in Sociotechnical
                                                                                                                                                                                       and justice as terms that have meaningful application to technology
                                                                                                                                                                                       separate from a social context is therefore to make a category error,
                                                                                     Permission to make digital or hard copies of all or part of this work for personal or
                                                                                     classroom use is granted without fee provided that copies are not made or distributed
                                                                                                                                                                                       or as we posit here, an abstraction error.
                                                                                     for pro�t or commercial advantage and that copies bear this notice and the full citation             In this paper, we identify �ve failure modes of this abstraction

                   abstractions/ building systems
                                                                                     on the �rst page. Copyrights for components of this work owned by others than ACM                 error. We call these the Framing Trap, Portability Trap, Formalism
                                                                                     must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
                                                                                     to post on servers or to redistribute to lists, requires prior speci�c permission and/or a        Trap, Ripple E�ect Trap, and Solutionism Trap. Each of these traps
                                                                                     fee. Request permissions from permissions@acm.org.                                                arises from failing to consider how social context is interlaced with
                                                                                     FAT* ’19, January 29–31, 2019, Atlanta, GA, USA                                                   technology in di�erent forms, and thus the remedies also require a
                                                                                     © 2019 Association for Computing Machinery.
                                                                                     ACM ISBN 978-1-4503-6125-5/19/01. . . $15.00                                                      deeper understanding of "the social" to resolve problems [1]. After
                                                                                     https://doi.org/10.1145/3287560.3287598                                                           explaining each of these traps and their consequences, we draw on

                                                                                                                                                                                  59

                 The technical view of ethics: An overview and critique   17 of 30                                                                                          Slides: https://MominMalik.com/cdep2022.pdf
Where does this lens come from, and how do people break out?
                 The technical view of ethics: An overview and critique   18 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Phil Agre [ey-gree]

•   PhD in 1989 from MIT (EECS)
•   Influential works:
    –   "Surveillance and Capture: Two Models of Privacy" (1994)
    –   "The Soul Gained and Lost: Artificial Intelligence as a Philosophical Project" (1995)
    –   Computation and Human Experience (1997)
    –   Red Rock Eater News Service (1996–2002)
•   Former associate professor at UCLA
    –   Sister filed missing persons report in October 2009, after not seeing him since spring 2008 and learning he had abandoned his job and apartment
    –   Found by the LA County Sheriff's Department in January 2010
•   Won't focus on him personally, but instead on his 1997 piece "Towards a critical technical practice: Lessons learned trying to reform AI"

                 The technical view of ethics: An overview and critique   19 of 30                              Slides: https://MominMalik.com/cdep2022.pdf
From AI to social sciences

"My ability to move intellectually from AI to the social sciences — that is, to stop thinking the way that AI people think, and to start thinking the way that social scientists think — had a remarkably large and diverse set of historical conditions. AI has never had much of a reflexive critical practice, any more than any other technical field. Criticisms of the field, no matter how sophisticated and scholarly they might be, are certain to be met with the assertion that the author simply fails to understand a basic point. And so, even though I was convinced that the field was misguided and stuck, it took tremendous effort and good fortune to understand how and why."

                 The technical view of ethics: An overview and critique   20 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Autobiographical account of a crisis

"My college did not require me to take many humanities courses, or learn to write in a professional register, and so I arrived in graduate school at MIT with little genuine knowledge beyond math and computers. This realization hit me with great force halfway through my first year of graduate school…

"fifteen years ago, I had absolutely no critical tools with which to defamiliarize those ideas — to see their contingency or imagine alternatives to them. Even worse, I was unable to turn to other, nontechnical fields for inspiration. As an AI practitioner already well immersed in the literature, I had incorporated the field's taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism."

                 The technical view of ethics: An overview and critique   21 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Some other perspectives

•   Malazita & Resetar, 2019, "Infrastructures of abstraction: How computer science education produces anti-political subjects"

•   Hanna Wallach, 2018: "Spoiler alert: The punchline is simple. Despite all the hype, machine learning is not a be-all and end-all solution. We still need social scientists if we are going to use machine learning to study social phenomena in a responsible and ethical manner."
                 The technical view of ethics: An overview and critique   22 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Critical "awakening"

"At first I found [critical] texts impenetrable, not only because of their irreducible difficulty but also because I was still tacitly attempting to read everything as a specification for a technical mechanism… My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms…"
                 The technical view of ethics: An overview and critique   23 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Critical "awakening"

"I still remember the vertigo I felt during this period; I was speaking these strange disciplinary languages, in a wobbly fashion at first, without knowing what they meant — without knowing what sort of meaning they had…

"In retrospect, this was the period during which I began to 'wake up', breaking out of a technical cognitive style that I now regard as extremely constricting."
                 The technical view of ethics: An overview and critique   24 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Theorizing this process

•   This bears remarkable resemblances to Paulo Freire's idea of critical consciousness: become aware of our place in society to work for its betterment

•   Follow-up work in education (specifically, Mezirow on "perspective transformation," 1978) theorizes this process
                 The technical view of ethics: An overview and critique   25 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Perspective transformation vs. Agre

✓   1. A disorienting dilemma
✓   2. Self-examination with feelings of guilt or shame
✓   3. A critical assessment of assumptions
✗   4. Recognition that one's discontent and process of transformation are shared and that others have negotiated a similar change
✗   5. Exploration of options for new roles, relationships, and actions
?   6. Planning of a course of action
?   7. Acquisition of knowledge and skills for implementing one's plans
?   8. Provisionally trying out new roles
?   9. Building of competence and self-confidence in new roles and relationships
?   10. A reintegration into one's life on the basis of conditions dictated by one's new perspective

                 The technical view of ethics: An overview and critique   26 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Who experiences, how and why?

•   Mezirow doesn't get at who experiences a perspective transformation
    –   Empirical evidence/experience seems to be insufficient
    –   Having a "disorienting dilemma," but then reflecting on it

•   Work after Mezirow (Taylor & Snyder, 2012) went beyond the "rationalist" framing, recognized that self-actualization is not the only goal, and recognized the key role of interpersonal relationships
                 The technical view of ethics: An overview and critique   27 of 30   Slides: https://MominMalik.com/cdep2022.pdf
Ethics and interventions

•   I contend: connecting to critical consciousness gives us a roadmap for "ethics" more important than ethical frameworks or formal ethical reasoning; or at least necessary, if not sufficient

•   Interventions: build community with others who have negotiated a similar change; form coalitions; leverage our privilege, e.g., to oppose gatekeeping and bring in others; support the right of refusal; mentor others; give feedback to invest spontaneous actions with biographical significance

•   Insofar as we maintain civilization on the current scale, abstraction is necessary: just because some things aren't currently formalized doesn't mean they can't be. Even developing critical consciousness could perhaps be included in formal education (Trbušić, 2014)
    –   As a minimum of where, beyond a technical perspective, we can try to get technical people: allowing for the right of refusal, and for the option of opposing adoption of ML in any given case
                 The technical view of ethics: An overview and critique   28 of 30             Slides: https://MominMalik.com/cdep2022.pdf
Assumptions in social research

Ontology:
–   Positivism: Reality is independent of, and prior to, human conception of it, and apprehensible
–   Postpositivism: Reality is "real" but only imperfectly and approximately apprehensible
–   Critical theory et al.: There is a reality, but it is secret/hidden
–   Constructivism: Relativism
–   Participatory: Participative; multiple co-created realities

Epistemology:
–   Positivism: Singular, perspective-independent, neutral, atemporal, universally true findings
–   Postpositivism: Findings are provisionally true, affected/distorted by society; multiple descriptions possible but equivalent
–   Critical theory et al.: Truth is mediated by value; how we come to know something matters for how meaningful it is
–   Constructivism: Transactional/subjectivist; co-created findings
–   Participatory: Come to know things through involving other people

Methodology:
–   Positivism: Experimental/manipulative; verification of hypotheses
–   Postpositivism: Falsification of hypotheses; some qual, but only in service of quant
–   Critical theory et al.: Dialogic/dialectical
–   Constructivism: Hermeneutical/dialectical
–   Participatory: Collaborative, action-oriented; flatten hierarchies, jointly decide to engage in action

Axiology:
–   Positivism: Quant knowledge, and the people who have it, have ultimate value
–   Postpositivism: Quant knowledge most valuable, but qual can serve it
–   Critical theory et al.: Marginalization is important; people who have it have unique insights
–   Constructivism: Value is relative; for us, understanding the process of construction is valuable
–   Participatory: Everyone is valuable; reflexivity, co-created knowledge, non-Western ways of knowing to combat erasure and dehumanization

Assumptions of social research paradigms (Malik & Malik, 2021). Based on Guba and Lincoln's (2005) "Basic beliefs (metaphysics) of alternative inquiry paradigms."
                 The technical view of ethics: An overview and critique                             29 of 30                                    Slides: https://MominMalik.com/cdep2022.pdf
Summary and conclusion

•   The technical perspective engenders a view where abstraction is the only legitimate way to engage with the world

•   It fails to inculcate awareness of, or appreciation of, the limits of abstraction, or the possibility of sometimes rejecting abstraction

•   Breaking out of this view is both difficult, requiring additional biographical inputs, and disorienting

•   But this is necessary to get people engaged in ethical reasoning

•   Ideally, this will go beyond what can pass as a "sociotechnical" perspective, to a fully constructivist, critical, and even participatory perspective

Thank you!
                 The technical view of ethics: An overview and critique   30 of 30   Slides: https://MominMalik.com/cdep2022.pdf
References

Agre, P. E. (1997). Towards a critical technical practice: Lessons learned trying to reform AI. In G. Bowker, S. L. Star, W. Turner, & L. Gasser (Eds.), Social science, technical systems, and cooperative work: Beyond the great divide (pp. 131–157). Lawrence Erlbaum Associates. https://doi.org/10.4324/9781315805849-14

Black, E., & Fredrikson, M. (2021). Leave-one-out unfairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 285–295). https://doi.org/10.1145/3442188.3445894

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. https://arxiv.org/abs/1808.00023

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15) (pp. 259–268). https://doi.org/10.1145/2783258.2783311

Malazita, J. W., & Resetar, K. (2019). Infrastructures of abstraction: How computer science education produces anti-political subjects. Digital Creativity, 30(4), 300–312. https://doi.org/10.1080/14626268.2019.1682616

Malik, M. M., & Malik, M. (2021). Critical technical awakenings. Journal of Social Computing, 2(4), 365–384. https://doi.org/10.23919/JSC.2021.0035

Mezirow, J. (1978). Perspective transformation. Adult Education Quarterly, 28(2), 100–110. https://doi.org/10.1177/074171367802800202

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

Nanda, V., Dooley, S., Singla, S., Feizi, S., & Dickerson, J. P. (2021). Fairness through robustness: Investigating robustness disparity in deep learning. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 466–477). https://doi.org/10.1145/3442188.3445910

Rodolfa, K. T., Saleiro, P., & Ghani, R. (2021). Bias and fairness. In I. Foster, R. Ghani, R. S. Jarmin, F. Kreuter, & J. Lane (Eds.), Big data and social science: Data science methods and tools for research and practice (pp. 281–312). CRC Press. https://doi.org/10.1201/9780429324383-11

Ron, T., Ben-Porat, O., & Shalit, U. (2021). Corporate social responsibility via multi-armed bandits. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 26–40). https://doi.org/10.1145/3442188.3445868

Selbst, A. D., boyd, d., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19) (pp. 59–68). https://doi.org/10.1145/3287560.3287598

Singh, H., Singh, R., Mhasawade, V., & Chunara, R. (2021). Fairness violations and mitigation under covariate shift. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 3–13). https://doi.org/10.1145/3442188.3445865

Taskesen, B., Blanchet, J., Kuhn, D., & Nguyen, V. A. (2021). A statistical test for probabilistic fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 648–665). https://doi.org/10.1145/3442188.3445927

Taylor, E. W., & Snyder, M. J. (2012). A critical review of research on transformative learning theory, 2006–2010. In E. W. Taylor & P. Cranton (Eds.), The handbook of transformative learning: Theory, research, and practice (pp. 37–55). Jossey-Bass.

Trbušić, H. (2014). Engineering in the community: Critical consciousness and engineering education. Interdisciplinary Description of Complex Systems, 12(2), 108–118. https://doi.org/10.7906/indecs.12.2.1

Wallach, H. (2018). Computational social science ≠ computer science + social data. Communications of the ACM, 61(3), 42–44. https://doi.org/10.1145/3132698

Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning fair representations. Proceedings of the 30th International Conference on Machine Learning, 28(3), 325–333. https://proceedings.mlr.press/v28/zemel13.html
                 The technical view of ethics: An overview and critique                                            31 of 30                                                Slides: https://MominMalik.com/cdep2022.pdf