Automated Decision-Making Systems in the Public Sector
An Impact Assessment Tool for Public Authorities

Michele Loi, in collaboration with
Anna Mätzener, Angela Müller, and Matthias Spielkamp
/ Executive Summary and Policy Recommendations

When deploying automated decision-making systems (ADMS) in the public sector, individual and societal trust in public authorities should be the ultimate benchmark. Given the unique context in which public authorities act, the use of ADMS must be accompanied by a systematic assessment of potential ethical implications, ensuring transparency and accountability vis-à-vis those affected. In collaboration with the University of Basel, AlgorithmWatch conducted a study on the use of artificial intelligence in public administration on behalf of the Canton of Zurich, Switzerland.1 As a result of this study, we developed a concrete and practicable impact assessment tool, ready to be implemented for the evaluation of specific ADMS by public authorities at different levels.

In the present report, we first outline the ethical foundation of our approach, from which specific questions for the evaluation of ADMS are derived. If the goal is to uphold the principles of harm prevention, autonomy, justice and fairness, and beneficence, there is no way around guaranteeing transparency, control, and accountability. The latter are not goals in themselves but unavoidable means of guaranteeing an ethical and accountable use of ADMS in the public sector. Based on that, we introduce a two-stage impact assessment procedure. It enables a triage of ADMS, indicating whether a system must be subject to additional transparency requirements. If this is the case, public authorities must provide a comprehensive transparency report. Lastly, we illustrate the use of the impact assessment tool by way of a fictional example.

/ Policy Recommendations

—— When ADMS are deployed in the public sector, authorities should be obliged to systematically evaluate and make transparent the potential risks of their use. These risks cannot be determined in a generalized manner but only through a case-by-case analysis. Thus, it should be mandatory for public authorities to conduct an impact assessment prior to and during the deployment of any ADMS.

—— The two-stage impact assessment procedure developed here provides a practicable and ready-to-use tool to generate transparency on these potential risks, based on seven underlying ethical principles. It enables a triage of ADMS, indicating whether a specific system must be subject to additional transparency requirements. If this is the case, public authorities must ensure that a comprehensive transparency report is provided, allowing for the evaluation of the system and its deployment over its entire life cycle. Transparency does not by itself ensure conformity with ethical requirements, but it is a necessary condition for achieving such conformity.

—— There should be a public register for all ADMS deployed in the public sector. Next to information on its purpose, the underlying model, and the developers and deployers of the system, this register should contain the results of the impact assessment—that is, if applicable, the transparency report. In cases where there are legitimate reasons for restricting access to the transparency report, information should be provided on the fora to which it is disclosed.
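As a purely illustrative sketch of the register recommendation, the entry below contains only the fields named above. The schema, field names, and example values are our own hypothetical assumptions, not a prescribed standard.

```python
# Hypothetical sketch of a machine-readable entry in a public ADMS register.
# All field names and example values are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class RegisterEntry:
    system_name: str
    purpose: str                     # what the ADMS is used for
    underlying_model: str            # e.g., "logistic regression"
    developers: list[str]            # who built the system
    deployers: list[str]             # which authority operates it
    impact_assessment_done: bool     # mandatory prior to and during deployment
    transparency_report_url: str | None = None  # public link, if disclosed
    # If the report is not fully public, name the oversight fora it is disclosed to.
    restricted_access_fora: list[str] = field(default_factory=list)


entry = RegisterEntry(
    system_name="Example benefits triage system (fictional)",
    purpose="Prioritizing manual review of social benefit requests",
    underlying_model="gradient-boosted decision trees",
    developers=["Vendor X (hypothetical)"],
    deployers=["Cantonal social services (hypothetical)"],
    impact_assessment_done=True,
    transparency_report_url=None,
    restricted_access_fora=["Cantonal data protection authority"],
)
print(entry)
```

Publishing entries in a structured form like this would also ease the independent external review by researchers, civil society, and journalists discussed in the introduction.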

1	Braun Binder, Nadja, et al., Einsatz Künstlicher Intelligenz in der Verwaltung: rechtliche und ethische Fragen, (Kanton Zürich; Universität
   Basel: February 2021). The final report [in German] is open access and available here: https://www.zh.ch/content/dam/zhweb/bilder-
   dokumente/themen/politik-staat/kanton/digitale-verwaltung-und-e-government/projekte_digitale_transformation/ki_einsatz_in_der_
   verwaltung_2021.pdf.


Contents

1 Introduction

2 Ethical Framework
     2.1 Approach and Choice of Principles
     2.2 Seven Principles

3 Checklists
     3.1 Introduction
     3.2 Checklist 1: Transparency Triage for ADMS
     3.3 Checklist 2: Transparency Report

4 Example of the Use of Checklists 1 and 2: Swiss COMPAS
     4.1 Introductory Remarks
     4.2 Checklists 1 and 2
     4.3 Transparency Report

5 Flow Chart


1 Introduction

The use of ADMS has arrived in the public sector. As a result of the most comprehensive mapping of ADMS use in sixteen European countries in our Automating Society Report, we now speak of the “automated society”.2 In the years to come, the automation of decision-making procedures and services in public administrations is likely to increase exponentially. Citizens demand user-friendly services that are simple, easily accessible, and available 24/7. Administrations regard automation as a chance to increase efficiency, facilitate processes, and expedite mass and routine services. However, given the unique context in which public authorities act, the deployment of ADMS should be accompanied by a systematic evaluation of potential ethical implications, ensuring transparency and accountability vis-à-vis those affected. In collaboration with the University of Basel, AlgorithmWatch conducted a study on the use of artificial intelligence in public administration on behalf of the Canton of Zurich, Switzerland. As a result of this study, we developed a concrete and practicable impact assessment tool, ready to be implemented for the evaluation of specific ADMS by public authorities at different levels.3

Chatbots facilitate the provision of information to citizens, the electronic processing of tax declarations speeds up assessments while saving resources, and the automated processing of citizen complaints can efficiently allocate different tasks to the corresponding public agency. Beyond this, ADMS are used for processing requests for social benefits, for detecting risks of welfare fraud, for profiling unemployed people, for predictive policing purposes by law enforcement authorities, or for assessing recidivism risks of parolees.

Doubtless, ADMS offer great potential for public administrations. At the same time, they come with substantial risks—especially if such systems are not introduced and deployed in a careful manner. While these risks are certainly not limited to the public sector (they often mirror similar risks that pertain to the private sphere), the public-sector context is of a special kind. Actions by public authorities are not only subject to different and unique legal preconditions, such as the principles of legality or compliance with fundamental rights. They also take place in a unique setting where individuals do not have the freedom to choose the provider of services but are inescapably subject to a particular administration, according to rules of jurisdictional authority. In addition, decisions by public authorities often have consequential effects on individuals. When deploying ADMS in the public sector, this context must be considered—individual and societal trust in public authorities should be the ultimate benchmark for the automation of administrative procedures.

2	Chiusi, Fabio; Fischer, Sarah; Kayser-Bril, Nicolas; Spielkamp, Matthias (eds.), Automating Society Report 2020, (AlgorithmWatch;
   Bertelsmann Stiftung: October 2020), https://automatingsociety.algorithmwatch.org.
3	For the full study, see Braun Binder, Nadja, et al., Einsatz Künstlicher Intelligenz in der Verwaltung: rechtliche und ethische Fragen,
   (Kanton Zürich; Universität Basel: February 2021). The final report [in German] is open access and available here: https://www.
   zh.ch/content/dam/zhweb/bilder-dokumente/themen/politik-staat/kanton/digitale-verwaltung-und-e-government/projekte_digitale_
   transformation/ki_einsatz_in_der_verwaltung_2021.pdf.


As a result, it is of prime importance not only to watch and unpack the use of ADMS in the public sector but also to contribute to a foundational, ethically oriented reflection on the risks of this use, its impacts on individuals and society, and the requirements systems have to comply with. Yet, as worthy as this reflection is at the meta-level, it does not suffice.

In light of the consequential effects ADMS can have, public authorities should be obliged to assess the potential risks of any system they plan to deploy in a comprehensive, transparent, and systematic manner. Hence, in order to make a difference in practice, ethical reflection must be translated into practicable and ready-to-use tools, providing authorities with the means for conducting such an analysis. In what follows, we provide a practical and concrete tool, enabling the impact assessment of ADMS on a case-by-case basis and throughout their entire life cycles, ready for implementation by public authorities. With this, we aim to contribute not only to the debate on the ethical use of ADMS but also to the provision of practical solutions—and thus, ultimately, to make a difference on the ground: to ensure that these systems are used to the benefit of society as a whole, and not to its detriment.

If the goal is to evaluate the risks, stakeholders need transparency. Without any light shed on the functioning of concrete ADMS, their purpose, the actors involved, and their potentially harmful effects, they will remain black boxes—to the administration and its personnel, to those affected, and to society as a whole. While transparency does not guarantee the ethical conformity of a system, it is a necessary precondition for ensuring such conformity and for enabling accountability.

Thus, in what follows, we introduce a two-stage evaluation procedure, which can be used to detect the ethically relevant implications of a particular ADM system. It enables a triage of such systems, indicating whether a system must be subject to additional transparency requirements. If this is the case, public authorities must ensure the provision of a comprehensive transparency report. The approach developed here thus departs from the idea that ex ante risk assessments could be provided for certain categories of ADMS in a generalized manner. Rather, every ADMS must be subject to a case-by-case impact assessment.

The results of this impact assessment should be disclosed within a public register. Such registers should be mandatory for all ADMS used in the public sector and should contain, in addition, information on the purpose of the system, its underlying model, and the actors involved in developing and deploying it. Such registers also enable independent external review by giving external researchers (academia, civil society, and journalists) access to relevant data on ADMS. This contributes to public interest research and thereby to an evidence-based debate on the automation of the public sector—a prerequisite for guaranteeing democratic control and accountability.

However, transparency does not always equal full public disclosure. In specific contexts, legitimate interests (such as, for example, the protection of personal data) might speak against giving the public full access to transparency reports. In such cases, transparency must be provided vis-à-vis specific fora, e.g., the appropriate oversight institution. This, in turn, must be publicly communicated.

While ethical reflection on ADMS must eventually be translated into practicable tools, it is also true that the latter cannot be developed in a vacuum. Such tools need to be based on coherent foundational analyses and on substantive principles, which must be made explicit. While these two levels are sometimes separated in the current debate on ADMS, we regard it as critical that they go hand in hand. Thus, before presenting the checklists mentioned above, the next section justifies and explains the ethical foundation of our approach, that is, the seven theoretical principles this practical toolbox is built upon.

In section 3, the questions that flow from these principles are developed into two separate checklists. The triage checklist for ADMS (checklist 1) helps to determine which ethical transparency issues need to be addressed and documented prior to and during the implementation of the ADM project. The transparency report checklist (checklist 2) then serves as a guide for compiling a detailed transparency report.
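To make the flow of the two-stage procedure easier to follow, here is a minimal sketch of how its logic could be encoded. The triage questions and the trigger rule shown here are invented placeholders, not the actual items of checklist 1, which are developed in section 3.

```python
# Minimal sketch of the two-stage logic: stage 1 triages the system,
# stage 2 (the transparency report) is triggered only if a risk is flagged.
# The questions below are illustrative placeholders, not the real checklist.

def transparency_report_required(answers: dict[str, bool]) -> bool:
    """Stage 1 (triage): if any ethically relevant risk is flagged,
    the system must be subject to additional transparency requirements."""
    return any(answers.values())

# Hypothetical checklist-1-style answers for a fictional ADMS.
triage_answers = {
    "processes_personal_or_demographic_data": True,
    "replaces_or_heavily_informs_human_judgment": True,
    "affects_rights_or_access_to_public_services": False,
}

if transparency_report_required(triage_answers):
    # Stage 2: compile a comprehensive transparency report (checklist 2)
    # and keep it up to date over the system's entire life cycle.
    print("Transparency report required; proceed to checklist 2.")
else:
    print("No additional transparency requirements triggered.")
```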

By means of the fictional example of a Swiss COMPAS
risk evaluation system for criminal offenders, the
application of the evaluation procedure is illustrated
in section 4. Finally, a flow chart provides an overview
of the approach as a whole (section 5).


2 Ethical Framework

2.1 Approach and Choice of Principles

The choice of the principles on which this approach is built flows from a review of a broad range of ethics guidelines, conducted as part of a larger study by AlgorithmWatch on the ethical use of ADMS in public administration.4

We base our approach on the “Ethics Guidelines for Trustworthy AI” drafted by the Independent High-Level Expert Group on Artificial Intelligence, which was set up by the European Commission in 2019 (referred to as “EU Ethics Guidelines” in what follows).5 However, these guidelines are a rough simplification of the values introduced by other guidelines and thus need to be complemented. We do not discuss the fundamental moral justifications of these principles; we assume, in light of the significant convergence found in the practical content of many guidelines, that practical principles—regulatory principles—are all supported by overlapping fundamental moral concerns.6 That is to say, different moral doctrines support the same regulatory principles; for example, the principle of ensuring human control of technology can be justified by appealing to autonomy or to the principle of respect for human dignity.7 We therefore pick up one of the many possible formulations of a subset of these apparently common principles, while assuming that nothing of fundamental moral or political importance follows from this somewhat arbitrary choice.

The most comprehensive analysis of AI guidelines published so far8 includes a list of eleven distinct principles, which constitute the common denominator of the guidelines under scrutiny. Of these eleven principles, one can distinguish: ethical principles that overlap with the EU Ethics Guidelines (non-maleficence or harm prevention, justice and impartiality [fairness], and freedom and autonomy); ethical principles not included in the EU Ethics Guidelines as such (beneficence and respect for dignity); and instrumental, technical, or procedural requirements included in the EU Ethics Guidelines under the heading of “Key Requirements of Trustworthy AI” (transparency and responsibility/accountability). A further principle (explicability) is included in the EU Ethics Guidelines but (convincingly) considered an enabler of other principles, such as the implementation requirements.9 The ethical principle of beneficence is not included in the EU Ethics Guidelines, but it forms part of both the summary referred to above10 and the most widely used framework of ethical principles, those of biomedical ethics.11

4	For the full review of ethics guidelines, see Braun Binder, et al., Einsatz Künstlicher Intelligenz in der Verwaltung, 65-66.
5	Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, “Ethics Guidelines for
   Trustworthy AI” (European Commission – Digital Single Market, April 8, 2019), https://ec.europa.eu/digital-single-market/en/news/
   ethics-guidelines-trustworthy-ai.
6	John Rawls, A Theory of Justice, 2nd ed. (Cambridge, MA: Harvard University Press, 1999).
7	John Rawls, Political Liberalism, Expanded ed. (New York: Columbia University Press, 1996).
8	Anna Jobin, Marcello Ienca, and Effy Vayena, “The Global Landscape of AI Ethics Guidelines,” Nature Machine Intelligence 1, no. 9
   (September 2019): 389–99, https://doi.org/10.1038/s42256-019-0088-2.
9	Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society,” Harvard Data Science Review 1, no. 1 (July
   2019), https://doi.org/10.1162/99608f92.8cd550d1.
10	Jobin, et al., “The Global Landscape of AI Ethics Guidelines.”


[Figure 1: a two-panel diagram. Under “Ethical Principles”, marked “Valuable in themselves”: harm prevention, justice/fairness, autonomy, and beneficence (together the principles of biomedical ethics), respect for dignity, and individual and societal well-being. Under “Instrumental Principles”, marked “Valued as means”: control, transparency, accountability, explicability, trust, human rights (including privacy), solidarity, and sustainability and environmental well-being.]

Fig. 1: The most important principles and values in ethical guidelines on AI (illustration by authors). In red, the principles considered in this framework; in blue, values and principles appearing in other guidelines. The arrow means “is required for”. For simplicity, we include environmental well-being as a means to enable humans to flourish. Arguably, respect for the environment can also be regarded as a moral end in itself, quite apart from its impact on people. This is controversial.

Further principles found in many guidelines can be classified as belonging to three macro-categories: control, transparency, and accountability. Here, we will refer to control, transparency, and accountability as “instrumental principles”.12

The analysis of eighteen other documents about the use of ADM in the public sector13 reveals further ethical and instrumental principles compatible with this structure.

11	Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 6. ed. (New York: Oxford University Press, 2008).
12	Michele Loi, Christoph Heitz, and Markus Christen, “A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and
    Big Data,” in 2020 7th Swiss Conference on Data Science (SDS), 2020, 41–46, https://doi.org/10.1109/SDS49233.2020.00015;
    Michele Loi, “People Analytics Must Benefit the People. An Ethical Analysis of Data-Driven Algorithmic Systems in Human Resources
    Management” (AlgorithmWatch, March 2, 2020), https://algorithmwatch.org/wp-content/uploads/2020/03/AlgorithmWatch_AutoHR_
    Study_Ethics_Loi_2020.pdf. A similar value framework, including three of the four principles from the EU Ethics Guidelines (non-
    maleficence or harm prevention, justice and impartiality [fairness], and freedom and autonomy) plus beneficence, plus the three key
    procedural requirements of control, transparency, and accountability are used in the AlgorithmWatch report on the ethics of human
    analytics and algorithms in HR.
13	Braun Binder, et. al., Einsatz Künstlicher Intelligenz in der Verwaltung, 65-66.


As a result of this review, we focus on a framework of seven values:

—— four ethical principles, namely respect for human autonomy, prevention of harm, justice or impartiality [fairness], and beneficence

—— three instrumental principles, namely control, transparency, and accountability, summarizing technical, organizational, and prudential requirements14

In what follows, we present in more detail the ethical framework that forms the basis of our practical recommendations and checklists. For each of the seven values considered, we provide references to analogous concepts in existing guidelines for public sector applications of ADMS. Moreover, references to the checklist items of checklists 1 and 2, which are directly inspired by this analysis, are put into italics (e.g., question 1.10 or question 2.8).

2.2 Seven Principles

2.2.1 Ethical Principles

/ Harm Prevention

This is the principle of “Do no harm”. Civilian ADMS must not be designed so as to harm or deceive people and should be implemented in ways that minimize any negative outcomes.15 First and foremost, and in its simplest form, avoiding harm means not inflicting pain and discomfort on people. In a broader sense, harm includes violations of privacy16 (question 1.1) and violations of rights (questions 1.4 and 1.5), including human rights.17 Avoiding harm is associated with promoting safety, security,18 and sustainability, and more broadly with building technical and institutional safeguards.19 These are often mentioned as elements of trust or trustworthy technology.20 Harm avoidance also involves preventing and managing risk, which often requires ensuring that systems are reliable21 and predictable.22 Harm prevention includes ensuring broad environmental and social sustainability (question 1.10),23 because non-sustainable practices will ultimately cause human harm and run counter to our own interests. Moreover, the sustainability of a socio-technical system grounded in reliable technology may be considered here.24 This also covers ensuring cybersecurity (question 1.2), including the protection of the confidentiality, integrity, and availability of information.25

14	Explicability (or explainability) is not considered an independent principle but a component of other instrumental principles, cf. Jobin, et al., “The Global Landscape of AI Ethics Guidelines”; Loi, et al., “A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and Big Data.”
15	D Dawson et al., “Artificial Intelligence: Australia’s Ethics Framework,” (Data61 CSIRO, 2019), https://consult.industry.gov.au/strategic-
    policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf, cf.
    “Principle 2”.
16	Dawson et al., “Artificial Intelligence: Australia’s Ethics Framework,” cf. “Principle 4”.
17	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems” (April 8, 2020), https://
    rm.coe.int/09000016809e1154, cf. Principle 3.2; Government of Canada and Treasury Board Secretariat, “Directive on Automated
    Decision-Making” (February 5, 2019), https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592, cf. “Appendix B” (impact assessment
    levels).
18	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems”; Schweizerische
    Eidgenossenschaft – Der Bundesrat, “Leitlinien ‚Künstliche Intelligenz‘ für den Bund. Orientierungsrahmen für den Umgang
    mit Künstlicher Intelligenz in der Bundesverwaltung,” (November 25, 2020), https://www.sbfi.admin.ch/dam/sbfi/de/
    dokumente/2020/11/leitlinie_ki.pdf.download.pdf/Leitlinien%20K%C3%BCnstliche%20Intelligenz%20-%20DE.pdf, cf. “Leitlinie 5”; Jan
    Engelmann and Michael Puntschuh, “KI im Behördeneinsatz: Erfahrungen und Empfehlungen” (December 2020), https://oeffentliche-
    it.de/documents/10181/14412/KI+im+Beh%C3%B6rdeneinsatz+-+Erfahrungen+und+Empfehlungen, cf. “5. Sicherheit.”
19	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems”; Engelmann and
    Puntschuh, “KI im Behördeneinsatz,” cf. “5. Sicherheit”.
20	David Leslie, Understanding Artificial Intelligence Ethics and Safety (London: The Alan Turing Institute, 2019), 6, https://doi.org/10.5281/
    zenodo.3240529; Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, “Ethics
    Guidelines for Trustworthy AI”.
21	Government of New Zealand, “Algorithm Charter for Aotearoa New Zealand,” (November 20, 2020), https://data.govt.nz/use-data/
    data-ethics/government-algorithm-transparency-and-accountability/algorithm-charter.
22	Guidelines and laws that require a risk assessment may include: “the rights of individuals or communities, the health or well-being
    of individuals or communities, the economic interests of individuals, entities, or communities, the ongoing sustainability of an
    ecosystem,” Government of Canada and Treasury Board Secretariat, “Directive on Automated Decision-Making”.
23	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Principle 6.3”.


/ Justice and Fairness

A basic principle of justice is the idea that one ought to treat equals equally and unequals unequally. In other words, an action is just if it treats everyone equally who is equal with regard to the criterion that is morally relevant in a particular situation. Thus, students must get equal grades for equal performance in an exam (performance being the morally relevant criterion in this situation). They must not get equal grades for equal height (as height is morally irrelevant in this situation). Hence, justice refers to equality—but to a qualified notion of equality.

Applied to the present issue, this means that if ADMS treat people unequally who are actually equal with regard to the criterion that should be morally relevant for the particular decision the system contributes to—and the system does so on the basis of criteria in which they are unequal but that should not be considered relevant for this particular decision—this violates basic principles of justice. For example, if an ADMS deployed in the context of social welfare comes to a pattern of different decisions because of gender (which should not be morally relevant for this particular decision), it treats male and female persons unequally, even though they are equal with regard to the criteria that should be relevant for the assignment of social welfare benefits. Hence, the decision is unjust.

However, this is compatible with using gender, in many situations, as a proxy for other relevant aspects. Gender information may justifiably be “grounds” for the decision in a model that predicts the need for social services, if it is among the criteria that best predict who needs welfare assistance the most.26 Moreover, gender information (and other demographic group information) may be considered and processed when it is necessary to correct and prevent inequalities that reflect socially established biases against a specific gender or other demographic groups, given that this does not conflict with the requirements of positive law. Thus, the evaluation of the fairness of an algorithm is morally complex and does not boil down to ignoring certain variables.

The ethical goal of justice and fairness must be safeguarded by means of six ethical principles. The first is protection against unjust discrimination and unjustifiable bias.27 This dimension of fairness applies to different elements of the data pipeline:

If the ADMS processes social or demographic data, it should be designed so as to meet a minimum level of discriminatory non-harm. To achieve this, one should:

—— use only fair and equitable datasets (data fairness),

—— include reasonable features, processes, and analytical structures in the model architecture (design fairness),

—— prevent the system from having any discriminatory impact (outcome fairness),

—— implement the system in an unbiased way (implementation fairness).28

24	Government Digital Service and Office for Artificial Intelligence, UK, “A Guide to Using Artificial Intelligence in the Public Sector/
    Understanding Artificial Intelligence Ethics and Safety,” (June 10, 2019), https://www.gov.uk/guidance/understanding-artificial-
    intelligence-ethics-and-safety.
25	Council of Europe, “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment” (April
    8, 2020), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c, cf. “Principle of Quality and Security”;
    Dawson et al., “Artificial Intelligence: Australia’s Ethics Framework,” cf. “Principle 4”; Government of Canada and Treasury Board
    Secretariat, “Directive on Automated Decision-Making,” cf. “6.3.7”; Engelmann and Puntschuh, “KI im Behördeneinsatz,” cf. “6.
    Datenhaltung und -qualität”.
26	There is an important ambiguity in ordinary language when people refer to a feature, e.g., gender, as the “grounds” of a decision.
    Grounds can be epistemic or moral. If an algorithm uses gender to identify the neediest people, gender is the grounds of the
    decision only from the epistemic point of view: it is, at most, defeasible evidence that the individual is needy. It is not treated as
    being in itself a distinct moral reason to provide welfare assistance.
27	Council of Europe, “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment,” cf.
    “Principle of Non-discrimination”; Dawson et al., “Artificial Intelligence: Australia’s Ethics Framework,” cf. “Principle 5”; Government of
    New Zealand, “Algorithm Charter for Aotearoa New Zealand,” cf. “Fairness and Justice”; Leslie, “Understanding Artificial Intelligence
    Ethics and Safety”; Government Digital Service and Office for Artificial Intelligence, UK, “A Guide to Using Artificial Intelligence in the
    Public Sector”; Automated Decision Systems Task Force, “New York City Automated Decision Systems Task Force Report” (November
    2019), https://www1.nyc.gov/assets/adstaskforce/downloads/pdf/ADS-Report-11192019.pdf.


Avoiding all biases may be impossible. Biases may result from inadequate data for training the model,29 but finding representative data for training purposes may be hard or even impossible. Among other reasons, this is because of privacy protections and (paradoxically) because of anti-discrimination laws, which may put limits on collecting and processing data about, for example, race, gender, ethnic group, or religious status.

But even when data sets are fully adequate in terms of representing society in all its subdivisions and facets, decisions based on statistical generalizations may still be considered unjustly biased. This is the case, for example, if they favor individuals with features that correspond to those from more privileged backgrounds (provided this advantage for one group is not an unavoidable collateral effect of other necessary and morally justified features of the algorithm). This is especially the case if data are representative of different sub-populations: it raises the probability that some data (or combinations thereof) act as a proxy of race, gender, religion, etc. Even if data that explicitly refer to categories protected by anti-discrimination laws are absent from the mix, any efficient machine learning algorithm will typically learn to recognize proxies.

This is all the more predictable the more society is unequal. For example, in a society where most men have higher incomes than most women, knowing the gender, or anything that correlates with it, well enough will help the algorithm to make fewer errors in identifying future high-income earners capable of sustaining a large mortgage. Regrettably, such inferences improve algorithmic accuracy and efficiency (as measured by the benchmark of past technology) in the aggregate, even if they may result in a large number of wrong predictions about individual cases. They may still improve efficiency even if society is changing and the income level of women is increasing and, as a result, the algorithm systematically overestimates the financial advantage of being male (which may remain unknown unless someone collects the data necessary to determine it). This is why all algorithmic inferences based on techniques of statistical learning from data about humans in an already unequal society are potentially morally problematic in terms of bias and indirect discrimination. In the checklist, these algorithms will thus require special attention (question 1.13); the sketch below illustrates this proxy effect.

Moreover, the dimensions of outcome fairness and implementation fairness are especially important when algorithms influence competitions in the political and the economic sphere (questions 1.11 and 1.12). Competitive processes are hugely important because our society relies on fair competition in the market and in politics as the procedure to decide how to distribute social resources and opportunities in society in a way that can be considered procedurally just overall.30

The second dimension of fairness requirements reaches beyond ethics and refers to legality, ensuring that algorithms do not violate existing legal norms, including legal rights.31

Third, justice and fairness include respect for all rights, whether recognized in positive law or not, including human rights and moral rights.32
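The proxy effect described above can be made concrete with a toy simulation. Everything below is synthetic and illustrative: the population, the income threshold, and the single proxy feature are our own assumptions, and the protected attribute is never given to the model.

```python
# Toy illustration (synthetic data, invented numbers) of proxy learning:
# the protected attribute is withheld from the model, yet a correlated
# feature lets it reconstruct the group difference, and selection rates
# diverge by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)            # 0/1 protected attribute (e.g., gender)

# Structural inequality: group 1 tends to earn more.
income = rng.normal(50 + 15 * group, 10)

# A seemingly neutral feature that correlates with group membership
# (e.g., occupation sector or part-time history) -- a proxy.
proxy = rng.normal(group, 0.5)

high_earner = (income > 60).astype(int)  # target: "can sustain a large mortgage"

X = proxy.reshape(-1, 1)                 # NOTE: group itself is NOT a feature
model = LogisticRegression().fit(X, high_earner)
approved = model.predict(X)

for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2f}")
# The gap |rate_1 - rate_0| is the demographic parity difference,
# one common outcome-fairness check.
```

On data like this, the two groups end up with very different approval rates even though the protected attribute was withheld, which is why question 1.13 asks for special attention to such inferences.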

28	Government Digital Service and Office for Artificial Intelligence, UK, “A Guide to Using Artificial Intelligence in the Public Sector”.
29	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Testing on Personal
    Data”.
30	John Rawls, A Theory of Justice, 2nd ed. (Cambridge, MA: Harvard University Press, 1999).
31	Dawson et al., “Artificial Intelligence: Australia’s Ethics Framework,” cf. “Principle 3”; Government of New Zealand, “Algorithm Charter
    for Aotearoa New Zealand,” cf. “Fairness and Justice”.
32	Government of New Zealand, “Algorithm Charter for Aotearoa New Zealand,” cf. “Fairness and Justice”.


Fourth, it includes the values of equality,33 inclusion, and solidarity (although this set of values may be more contested and highly contextually variable). The requirement of inclusion, covering elements of public participation, appears to be more strongly affirmed in guidelines addressing the public sector34 compared to guidelines addressing the scientific community or private companies, which tend to have been published earlier.

Fifth, the principle of justice and fairness includes compensation and remedies (questions 1.7, 1.8, 1.9) in case the violation of a right can be proven.35

Sixth, it refers to the issue of procedural regularity. Some AI-based ADMS continuously update their models based on new data in order to improve in achieving whatever goal they have been programmed to achieve.36 One of the collateral effects of such continuously updating models is that they can produce different outputs from the same inputs, if the same input is processed before or after the update of the model. As a result, two individuals with the same characteristics (inputs) may receive a different decision (output), depending on when the algorithm processes their personal data (before or after a model update).37 This may violate the rights of individuals to equal treatment by the law (question 1.14).

/ Autonomy

Promoting autonomy means enabling individuals to make decisions about their lives on their own, decisions that are not imposed upon them or manipulated by others. It requires having a range of genuine options available from which one can choose. A decision taken on the basis of inadequate information or deception is not considered autonomous. The value of human autonomy is mostly related to the procedural requirement of transparency, which will be addressed in the second part of the present framework. Transparency involves providing sufficient, accurate information and avoiding deception of individuals interacting with algorithms, thereby enabling autonomous choices.

The most familiar (and perhaps least valued) exercise of decisional autonomy in the digital realm pertains to personal data. Individuals must be informed about what can happen to their personal data so that they can give their consent for their data to be used and processed in one way or another,38 especially when it comes to experimental technologies.39

Another aspect of autonomy is the "ability to question and change unfair, biased or discriminatory systems",40 which citizens only have if they obtain "understandable and accurate information about the technological, algorithmic and artificial intelligence systems that impact their lives".41

33	Government of New Zealand, “Algorithm Charter for Aotearoa New Zealand,” cf. “Fairness and Justice”.
34	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems.”; Council of Europe,
    “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Barriers, Advancement of Public
    Benefits”; Dataethical Thinkdotank, “White Paper: Data Ethics in Public Procurement,” https://dataethics.eu/publicprocurement/,
    cf. “Universal Design”; Cities for Digital Rights, “Declaration of Cities Coalition for Digital Rights” (July 15, 2020), https://
    citiesfordigitalrights.org/declaration, cf. “Participatory Democracy, Diversity and Inclusion“; Schweizerische Eidgenossenschaft – Der
    Bundesrat, “Leitlinien‚ ‘Künstliche Intelligenz‘ für den Bund; cf. Leitlinie 7.
35	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Effective Remedies”;
    Government of Canada and Treasury Board Secretariat, “Directive on Automated Decision-Making,” cf. “Principle 6.4”.
36	Note that such adaptability could mitigate the fairness problem highlighted above, namely, that of predictions reflecting obsolete social patterns from the past, which are needlessly harmful to social groups whose conditions are continuously improving. But continuous learning and model optimization raise their own fairness concerns. This is not surprising: if fairness involves distinct dimensions, it is possible that there will be dilemmatic choices when the designer must choose a solution that is better with regard to one fairness aspect but not with regard to a different one, and no solution exists that is optimal for both.
37	Joshua A. Kroll et al., “Accountable Algorithms,” University of Pennsylvania Law Review 165, no. 3 (2017): 633; Michele Loi, Andrea
    Ferrario, and Eleonora Viganò, “Transparency as Design Publicity: Explaining and Justifying Inscrutable Algorithms,” Ethics and
    Information Technology (2020), https://doi.org/10.1007/s10676-020-09564-w.
38	Cities for Digital Rights, “Declaration of Cities Coalition for Digital Rights,” cf. “Privacy, Data Protection and Security”; World Economic
    Forum, “AI Government Procurement Guidelines” (June 2020), http://www3.weforum.org/docs/WEF_AI_Procurement_in_a_Box_AI_
    Government_Procurement_Guidelines_2020.pdf.
39	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Computational
    Experimentation”.
40	Cities for Digital Rights, “Declaration of Cities Coalition for Digital Rights,” cf. “Transparency, Accountability, and Non-discrimination
    of Data, Content and Algorithms”.


Contestation can be considered both as valuable in itself, as an element of human autonomy, and as instrumentally valuable, as a form of control over algorithms and as a way to promote accountability (the latter aspect will be considered in the second part of the present framework).

A further aspect of autonomy is the possibility to choose which digital services to use, or to avoid using them altogether,42 especially if these are experimental services (question 1.6).43

Autonomy also has a collective dimension: it involves the ability of citizens as a group to take common decisions about their collective destiny as a community. This collective dimension of autonomy is not very common in AI guidelines,44 but it seems very important for public digital infrastructure, such as that of smart cities.45

This collective dimension of autonomy is again grounded in individual autonomy, stressing the value of participation. It underlines the importance of being able—as an individual—to participate in the collective decision-making processes of the society and communities of which one is a member.

The ethical requirement to respect fundamental rights46 implicitly furthers autonomy, because many such rights (which are, typically, human rights that are constitutionally protected in democracies) are based on the idea of protecting human autonomy. For example, negative rights such as those to freedom of expression or freedom of religion protect individual autonomy in the public and political sphere as well as in the sphere of individual and collective expression. Positive rights such as the right to healthcare and education protect autonomy by ensuring that individuals have the means they need to live autonomous lives in dignity. Hence, the design and implementation of ADMS in socially sustainable ways is a means of promoting autonomy.47

Finally, in the realm of AI ethics, autonomy may refer to the idea of the AI-based ADMS being “under user control”.48 That is to say, algorithms should be used to assist human decision-making and not to replace it altogether. Most plausibly, this should be interpreted as a constrained principle. Philosophically speaking, it seems that automation can enhance (and has enhanced) autonomy rather than reduce it, to the extent that automating routine tasks has been instrumental in providing humans with more time and resources to deal with more intellectually challenging, creative, or emotionally rewarding tasks.49 The problematic implications with regard to autonomy stem from those forms of AI-based ADMS that are tasked with automating the more cognitively challenging types of human activities, with humans taking orders from machines rather than giving orders to them.50

41	Cities for Digital Rights, “Declaration of Cities Coalition for Digital Rights,” cf. “Transparency, Accountability, and Non-discrimination
    of Data, Content and Algorithms”.
42	Cities for Digital Rights, Declaration of Cities Coalition for Digital Rights,” cf. “Open and Ethical Digital Service Standards”.
43	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Computational
    Experimentation”.
44	It was mentioned in only one guideline in the review of Jobin, et al., “The Global Landscape of AI Ethics Guidelines.”
45	“Everyone should have […] the ability collectively to engage with the city through open, participatory and transparent digital
    processes”, in Cities for Digital Rights, Declaration of Cities Coalition for Digital Rights,” cf. “Participatory Democracy, Diversity and
    Inclusion”.
46	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Principle of Respect
    for Fundamental Rights”.
47	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Human-centric and
    Sustainable Innovation”; Dataethical Thinkdotank, “White Paper,” cf. “Human Primacy”.
48	Council of Europe, “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment,” cf.
    “Principle ‘Under User Control’”.
49	Michele Loi, “Technological Unemployment and Human Disenhancement,” Ethics and Information Technology 17, no. 65 (2015):
    1–10, https://doi.org/10.1007/s10676-015-9375-8; John Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation,”
    Philosophy & Technology 29, no. 3 (2016): 245–268, https://doi.org/10.1007/s13347-015-0211-1; John Danaher, “Will Life Be Worth
    Living in a World Without Work? Technological Unemployment and the Meaning of Life,” Science and Engineering Ethics 23, no. 1
    (2016): 41–64, https://doi.org/10.1007/s11948-016-9770-5.


of human activities, with humans taking orders from machines rather than giving orders to them.50 Thus, autonomy as an ethical value is potentially at stake in all those automation projects where ADM is meant to replace human judgment (question 1.15), as well as in all those forms of automation in which it is unclear whether the user sufficiently understands the decisions of the AI-based ADMS, so that the latter can assist rather than replace her autonomy (questions 1.16 and 1.17). Furthermore, the public may lose control—and therefore autonomy—over its processes and decisions if it relies on an infrastructure that is entirely owned and made inaccessible by third parties (question 1.18). This is an emerging concern, not identifiable in previous reviews,51 but one that seems quite important in relation to ADMS deployed in the public sector.52

/ Beneficence

Beneficence is arguably the fundamental principle of ethics that is least commonly found in AI guidelines. A plausible reason for this lack of attention is that most stakeholders take it for granted that ADM can produce benefits of some sort. Efficiency is often mentioned as a reason to use ADMS, the idea being that the same service can be provided to the same number of people while using fewer resources, or that existing services can be improved (e.g., made more accurate or furnished with extra features) while remaining affordable and thus accessible to most. Yet, the ability of ADM to do good is ethically fundamental: a framework for ethical ADM risks being heavily focused on harm prevention and less on the generation of benefits. Such a framework will tend to oppose the introduction of ADMS in most contexts, because innovation as such generates risks. If one does not consider the potential benefit of innovation, there is, at first sight, no reason to take any risk. However, extreme risk avoidance is not always reasonable; rather, the risks inherent to innovation ought to be managed. For example, as mentioned in the last section, ADMS have the potential to enhance human autonomy when they are used to automate processes in a way that liberates human resources that can then be invested elsewhere. Fortunately, some guidelines addressing the public sector mention beneficence, at least implicitly, by referring to the benefits of innovation,53 for example: “Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.”54 Beneficence is also implicitly addressed by guidelines that indicate the promotion of human well-being as the overall goal of such innovation.55 Guidelines specifically addressing the public sector emphasize the idea that the benefit of ADM ought to be a public benefit.56

Beneficence is not mentioned explicitly in the checklist provided below. However, the concept of a net benefit is implicitly referred to in our transparency deliverable (Checklist 2), in particular in items 2.1, 2.16, and 2.17, which ask for explanations of why the introduction of ADMS is useful and what evidence one can provide for this. Moreover, the issue of public good harm (question 1.10) leads (according to the proposed checklist algorithm) to a transparent stakeholder analysis (question 2.8) and a transparent presentation of the criticism that external stakeholders could raise with regard to the cost-benefit analysis conducted by the public administration (question 2.20).
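To make this triage concrete, here is a minimal sketch in Python of how the mapping from a stage-one risk question to stage-two transparency items could be encoded. The question numbers are taken from the checklists in this report; the data structures and the function name are hypothetical illustrations, not part of the published tool.

# Minimal sketch of the checklist triage described above. The rule set is
# deliberately reduced to the public-good-harm example from the text;
# the published tool covers the full question catalog.

# Stage-1 risk questions that, when answered "yes", trigger specific
# items of the transparency deliverable (Checklist 2).
TRIGGERS = {
    "1.10": ["2.8", "2.20"],  # public good harm -> stakeholder analysis
                              # and presentation of external criticism
}

# Checklist 2 items that document the expected net benefit of the ADMS.
NET_BENEFIT_ITEMS = ["2.1", "2.16", "2.17"]

def triggered_transparency_items(stage1_answers):
    """Return the Checklist 2 items required by the given Stage-1 answers."""
    items = list(NET_BENEFIT_ITEMS)
    for question, consequences in TRIGGERS.items():
        if stage1_answers.get(question, False):
            items.extend(consequences)
    return sorted(set(items))

# Example: an ADMS flagged for potential public good harm (question 1.10).
print(triggered_transparency_items({"1.10": True}))
# -> ['2.1', '2.16', '2.17', '2.20', '2.8']

Encoding the trigger rules as data rather than branching logic has the advantage that the rule set itself can be published alongside the transparency report, making the triage auditable.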

50	Loi, “Technological Unemployment and Human Disenhancement”; Danaher, “The Threat of Algocracy”; Danaher, “Will Life Be Worth
    Living in a World Without Work?”.
51	Jobin, et al., “The Global Landscape of AI Ethics Guidelines”.
52	Cities for Digital Rights, “Declaration of Cities Coalition for Digital Rights,” cf. “Participatory Democracy, Diversity and Inclusion” and
    “Open and Ethical Digital Service Standards”; Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of
    Algorithmic Systems,” cf. “Infrastructure”.
53	Engelmann and Puntschuh, “KI im Behördeneinsatz,” cf. “2. Interne Veränderung” and “3. Innovationsmarker”.
54	Dawson, et al., “Artificial Intelligence: Australia’s Ethics Framework”.
55	Leslie, “Understanding Artificial Intelligence Ethics and Safety.”; Government of New Zealand, “Algorithm Charter for Aotearoa New
    Zealand,” cf. “Well-Being”.
56	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Advancement of
    Public Benefit”, “Rights-promoting Technology”.


2.2.2 Instrumental and Prudential Principles

/ Control

The procedural requirement of control results from the analysis of twenty guidelines focusing on action types rather than action goals.57 In the guidelines initially analyzed, it did not appear as a distinct requirement but rather as an element—a common set of actions—listed equally often under the rubric of transparency and under the rubric of accountability. Indeed, control includes common activities that are necessary for both transparency and accountability: one cannot be transparent about processes or outcomes of which one is not aware. Taking responsibility in the positive, forward-looking sense of the term (not in the sense of retrospective blameworthiness or liability) involves exercising control over processes so that they lead to the intended outcomes. Control includes all efforts that are necessary to confer robustness on all goal-related activities: from achieving the goal of the ADMS (as foreseen by the user—whatever that goal is—and realizing it) to ensuring that other ethical goals (harm avoidance, fairness, and autonomy) are also advanced. Control appears morally neutral because its ethical value is purely instrumental and contingent. A set of practices giving terrorists better control over ADMS used to coordinate drone attacks is useful but not ethically valuable, because the goal it enables terrorists to pursue is, as such, evil. Yet, when the agent’s goal is ethically positive, the absence of control is ethically bad—and not merely neutral—because good intentions without control may fail to achieve the good they purport to achieve, or they can even unintentionally have harmful effects.

It is unsurprising that control—because it is such a wide-purpose means—is the single most common procedural requirement found in AI ethics guidelines.

First of all, control includes the activity of documenting processes and outcomes, as well as the recording,58 testing,59 and monitoring60 that produces the data of what needs to be documented.61 Documentation is distinct from transparency because it can be paired with internal communications, e.g., between collaborators within a data science team or an entire company,62 which is compatible with a lack of transparency to the user and the broader public.

Second, control includes measuring, assessing, evaluating,63 and defining standards64 and policies for all ADM-related processes and outcomes. It therefore includes the preliminary study necessary to generate meaningful standards and measures, that is to say, those that lead to an authentic understanding and knowledge of the processes and outcomes being assessed, which are both products and aspects of

57	Loi, et al., “A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and Big Data”.
58	Dataethical Thinkdotank, “White Paper,” cf. “Traceability”.
59	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Testing”.
60	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Interaction of
    Systems”.
61	Schweizerische Eidgenossenschaft – Der Bundesrat, “Leitlinien ‘Künstliche Intelligenz’ für den Bund,” cf. “Leitlinie 3”; Engelmann and Puntschuh, “KI im Behördeneinsatz,” cf. “7. Wirkungsmonitoring”.
62	Engelmann and Puntschuh, “KI im Behördeneinsatz,” cf. “8. Nachvollziehbarkeit”.
63	Dillon Reisman, et al., “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” AI Now Institute (2018): 1–22; AI Now Institute, et al., “Using Procurement Instruments to Ensure Trustworthy AI,” n.d., https://assets.mofoprod.net/network/documents/Using_procurement_instruments_to_ensure_trustworthy_AI.pdf, cf. “Key Elements of a Public Agency Algorithmic Impact Assessment, #1”; Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Ongoing Review”, “Evaluation of Datasets and System Externalities”, “Testing on Personal Data”; World Economic Forum, “AI Government Procurement Guidelines,” cf. “Data Quality”; Automated Decision Systems Task Force, “New York City Automated Decision Systems Task Force Report,” cf. “Impact Determination”.
64	Council of Europe, “Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems,” cf. “Standards”;
    Engelmann and Puntschuh, “KI im Behördeneinsatz,” cf. “1. Zielorientierung”.
