The Canadian Journal of Program Evaluation, Vol. 28, No. 3, Pages 71–85. ISSN 0834-1516. Copyright © 2014 Canadian Evaluation Society.

       EVALUATOR COMPETENCIES: THE SOUTH
       AFRICAN GOVERNMENT EXPERIENCE

       Donna Podems
       Centre for Research on Evaluation, Science and Technology
       Stellenbosch University
       Ian Goldman
       Head of Evaluation and Research, South African Presidency
       Department of Performance Monitoring and Evaluation
       Christel Jacob
       Director of Evaluation & Research
       Department of Performance Monitoring and Evaluation

Abstract:   This article describes the South African government’s process
            in developing evaluator competencies. It first briefly describes
            the more general historical context in which the need for “com-
            petent” program evaluators and evaluation skills emerged in
            government, and then focuses on the Department of Performance
            Monitoring and Evaluation in the South African Presidency, its
            evolution as a new government institution responsible for M&E
            in government, and its process to develop and institutionalize
            evaluator competencies.

Résumé :    Cet article décrit le processus du gouvernement sud-africain
            pour développer les compétences d’évaluateur. L’article com-
            mence par décrire brièvement le contexte historique plus général
            où le besoin d’évaluateurs de programme « compétents » et de
            compétences d’évaluation a émergé au sein du gouvernement.
            L’article met l’accent sur le « Department of Performance Moni-
            toring and Evaluation » à la présidence sud-africaine, chargé du
            suivi et de l’évaluation du rendement au sein du gouvernement,
            son évolution en tant que nouvelle institution gouvernementale,
            et son processus d’élaboration et d’institutionnalisation des com-
            pétences d’évaluateur.

Corresponding author: Donna Podems, Centre for Research on Evaluation, Science and Technology, Stellenbosch University, Fourth Floor, Wilcocks Building, Ryneveld Street, Stellenbosch; Private Bag X1, Matieland 7602, South Africa;

BRIEF HISTORY OF MONITORING AND EVALUATION IN SOUTH
AFRICA

                  To better understand why the South African government
      developed evaluator competencies, it is important to understand
      how government’s role shifted after 1994 in relation to donors, civil
      society, and its citizens. It is this history that provides the context
      for explaining the rapid growth of evaluation and the emerging need
      for competent evaluators in government. Specifically, significant
      changes took place when South Africa moved from an apartheid state
      to a democratic one in 1994. This monumental shift to a government
      that needed to be accountable to its citizens and international donors
      brought about a change in the need for, and the role of, monitoring
      and evaluation (M&E) in government.

Prior to 1994

      Significant donor funding was provided to South African nonprofit
      organizations (NPOs) before 1994. Most international donors viewed
      the apartheid regime as illegitimate and undemocratic, regarding
      NPOs as legitimate vehicles for channelling funds to apartheid vic-
      tims. These NPOs provided much-needed services, which at times
      were underpinned by political activities, to the majority of South
      Africa’s marginalized population. Being explicit about the nature of
      their activities (or achievements) was not always in the NPOs’ best
      interest, as this could potentially result in severe consequences (e.g.,
      dismantling the NPO or even prison). Therefore donors’ reporting
      requirements were often quite relaxed. Most donor organizations
      accepted an auditor report and a general annual report as sufficient
      proof of financial accountability (Podems, 2004). This period is often
      regarded as having been a “healthy” period for civil society, as staff
      had relatively easy access to donor funding with few strings attached
      (Kabane, 2011).

1994 to present

The development of, and need for, M&E in government resulted from
several events that took place from 1994 onward. One initial
external impetus for strong M&E came after 1994, when the
donor funding landscape changed dramatically. Most donors wanted to
      support the new legitimate government. With this shift came an
      increased demand by donors for transparency and accountability
      (Mouton, 2010; Podems, 2004).

      The incentive for M&E in government was also driven internally. The
      South African government penned a new constitution that institu-
      tionalized the protection of the fledgling democracy by providing for
      watchdogs over government (i.e., Parliament and Chapter 9 institu-
      tions) and a new role for the Public Service Commission (PSC). Chap-
      ter 9 institutions refer to a group of organizations established under
      Chapter 9 of the South African Constitution to safeguard democracy
      (South African Constitution, 1996). These institutions were required
      to report to the legislature and provide evidence for the legislature
      to hold the executive accountable. Moreover, in an effort to improve
      performance and value for money in government, the Department of
      the Treasury developed processes of performance management and
      performance budgeting. Thus there was an internal drive to improve
      performance and government accountability through the institu-
      tional oversight provided for in the Constitution, and the need for
      government to demonstrate to its citizens the benefits of its policies
      and their implementation (Goldman et al., 2012).

      This ultimately led to the adoption of M&E by several government
institutions. M&E units were more often “M” (monitoring) units,
tasked with producing regular (usually quarterly) performance reports.
      M&E was basically seen as monitoring for compliance rather than
      as a means to improve programmes or determine impact and value
      for money. While the government created hundreds of M&E positions
      and various forms of M&E continued to be conducted in government,
      it was not until the mid-2000s that government began to take a more
      active role in shaping M&E in South Africa (Engela & Ajam, 2010).

      During this time, the Office of the Presidency initiated an M&E ef-
      fort that evolved from a Government-Wide M&E System (GWMES)
      into the Department of Performance Monitoring and Evaluation, also
      known as the DPME (Goldman et al., 2012). In November 2007 the
      Presidency produced “The Policy Framework for the Government
      Wide M&E System” (Presidency, 2007, p. 5) that “aimed to provide
      an integrated, encompassing framework of M&E principles, practices
      and standards to be used throughout government, and function as
      an apex-level information system which draws from the component
      systems in the framework to deliver useful M&E products for its us-
      ers.” Three domains were identified: (a) program performance infor-
      mation, (b) data quality, and (c) evaluation. The aim of the GWMES
      was to promote standardization and homogeneity in M&E practices
      across various departments within the South African government. In
      2009, the advent of a new administration brought an increased com-
      mitment to use M&E as a tool to improve government performance,
      initially with a focus on priority outcomes (Engela & Ajam, 2010).

Establishment of the Department of Performance Monitoring and
Evaluation

      In January 2010, the DPME was established within the Office of the
      Presidency. According to the Presidency website,

           ... [T]he establishment of the Department of Performance
           Monitoring and Evaluation … was a clear demonstra-
           tion of Government’s commitment to ensure that our
           performance makes meaningful impact in the lives of our
           people. The Department, in close cooperation with the
           National Planning Commission, will play an important
           role in setting expectations of improved outcomes across
           government. The Department will drive a results-orient-
           ed approach across the three spheres and other organs of
           state. The Department will review the data architecture
           of government so that the required performance informa-
           tion is generated and it will ensure that this information
           is actually used in intergovernmental planning and re-
           source allocation. (Presidency, n.d.)

      Prior to 2011 there was no standardization of evaluation in govern-
      ment; when people spoke of M&E in government departments, they
      were almost always referring to monitoring. Initially, DPME also
      focused primarily on monitoring. However, in 2011 this was extended
      to evaluation, and staff members from the DPME and other govern-
      ment departments took a study tour to Mexico and Colombia to learn
      from their experiences with evaluation in government. In that same
      year, the DPME led the development of a National Evaluation Policy
      Framework (NEPF), which was approved by Cabinet in November
      2011 (Goldman et al., 2012). The NEPF then led to the National Eval-
      uation Plan and the National Evaluation Framework (Jacob, 2013).

      To encourage evaluation use, the NEPF is guided by a utilization-
      focused approach to evaluation, with a recognition of systems and
      an emphasis on learning. It aims to provide a common language for
      evaluation in South Africa, defines a range of types of evaluation
      studies, and states that priorities for evaluation would be defined as
      part of a rolling 3-year National Evaluation Plan (NEP) that focuses
      on the 12 priority outcomes of government. The NEPF promotes
      transparency by stating that evaluation findings will be made public.
      Within the DPME, a specific unit is responsible for the implementa-
      tion of the NEPF and the resulting evaluation system (DPME, 2011).

      The DPME promotes 6 types of evaluations that will take place in
      the government system:

           • Diagnostic – identifying the root cause of problems and the
             options that could be considered for addressing them;
           • Design – a short evaluation of the design of programs by
             M&E units within departments to ensure that designs are
             robust, ideally before implementation starts;
           • Implementation – reflecting on progress in an intervention
             and how it can be strengthened;
           • Impact – identifying the impact and attribution of interven-
             tions and how they can be strengthened;
           • Economic – the cost-effectiveness or cost-benefit of interven-
             tions; and
           • Evaluation synthesis – drawing out lessons across several
             evaluations (DPME, 2011).

      The NEPF has attempted to shift government from a compliance
      culture to one that has a greater emphasis on improvement, learning,
      and efficiency. This shift has had its challenges in terms of human
      capacity. First, the speed at which M&E units and directorates were
      established during the first decade of the century resulted in large
numbers (estimated to be in the hundreds) of new M&E officers being
appointed over a relatively short period. Most of these officers have
      no formal training in monitoring or evaluation. Second, this rapid
      institutionalization of evaluation in the government sector and the
      creation of demand for a minimum of 15 medium-scale national gov-
ernment evaluations a year has had major implications for the need
for evaluation expertise, both within government (e.g., M&E officers)
and outside it (e.g., service providers).

MONITORING AND EVALUATION SKILLS IN GOVERNMENT

      Various data suggest the lack of monitoring and evaluation skills in
      government. For example, in January 2009 the Presidency attempted
      to reform the Cabinet reporting system, asking departments to de-
      velop appropriate indicators to allow accurate monitoring of their
services. One identified reason why departments struggled to fulfill
these requests was a lack of technical knowledge of monitoring and
evaluation (Engela & Ajam, 2010).

      The DPME conducted a survey in 2012 that also provides an indica-
      tion of the capacity limitation. This survey, conducted with all na-
      tional and provincial departments (with a 62% response rate), found
      that 32% of departments indicated that capacity for M&E is too weak
      and managers do not have the skills and understanding necessary
to carry out their M&E functions. Further, 40% said there is not a
strong culture of M&E in their department, and that in only 36% of
cases is there significant measurement of impacts (DPME, 2012b).
      In 2013, DPME conducted an additional study on human resource ca-
      pacity around M&E. This study found that 78% of provincial depart-
      ments felt confident in monitoring and reporting, whereas only 42%
      were confident in managing evaluations. For national departments,
      the figures were 94% and 52% respectively, reflecting the perceived
      lack of expertise in evaluation. Approximately 72% of respondents
      said they needed training in evaluation policy and practice (DPME,
      2013).

SOUTH AFRICAN MONITORING AND EVALUATION ASSOCIATION
(SAMEA)

Role in Competency Development

      Over the past few decades there has been a remarkable growth in
      the evaluation field around the world. According to Rugh and Segone
      (2013), the number of national and regional voluntary organizations
      for professional evaluators (VOPEs) has risen from 15 in the 1990s
      to more than 155 by early 2013. Growth in South Africa appears to
      follow that trend, with documented examples of individuals gather-
      ing informally to discuss evaluation in the late 1970s (Basson, 2013).
      Events to promote M&E (e.g., courses by Michael Quinn Patton and
      Donna Mertens) further encouraged a small group of interested
      South Africans to convene a meeting during the 2004 African Evalu-
      ation Association (AfrEA) conference held in Cape Town to discuss
      a way forward. This meeting eventually resulted in the launch of
      SAMEA, the South African Monitoring and Evaluation Association,
      in November 2005 (Goldman et al., 2012; samea.org.za).

      SAMEA recognized the growing need to engage civil society and
      government in a discussion on evaluator competencies as early as
      2007. In 2009, SAMEA initiated a process to support a discussion
      around the development of evaluator competencies in South Africa.
      Part of this process included holding an Evaluator Competency Open
      Forum in Cape Town in 2010. Although the South African govern-
      ment did not show an interest at this time (e.g., the Public Service
      Commission, a partner of SAMEA at that time, declined in-kind and
      financial assistance), the Evaluator Competency Open Forum was
      attended by over 150 civil society and private sector representatives
      with a smattering of government officials who attended on their own
      behalf. The full-day open forum included presentations from interna-
      tional experts in academia, civil society, and government, as well as
      from South African civil society and academia. These presentations
      informed a heated debate; not all in attendance supported having
      defined evaluator competencies, while others were passionate about
      their development. Despite the engaging discussions and interest
      generated by the forum, SAMEA’s efforts did not gain momentum
      or result in any concrete decisions or next steps. While the reason
      for this is not clear, the turnover of the SAMEA board may have
      contributed to it, as the board members who had championed the de-
      velopment of evaluator competencies completed their terms in office.
      This article continues by focusing on the South African government’s
      process to develop evaluator competencies.

DPME INITIATES THE DEVELOPMENT OF EVALUATOR
COMPETENCIES

      As noted previously, government tasked the DPME to establish tools
      and guidance for the national evaluation system. This included de-
      velopment of guidelines and templates, a set of standards for evalu-
      ations, and a set of evaluation competencies that would underlie the
      development of training courses on monitoring and evaluation.

      In 2012 the DPME commissioned the Centre for Learning on Evalu-
      ation and Results Anglophone Africa (CLEAR AA) to support DPME
      in developing evaluation competencies and evaluation standards,
      with the first author of this article playing a leading role. The evalu-
ation standards and evaluator competencies are still in their first
versions (DPME, 2012a). While both are being used by government, they are
      also currently being tested and revised by DPME based on their
      initial experience.

Process to develop evaluator competencies

      CLEAR AA’s first step included an intensive literature review and
      interviews with key informants within and outside of South Africa.
      One group included people who had been involved with, or had ex-
      perience of, developing competencies in different countries and or-
      ganizations around the world. This included critics, supporters, and
      “fence sitters.” The second group of individuals comprised South Afri-
      can academics who taught evaluation, private sector individuals who
      implemented evaluation training or evaluations, active members of
      SAMEA, and select government officials who conducted or commis-
sioned evaluations. The literature review covered existing competencies
that were written in or translated into English, as well as journal
articles that supported or critiqued evaluator competencies.

      This led to a paper that informed government of the kinds of compe-
      tencies that had been developed and the advantages and pitfalls of
      having (and not having) such lists. CLEAR AA used this document to
      engage the DPME in lengthy discussions that then heavily informed
      the development of the competencies (Podems, 2012). Although the
      DPME provided their reflections and perspectives on the skills and
      knowledge that they thought were important for those who conduct
      evaluations, the majority of their feedback focused on program man-
      agers who manage evaluations.

      The next section describes key discussion and decision points be-
      tween CLEAR AA and the DPME in the development of the compe-
      tencies.

Conversations with DPME

      The DPME stated that evaluators need sufficient awareness and
      knowledge of social science research methods and evaluation ap-
      proaches to be able to make informed decisions around appropriate
      methods and evaluation designs. They also stated that evaluators
      need the ability to work collaboratively with a range of people, com-
      municate effectively, think critically, and at times negotiate, facili-
      tate, and educate. Evaluators also need to possess a certain level
      of knowledge about the government and its policies, systems, and
      context. DPME believed that these were some of the critical skills
      and knowledge areas that would lead to evaluators who could work
      effectively with stakeholders to promote learning through an evalu-
      ation process.

      DPME also provided feedback on what they expected from program
      managers. Program managers need to be able to clearly identify
      when an evaluation is needed and clearly articulate their needs for
      an effective and useful evaluation. CLEAR AA cautioned that pro-
      gram managers cannot (and should not) be expected to be program
      evaluators, to which the DPME agreed, as this would be akin to ex-
      pecting program managers to also be auditors. It was also recognized
      that design and management of evaluation is different from doing
      evaluation, and that this should be addressed in the competencies
      for program managers who manage evaluations.

      DPME asked that program managers have the ability to think criti-
      cally and possess analytical thinking skills that they can apply to
      their own programs and to evaluations conducted of their programs.
      Additional skills and knowledge areas for this group include the
      willingness to self-examine and then understand how evaluation can
      be useful to them—they need to know when to request an evaluation
      and how to use the results.

      DPME also wanted program managers to understand how to budget
      for an evaluation and how that budget influences the evaluation ap-
      proach. For example, they need to understand why certain methods
      cost more and be aware of the expected benefits and challenges of
      different approaches and methods. This could enable them, for ex-
      ample, to choose a cheaper method and compromise on the depth of
the evaluation and yet still answer their evaluation question. Such
judgement requires a certain amount of knowledge of the need for
evaluation, of evaluation approaches and methodology, and of budgeting,
as well as practical experience.

      Finally, DPME emphasized that program managers would need to
      understand how to commission an evaluation and how to be com-
      pliant with the South African government’s rules, while retaining
      a certain amount of flexibility to allow for a feasible evaluation. A
      key point made by the DPME was that anyone who commissions an
      evaluation would need to understand how to select the “right” evalu-
      ator or evaluation team.

Drafting the competencies

      Several key factors informed the development of the draft compe-
tency framework. These included the CLEAR AA report and the subse-
quent DPME and CLEAR AA conversations, the competencies of the
     Canadian Evaluation Society and ANZEA in Aotearoa New Zealand,
     the Development Assistance Committee’s work on competencies, and
     South Africa’s unique cultural and political history. It took approxi-
     mately five months to develop the initial competencies, which were
     created for three role players:

          1. Program manager. This person manages the program and
             is usually the key intended user for the evaluation results.
             This person is often responsible for identifying the need for
             an evaluation.
          2. M&E advisor. This person is internal to the government
             department, often provides advice on the evaluation process,
             and is influential in both the evaluation and management
             decisions. This person’s work may also overlap with the
             evaluator role.
          3. Evaluator. This person may be internal or external to the
             government and is involved in designing and conducting the
             evaluation. The evaluator may conduct an evaluation on his
             or her own or with team members who bring complementary
             knowledge, skills, and abilities.

     The process of developing and defining the competencies had its chal-
     lenges. First, the operational model for government evaluations was
     still emerging, and the roles and responsibilities for each of the three
     roles were not well defined. For example, some people interviewed
     during the development of the competencies stated that the program
     manager should be doing the evaluation, while others felt that this
     management position should never be tasked with actually doing an
     evaluation. A lack of clarity about the role of each person complicated
     a discussion on what competencies each role needed to possess.

     Second, although multiple examples of skills and knowledge existed
     for competencies for evaluation specialists and evaluators (those
     who conduct evaluations), identifying competencies for managers
     of evaluations proved more difficult. Third, and equally challenging,
     was determining how to describe each competency (e.g., what does
     “cultural competence” mean when practically applied? How do people
     know if they, or others, are culturally competent?). Fourth was the
challenge of determining what level of skill or what knowledge was
needed for each competency in each of the three roles.
     In addition, there was a discussion about whether to simply name
     competency levels (e.g., basic, intermediate, advanced) or to write ex-
     actly what was expected. In the end, the latter approach was adopted,
         and the consultant and DPME drafted competency statements for
         each role and for each competence.

         Eventually, a product was created with the DPME and labelled
         the Evaluation Competency Framework (ECF). This framework
         describes the competencies (knowledge, skills, and abilities) in re-
         lation to four dimensions: (a) overarching considerations, (b) leader-
         ship, (c) evaluation craft, and (d) the implementation of evaluations.
       Each dimension is then divided into descriptive areas. For example,
       overarching considerations are divided into three areas that focus
       on contextual understanding and knowledge, ethical conduct, and
       interpersonal skills. These are then further broken down into more
       explicit detail. Ethical conduct, for instance, looks at government
       standards and ethics and personal ethics. The competency areas are
       also specifically described for each role, explaining the relevant level
       of competence. Table 1 provides a snapshot of a section of one of the
       competency dimensions.

Table 1
Snapshot of a Section of One of the Competency Dimensions

4.3 Report writing and communication

RW1 – Writing: Ability to write clear, concise, and focused reports that are credible, useful, and actionable and address the key evaluation questions
       • Manager: Can critique and provide constructive feedback on reports to ensure that they are credible, useful, and actionable and address the key evaluation questions
       • M&E Advisor: Can critique and provide constructive feedback on reports to ensure that they are credible, useful, and actionable and address the key evaluation questions
       • Evaluator: Can write clear, concise, and focused reports that are credible, useful, and actionable and address the key evaluation questions

RW2 – Clear evidence in report: Evidence for evaluation choices, findings, and recommendations in the evaluation report is clear and understood
       • Manager: Can read evaluation reports and identify key issues, credibility of findings, and logic of argument
       • M&E Advisor: Can critique and provide constructive feedback, ensuring that reports are transparent about methodological choices and show the evidence, analysis, synthesis, recommendations, and evaluative interpretation and how these build from each other
       • Evaluator: Can be clear and transparent about methodological choices and show the evidence, analysis, synthesis, recommendations, and evaluative interpretation and how these build from each other

RW2 – Communication: Ability to clearly articulate, communicate, and disseminate key messages that are appropriately written for different key stakeholders
       • Manager: Can advise on key messages for different key stakeholders and manage the dissemination of information in a targeted and timely manner
       • M&E Advisor: Can advise on key messages for different key stakeholders
       • Evaluator: Can clearly articulate and communicate key messages that are appropriately written for different key stakeholders

RW5 – Use: Ability to identify, articulate, and support strategic use of data in the report for the evaluation’s intended use and users
       • Manager: Can select and present findings to different stakeholders
       • M&E Advisor: Can select and present findings to different stakeholders
       • Evaluator: Can select and present findings to different stakeholders

Vetting and use of the competencies

      In early 2013, the DPME shared the competencies with the SAMEA
      Board of Directors, and presented them at three workshops with
      SAMEA members across the country. In August 2013, DPME led
      a process to consolidate the comments and make revisions. Pres-
      ently, these competencies are being applied by government, while at
      the same time being tested by different constituencies, such as the
SAMEA membership. Two different government evaluation course
curricula have been developed based on these competencies, with
      the first course (Managing Evaluations) tested with various govern-
      ment departments at the national and provincial levels, and a second
      cycle started in August 2013.

      Although the process to develop evaluation standards is not reviewed
      in this article (this was the second part of the CLEAR AA task, which
      produced suggested evaluation standards), the application of the
      standards has yielded information that is relevant to the compe-
      tencies. DPME undertook an audit of 83 evaluations (not including
      meta-evaluations) conducted between 2006 and 2011 and applied
      the evaluation standards suggested by CLEAR AA. Of these, 12 fell
below the minimum quality score of 3 (DPME, 2013). This can be
seen as quite a positive result, as 71 out of 83 “passed,” but it was
      not a representative sample. Many departments (87%) are not doing
      evaluations, and the reports that were reviewed came from depart-
      ments that (a) could find the reports and (b) were prepared to submit
      them as part of the audit process (DPME, 2013).

      Although a critical interpretation may be that departments only
      submitted their strongest evaluations because they knew that the
      evaluations would be audited, the same logic would also demonstrate
that most departments could identify a “good” evaluation. A key
overall finding of this research, which provides useful data on evaluator
competencies, is that most evaluations performed poorly on building
the capacity of government staff and junior evaluators, and that they
reflected challenges in working with evaluation consultants.

      DPME used the ECF to screen and identify evaluators and evaluation
      organizations that could provide evaluation services to government.
Evaluation organizations were asked to apply the competencies by
providing examples of each competency from another evaluation con-
text. In addition, evaluators or evaluation organizations also had
      to demonstrate that they had undertaken five evaluations of over
      ZAR500,000 (approximately US$50,000) in the last five years. In
practice, DPME staff are finding that the pool of strong evaluators
among the 42 organizations and individuals identified who actually
bid for evaluations (many remain latent) is not large, and relatively
few strong organizations have emerged.

      DPME has started engaging with the Department of Public Service
      and Administration to have these competencies embedded in all rel-
      evant job descriptions and in criteria relevant to evaluation-specific
      functions within the public service. They also plan to engage with
universities concerning how their courses prepare evaluators who
implement evaluations in and for government, as well as government
staff who manage evaluations.

CONCLUSION

South Africa’s unique history and political context have influenced
      the development of monitoring and evaluation in government. Sig-
      nificant changes took place post-1994 leading to the GWMES, the
      establishment of the DPME, the national evaluation policy frame-
      work, the national evaluation plan, and the national evaluation sys-
      tem. These changes are bringing about the standardization of M&E
      in the government sector. As part of that, government has developed
      evaluation standards and evaluator competencies to guide program
      managers in commissioning and managing evaluations, to guide
      evaluation advisors and evaluators in government on strengthening
      their evaluation knowledge and skills, and to provide transparency
in how government selects the evaluators who consult for it. How
the establishment of these competencies will influence evaluation in
South Africa, whether and how they will improve the quality and
usefulness of evaluation, and whether this will result in government
being more responsive and transparent to its citizens remain to
be seen.

REFERENCES

Basson, R. (2013). Volunteerism, consolidation, collaboration and growth:
     The case of SAMEA (South African Monitoring and Evaluation As-
     sociation). In J. Rugh & M. Segone (Eds.), Voluntary organisations
     for professional evaluation (VOPEs): Learning from Africa, Americas,
     Asia, Australasia, Europe and Middle East (pp. 262–274). Geneva:
     UNICEF. http://www.mymande.org/sites/default/files/UNICEF_NY_
     ECS_Book2_web.pdf.

Department of Performance Monitoring and Evaluation (DPME). (2011).
     National evaluation policy framework. Pretoria: Author.

Department of Performance Monitoring and Evaluation (DPME). (2012a).
     Evaluation competencies. Retrieved from www.thePresidency-dpme.
     gov.za/dpmewebsite/.

Department of Performance Monitoring and Evaluation (DPME). (2012b).
     Presentation on the status and use of monitoring and evaluation in
     the SA public service. Pretoria: Author.

Department of Performance Monitoring and Evaluation (DPME). (2013).
     Results of management performance assessment. Pretoria: Author.

Goldman, I., Engela, R., Phillips, S., Akhalwaya, I., Gasa, N., Leon, B., &
     Mohamed, H. (2012). Establishing a national M&E system in South
     Africa (PREM Notes 21: Nuts and Bolts of M&E). World Bank.

Jacob, C. (2013, September). DPME and M&E in 2013. Presentation at the
       meeting of the South African Monitoring and Evaluation Association
       (SAMEA), Sandton, South Africa.

Kabane, N. (2011). An emerging funding crisis for South African civil
     society. Retrieved from www.afesis.org.za.

Mouton, C. (2010). Evaluation history (Unpublished doctoral dissertation).
     Stellenbosch University, Stellenbosch, South Africa.

Podems, D. (2004). A monitoring and evaluation intervention for donor-fund-
     ed NPOs in the developing world (Unpublished doctoral dissertation).
     Union Institute and University, Cincinnati, OH.

Podems, D. (2012). Evaluation standards and competencies for the South
     African government (Internal document). Pretoria: Department of
     Performance Monitoring and Evaluation.

The Presidency. (2007). The policy framework for the government-wide M&E
      system. Pretoria: Author.

The Presidency. (n.d.). A planning framework for government medium-term
      strategic framework. Retrieved from www.thepresidency.gov.za.

Rugh, J. & Segone, M. (Eds.). (2013). Voluntary organisations for profes-
      sional evaluation (VOPEs): Learning from Africa, Americas, Asia,
      Australasia, Europe and Middle East. Geneva: UNICEF. http://www.
      mymande.org/sites/default/files/UNICEF_NY_ECS_Book2_web.pdf

South African Constitution. (1996). Pretoria: Government of South Africa.

      Donna Podems is a Research Fellow at the Centre for Research on
      Evaluation, Science and Technology (CREST) at Stellenbosch Uni-
      versity, where she also teaches in their evaluation program, and the
      founder and Director of OtherWISE: Research and Evaluation, which
      is a Cape Town-based evaluation consulting firm. She designs and
      implements evaluations and facilitates the development of monitor-
      ing and evaluation frameworks in Africa and Asia for governments,
      donors, and nonprofits. She holds a Master’s degree in Public Ad-
      ministration and a PhD in Interdisciplinary Studies with a focus on
      Program Evaluation and Organizational Development.

      Ian Goldman is the Head of Evaluation and Research at the De-
      partment of Performance Monitoring and Evaluation in the South
      African Presidency, responsible for establishing the national evalu-
      ation system. He has a particular interest in evaluation as a part
      of action-learning processes, and has worked extensively on three
      continents in the fields of rural development, decentralization, local
      economic development, community-driven development, and sustain-
      able livelihoods approaches.

      Christel Jacob is a Director of Evaluation within the Evaluation
      & Research Unit of the Department of Performance Monitoring &
      Evaluation within the Presidency of the Republic of South Africa.