TEI Course Catalog - tei.cgu.edu - The Evaluators' Institute

Delivering capacity building that evaluators need to succeed.

Contents

Evaluation Foundations
    Applied Measurement for Evaluation
    Assessing and Developing Your Evaluator Competencies
    Basics of Program Evaluation
    Ethics in Practice: A Global Perspective
    Evaluation Research Methods: A Survey of Quantitative and Qualitative Approaches
    Foundations and Contemporary Issues in Evaluation Practice
    Informing Practice Using Evaluation Models and Theories
    M&E: Frameworks and Fundamentals
    Professional Standards and Principles for Ethical Evaluation Practice
    Working with Evaluation Stakeholders

Evaluation Theory, Design, and Methods
    Case Studies in Evaluation
    Conducting Successful Evaluation Surveys
    Designing, Managing, and Analyzing Multi-Site Evaluations
    Outcome and Impact Assessment
    Qualitative Evaluation Methods
    Quantitative Evaluation Methods
    Sampling: Basic Methods for Probability and Non-Probability Samples
    Using Non-Experimental Designs for Impact Evaluation
    Using Program Theory and Logic Models in Evaluation
    Using Research, Program Theory, & Logic Models to Design and Evaluate Programs

Evaluation Approaches and Techniques
    Comparative Effectiveness: Balancing Design with Quality Evidence
    Developmental Evaluation: Systems and Complexity
    Evaluability Assessment
    Evaluating Resource Allocations in Complex Environments
    Evaluating Training Programs: Frameworks and Fundamentals
    Internal Evaluation: Building Organizations from Within
    Linking Evaluation Questions to Analysis Techniques
    Measuring Performance and Managing for Results in Government and Nonprofit Organizations
    Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches
    Participatory Evaluation: Frameworks, Approaches, Appropriateness and Challenges
    Policy Analysis, Implementation, and Evaluation
    Policy Evaluation and Analysis
    Utilization-Focused Evaluation

Using Evaluation–Strategies and Capacity
    Culture and Evaluation
    Dashboard Design
    Effective Reporting Strategies for Evaluators
    Evaluation Capacity Building in Organizations
    Evaluation Management
    Foundations in Data Visualization
    How to Build a Successful Evaluation Consulting Practice
    Implementation Analysis for Feedback on Program Progress and Results
    Learning through Data Visualization
    Leveraging Technology in Evaluation
    Making Evaluation Data Actionable
    Presenting Data Effectively: Practical Methods for Improving Evaluation Communication
    Strategic Planning with Evaluation in Mind
    Strategy Mapping
    Systems Evaluation
    Using Program Evaluation in Nonprofit Environments

Analytic Approaches
    Applied Regression Analysis for Evaluators (computer lab)
    Applied Statistics for Evaluators (computer lab)
    Hierarchical Linear Modeling
    Introduction to Cost-Benefit and Cost-Effectiveness Analysis
    Intermediate Cost-Benefit and Cost-Effectiveness Analysis
    Introduction to R Programming for Data Analysis and Visualization
    Needs Assessment
    Practical Meta-Analysis: Summarizing Results Across Studies
    Qualitative Data Analysis
    Intermediate Qualitative Data Analysis
    Social and Organizational Network Analysis—Evaluating the Way Individuals and Organizations Interact

TEI Course Descriptions

Evaluation Foundations ¹

¹ Required for Certificate of Evaluation Practice

Applied Measurement for Evaluation

Instructor: Ann M. Doucette, PhD

Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (i.e., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error. The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments attesting to ineffective interventions may be flawed—the reflection of measures that are imprecise and not sensitive to the characteristics we choose to evaluate. Evaluation attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. The emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. The class content will cover:

Assessing measurement precision: Examining the precision of measures in relationship to the degree of accuracy that is needed for what is being evaluated. Issues to be addressed include measurement/item bias, the sensitivity of measures in terms of developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of the measure to detect change over time.

Quantification: Measurement is essentially assigning numbers to what is observed (direct and inferential). Decisions about how we quantify observations, and the implications these decisions have for using the data resulting from the measures as well as for the objectivity and certainty we bring to the judgments made in our evaluations, will be examined. This section of the course will focus on the quality of response options/coding categories—do such categories segment the respondent sample in meaningful and useful ways?

Issues and considerations—using existing measures versus developing your own measures: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with the development and use of new measures will be addressed in terms of how to establish sound psychometric properties, and what cautionary statements should accompany interpretation and evaluation findings using these new measures.

Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement—choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact, and outcome). Measurement feasibility, practicability, and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.

Error—influences on measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.

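As a purely illustrative aside (not part of the course materials), the short sketch below shows one of the "scientific soundness" indices mentioned above, internal-consistency reliability (Cronbach's alpha), computed for an invented three-item self-report scale; the function name and the scores are hypothetical.

# Illustrative sketch only (not TEI course material): estimating Cronbach's alpha,
# a common internal-consistency reliability index, for an invented rating scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents-by-items matrix of numeric scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering a three-item satisfaction scale (scores 1-5); data invented.
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

Values near 1 suggest the items hang together as a single scale; low values are one warning sign of the measurement imprecision the description above cautions against.
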
Assessing and Developing Your Evaluator Competencies

Instructor: Tessie Catsambas, MPP, or Stewart Donaldson, PhD

Description: In 2018, for the first time in its history, the American Evaluation Association (AEA) membership voted to support the adoption of a set of professional competencies for U.S. evaluators. This new set of competencies complements the AEA Guiding Principles for Evaluators, the AEA Statement encouraging evaluators to follow culturally competent evaluation practices, and the AEA Program Evaluation Standards. This growing body of professional knowledge has been systematically developed over time in an effort to help evaluators learn how to practice evaluation at an exemplary level, and to improve the quality of evaluation services available in society. These evaluation services are often provided in support of stakeholders' pursuits of social betterment and social justice. Past AEA President and experienced evaluation educator Stewart Donaldson (2015) and AEA President Tessie Catsambas (2019) have designed this course to help you assess how well you are currently doing on the five evaluation competency domains (see below), and develop a plan to strengthen each domain.

1.0. Professional Domain – How prepared are you in competencies that make evaluators distinct as a profession?

2.0. Methodology – How skilled are you in technical aspects of inquiry, such as framing questions, designing studies, sampling, collecting and analyzing data, interpreting results, and reporting findings?

3.0. Context Domain – How prepared are you to understand the unique circumstances and settings of evaluations, and their users/stakeholders?

4.0. Management Domain – How prepared are you to manage evaluations—both the logistics (such as determining and monitoring work plans, timelines, and resources) and optimizing management decisions in support of sound methodology?

5.0. Interpersonal Domain – How prepared are you to manage the social interactions that ground an evaluator's effectiveness?

Using the results of the assessment, we will help you implement your personal evaluator competency plan following the Guiding Principles for Evaluators, the AEA Cultural Competency Statement, and with an eye toward meeting the Program Evaluation Standards in your practice.

Your plan will answer questions such as:
• How do I characterize my strengths as an evaluator?
• What types of evaluations am I prepared to undertake?
• In what areas do I need to strengthen my competencies?
• How do I invest in my self-development and growth in the profession of evaluation?

The course objectives include:
• Understanding the history and influences of professional evaluation in the United States
• Becoming familiar with the new evaluator competencies, the AEA Guiding Principles, ways to achieve Cultural Competency in Evaluation Practice, and Standards for Contemporary Evaluation Practice
• Helping you assess your current strengths and needs across the five evaluator competency domains
• Helping you develop a plan to strengthen your knowledge and skills across the five evaluator competency domains
• Enhancing your ability to practice ethically sound, culturally competent evaluation across a wide range of evaluation practice settings
• Becoming familiar with the vast number of career opportunities for internal and external professional evaluators

Basics of Program Evaluation

Instructor: Arnold Love, PhD

Description: With an emphasis on constructing a sound foundational knowledge base, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; evaluation as a cognitive activity; the view of evaluation as a trans-discipline; the general and working logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods (e.g., needs assessment, stakeholder analysis, identifying evaluative criteria, standard setting); reasons and motives for conducting
evaluation; central types and purposes of evaluation; objectivity, bias, and validity; the function of program theory in evaluation; evaluator roles; core competencies; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice. Although the major focus of the course is program evaluation in multiple settings (e.g., education, criminal justice, health and medicine, human and social services, international development, science and technology), examples from personnel evaluation, policy analysis, and product evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct high-quality evaluations using a contingency-based and situational approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. Throughout the course, active learning is emphasized and, therefore, the instructional format consists of instructor-led presentations, discussions, and application exercises. Audiences for this course include those who have familiarity with social science research but are unfamiliar with evaluation, and evaluators who wish to review current theories, methods, and practices.

Prerequisites: Basic knowledge of social science research methods.

Ethics in Practice: A Global Perspective

Instructor: Michael Quinn Patton, PhD

Description: The course will compare and contrast various ethical guidance statements for evaluators from around the world, including the OECD/DAC Quality Standards for Development Evaluation, the Joint Committee Standards, and ethical guidance adopted by national evaluation associations. The course will examine overarching ethical frameworks for evaluation: the Universal Declaration of Human Rights; Sustainability; the Paris Declaration Principles on Development Aid; and principles for conducting research with indigenous people.

Professional evaluation associations and networks around the world have adopted ethical guidelines, standards, and principles. These recognize that evaluators can and do face a variety of daunting ethical challenges. The political, cultural, and contextual variations that evaluators face mean that judgment must be exercised about what is appropriate in a particular situation. Few rules can be applied. Rather, ethical guidelines, standards, and principles have to be interpreted. Tough judgment calls must be made about what to do. This course is about those interpretation and judgment processes. Ethical judgments apply at every stage of evaluation, in initial interactions with stakeholders, in design decisions, throughout data collection, and in analyzing, reporting, and facilitating use of findings. Much of the course will be examining specific
ethical challenges commonly reported among evaluators working internationally. Participants will also have an opportunity to share and discuss their own experiences in dealing with ethical challenges.

The course is based on the TEI premise that ethical practice is one of the emergent competencies in evaluation: competent evaluators are ethical evaluators. The outcomes of the course are: participants will know the ethical standards of evaluation as an international profession; have increased confidence that they can wisely, astutely, and effectively apply ethical standards in their own practice; and have a deeper sense of professionalism as a result of being more deeply grounded in the ethical foundations of evaluation.

Evaluation Research Methods: A Survey of Quantitative and Qualitative Approaches

Instructor: David B. Wilson, PhD

Description: This course will introduce a range of basic quantitative and qualitative social science research methods that are applicable to the evaluation of various programs. This is a foundational course that introduces methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts.

Topics will include observational and qualitative methods, survey and interview (structured and unstructured) techniques, experimental and quasi-experimental designs, and sampling methods. This course is for those who want to update their existing knowledge and skills and will serve as an introduction for those new to the topic.

Foundations and Contemporary Issues in Evaluation Practice

Instructor: Stewart Donaldson, PhD

Description: This course will provide participants with an overview of the foundations of professional evaluation practice, and explore current opportunities and challenges facing evaluators today. It also aims to provide a solid introduction, overview, or refresher on the latest developments in evaluation practice, and to prepare participants for intermediate and advanced level TEI courses. Key topics will include the history of evaluation theory and practice; the various uses, purposes, and potential benefits of evaluation; ethics, professional guidelines, and standards; evaluator competencies, including cultural competency; the basics of validity and evaluation design sensitivity; how to collect credible and actionable evidence; and an overview of the variety of evaluation approaches (theories) that guide practice today.

Through mini-lectures, small group and class discussions, and case exercises you will:
• Become familiar with state-of-the-art evaluation approaches, concepts, and methods;
• Learn about guiding principles, evaluator competencies, how to achieve cultural competency in evaluation practice, and standards for modern evaluation practice;
• Explore a wide range of applications of evaluation that you can use to improve your work and career, and learn about the vast number of emerging career opportunities for professional evaluators.

Recommended background readings include:
• What is Evaluation? American Evaluation Association
• AEA's Guiding Principles for Evaluators
• AEA's Evaluator Competencies
• AEA's Statement on Cultural Competency in Evaluation

Informing Practice Using Evaluation Models and Theories

Instructor: Melvin M. Mark, PhD

Description: Evaluators who are not aware of the contemporary and historical aspects of the profession "are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes" (Madaus, Scriven, and Stufflebeam, 1983).

"Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods" (Shadish, Cook, and Leviton, 1991).

These quotes from Madaus et al. and Shadish et al. provide the perfect rationale for why the serious evaluator should be concerned with models and theories of evaluation. The primary purpose of this class is to overview major streams of evaluation theories (or models) and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks for classifying different theories, (3) in-depth examination of 4-6 major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you in picking and choosing from different theoretical perspectives in planning an evaluation in a specific context. The overarching theme will be on practice implications—that is, on what difference it would make for practice to follow one theory or another.

Theories to be discussed will be ones that have had a significant impact on the evaluation field, that offer perspectives with major implications for practice, and that represent important and different streams of theory and practice. Case examples from the past will be used to illustrate key aspects of each theory's approach to practice.

Participants will be asked to use the theories to question their own and others' practices and to consider what characteristics of evaluations will help increase their potential for use.

The instructor's assumption will be that most people attending the session have some general familiarity with the work of a few evaluation theorists, but that most will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one's level of familiarity with evaluation theory.

M&E: Frameworks and Fundamentals

Instructor: Ann M. Doucette, PhD

Description: The overall goal of Monitoring and Evaluation (M&E) is the assessment of program progress in order to optimize program outcome and impact results. While M&E components overlap, there are distinct characteristics of each. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, the presence of program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track or whether improved program strategies and mid-course corrections are needed.

The two-day, interactive course will cover the following:
• M&E introduction and overview
• Defining the purpose and scope of M&E
• Engaging stakeholders and establishing an evaluative climate
  – The role and effect of partnership and boundary spanners, policy, and advocacy
• Identifying and supporting needed capabilities
• M&E frameworks—agreement on M&E targets
  – Performance and results-based M&E approaches
• Connecting program design and M&E frameworks
  – Comparisons—is a counterfactual necessary?
  – Contribution versus attribution
• Identification of key performance indicators (KPIs)
  – Addressing uncertainties and complexity
• Data: collection and methods
  – Establishing indicator baselines (addressing the challenges of baseline estimates)
  – What data exists? What data/information needs to be collected?
• Measuring progress and success—contextualizing outcomes and setting targets
  – Time to expectancy—what can be achieved by the program?
• Using and reporting M&E findings
• Sustaining M&E culture

The course focuses on practical application. Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. Course participants are encouraged to submit their own case examples prior to the course, for inclusion in the course discussion. The course is purposefully geared toward evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments. Familiarity with evaluation is helpful, but not required.

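As a purely illustrative aside (not part of the course materials), the sketch below shows the kind of simple arithmetic that sits behind "measuring progress and success" against baselines and targets; the indicators, baselines, targets, and current values are all invented.

# Illustrative sketch only (not TEI course material): expressing a monitoring
# indicator's current value as a share of the distance from baseline to target.
def progress_toward_target(baseline: float, target: float, current: float) -> float:
    """Share of the baseline-to-target distance achieved so far (0 = at baseline, 1 = at target)."""
    return (current - baseline) / (target - baseline)

# Invented indicators: (baseline, target, current value)
indicators = {
    "clinics reporting monthly data (%)": (40.0, 90.0, 72.0),
    "average client wait time (days)":    (30.0, 10.0, 21.0),  # a 'lower is better' indicator
}

for name, (baseline, target, current) in indicators.items():
    share = progress_toward_target(baseline, target, current)
    print(f"{name}: {share:.0%} of the way from baseline to target")

Scaling each indicator by its own baseline-to-target distance keeps "higher is better" and "lower is better" indicators on a common footing, which is one simple way of contextualizing outcomes when reporting M&E findings.
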
Professional Standards and Principles for Ethical Evaluation Practice

Instructor: Michael Morris, PhD

Description: Participants will explore the ethical issues that can arise at various stages of the evaluation process, from entry/contracting all the way to the utilization of findings by stakeholders. Strategies for preventing ethical problems, as well as for dealing with them once they have arisen, will be addressed. Case vignettes will be used throughout the course to provide participants with an opportunity to brainstorm such strategies, and participants will have a chance to share their own ethical challenges in evaluation with others. This course will also focus on the application of the American Evaluation Association's Guiding Principles for Evaluators and the Joint Committee's Program Evaluation Standards to the ethical responsibilities and challenges that evaluators encounter in their work.

The course is based on the TEI premise that ethical practice is a core competency in evaluation: competent evaluators are ethical evaluators. Participants should emerge from the course with an enhanced understanding of how the standards and principles that inform the professional practice of evaluation can increase their chances of "doing the (ethically) right thing" when conducting evaluations in the field. Participants should also be better prepared to interact with stakeholders in a fashion that lessens the likelihood that the latter will engage in behaviors that lead to ethical difficulties.

Working with Evaluation Stakeholders

Instructor: John Bryson, PhD

Description: Working with stakeholders is a fact of life for evaluators. That interaction can be productive and beneficial to evaluation studies that inform decisions and produce positive outcomes for decision makers and program recipients. Or that interaction can be draining and conflictual for both the evaluator and the stakeholders, and lead to studies that are misguided, cost too much, take too long, never get used, or never get done at all. So this is an incredibly important topic for evaluators to explore. This course focuses on strategies and techniques to identify the stakeholders who can and will be most beneficial for the achievement of study goals, and on how to achieve a productive working relationship with them. Stakeholder characteristics such as knowledge of the program, power and ability to influence, and willingness to participate will be analyzed, and strategies and techniques will be presented to successfully engage stakeholders for effective collaboration. Detailed course materials, case examples, and readings are provided to illuminate course content and extend its long-term usefulness.

Evaluation Theory, Design, and Methods

Case Studies in Evaluation

Instructor: Delwyn Goodrick, PhD

Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes.

The literature on case studies is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case studies are conceived and practiced within the evaluation profession. This workshop aims to disentangle the discussions and debate, and highlight the central principles critical to effective case study practice and reporting.

This two-day workshop will explore case study design, analysis, and representation. The workshop will address case study topics through brief lecture presentations, small group discussion, and workshop activities with realistic case study scenarios. Participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in doing case studies in evaluation contexts. Discussion of the ethical principles underpinning case studies will be integrated throughout the workshop.

Specific topics to be addressed over the two days include:
• The utility of case studies in evaluation
• Circumstances in which case studies may not be appropriate
• Evaluation questions that are suitable for a case study approach
• Selecting the unit of analysis in case studies
• Design frameworks in case studies—single and multiple case study; the intrinsic and instrumental case
• The use of mixed methods in case study approaches—sequential and concurrent designs
• Developing case study protocols and case study guides
• Analyzing case study materials—within-case and cross-case analysis, matrix and template displays that facilitate analysis
• Principles and protocols for effective teamwork in multiple case study approaches
• Transferability/generalizability of case studies
• Validity and trustworthiness of case studies
• Synthesizing case materials
• Issues of representation of the case and cases in reporting

Detailed course notes will be provided to all participants, and practice examples will be referenced over the two days.

Conducting Successful Evaluation Surveys

Instructor: Jolene D. Smyth, PhD

Description: The success of many evaluation projects depends on the quality of survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting internet, mail, and mixed-mode surveys. Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); putting individual questions together into a formatted questionnaire; designing web surveys; designing for multiple modes; and fielding surveys and encouraging response by mail, web, or in a mixed-mode design.

The course is made up of a mixture of PowerPoint presentation, discussion, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants. Participants will receive a copy of the course slides and other course readings/materials.

Designing, Managing, and Analyzing Multi-Site Evaluations

Instructor: Debra J. Rog, PhD

Description: Guidance on how to carry out multi-site evaluations is scarce. What is available tends to focus on quantitative data collection and analysis and usually treats diverse sites in a uniform manner. This course will present instruction on designing, managing, and analyzing multi-site studies and will focus on the differences that are required due to the specifics of the situation—e.g., central evaluator control vs. interactive collaboration; driven by research vs. program interests; planned and prospective vs. retrospective; varied vs. standardized sites; exploratory vs. confirmatory purpose; and data that are exclusively quantitative vs. qualitative vs. a mixture. Topics include stakeholder involvement, collaborative design, maintaining integrity/quality in data, monitoring and technical assistance, data submission, communication and group process, cross-site synthesis and analysis, and cross-site reporting and dissemination. Practical strategies learned through first-hand experience as well as from review of other studies will be shared. Teaching will include large- and small-group discussions, and students will work together on several problems. Detailed course materials are provided.

Prerequisites: Understanding of evaluation and research design.

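As a purely illustrative aside (not part of the course materials), one very simplified way to approach the "cross-site synthesis and analysis" mentioned above is an inverse-variance weighted average of site-level effect estimates; the sites, effects, and standard errors below are invented, and the approach assumes the sites report a comparable outcome metric.

# Illustrative sketch only (not TEI course material): pooling invented site-level
# program-effect estimates with inverse-variance (precision) weights.
import math

# (site, estimated effect, standard error of the estimate); all values invented
site_results = [
    ("Site A", 0.30, 0.10),
    ("Site B", 0.12, 0.08),
    ("Site C", 0.25, 0.15),
]

weights = [1 / se ** 2 for _, _, se in site_results]   # more precise sites get more weight
pooled = sum(w * effect for w, (_, effect, _) in zip(weights, site_results)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect across sites: {pooled:.2f} (SE = {pooled_se:.2f})")

In practice, cross-site work also has to grapple with the qualitative differences between sites that the course emphasizes, which no pooled number captures on its own.
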
Outcome and Impact Assessment

Instructor: Mark W. Lipsey, PhD

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.

Prerequisites: At least some background in social science methods and statistical analysis, or direct experience with outcome measurement and impact assessment designs.

Qualitative Evaluation Methods

Instructor: Michael Quinn Patton, PhD

Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analyses, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analyses in evaluations. Patton's text, Qualitative Research and Evaluation Methods (Sage), is recommended pre-reading for this course.

Quantitative Evaluation Methods

Instructor: Emily Tanner-Smith, PhD

Description: This course will introduce a range of basic quantitative social science research methods that are applicable to the evaluation of programs. This is a foundational course that introduces basic quantitative methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts.

Topics will include validity, sampling methods, measurement considerations, survey and interview techniques, observational and correlational designs, and experimental and quasi-experimental designs. This course is for those who want to update their existing knowledge and skills and will serve as an introduction for those new to the topic.

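As a purely illustrative aside (not part of the course materials), the sketch below computes a standardized mean difference (Cohen's d), one common way of expressing the "magnitude of effects" discussed in the Outcome and Impact Assessment description above, for invented treatment and comparison group scores.

# Illustrative sketch only (not TEI course material): a standardized mean difference
# (Cohen's d) for an invented treatment-versus-comparison contrast.
import statistics

treatment = [72, 75, 78, 74, 80, 77]    # invented outcome scores
comparison = [70, 68, 73, 71, 69, 72]

mean_t, mean_c = statistics.mean(treatment), statistics.mean(comparison)
var_t, var_c = statistics.variance(treatment), statistics.variance(comparison)
n_t, n_c = len(treatment), len(comparison)

# Pooled standard deviation across the two groups
pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
cohens_d = (mean_t - mean_c) / pooled_sd

print(f"Difference in means: {mean_t - mean_c:.1f} points; Cohen's d = {cohens_d:.2f}")

Expressing an effect in standard-deviation units makes it easier to judge whether a statistically detectable difference is also a practically meaningful one.
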
Sampling: Basic Methods for Probability and Non-Probability Samples

Instructor: Gary T. Henry, PhD

Description: Careful use of sampling methods can save resources and often increase the validity of evaluation findings. This seminar will focus on the following: (a) The Basics: defining sample, sampling and validity, probability and non-probability samples, and when not to sample; (b) Error and Sampling: study logic and sources of error, target population and sampling frame; (c) Probability Sampling Methods: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling; (d) Making Choices: before, during, and after sampling; and (e) Sampling Issues. Many examples will be used to illustrate these topics, and participants will have the opportunity to work with case exercises.

Using Non-Experimental Designs for Impact Evaluation

Instructor: Gary T. Henry, PhD

Description: In the past few years, there have been very exciting developments in approaches to causal inference that have expanded our knowledge and toolkit for conducting impact evaluations. Evaluators, statisticians, and social scientists have focused a great deal of attention on causal inference, the benefits and drawbacks of random assignment studies, and alternative designs for estimating program impacts. For this workshop, we will have three goals:

• To understand a general theory of causal inference that covers both random assignment and observational studies, including quasi-experimental and non-experimental studies
• To identify the assumptions needed to infer causality in evaluations
• To describe, compare, and contrast six promising alternatives to random assignment studies for inferring causality, including the requirements for implementing these designs, the strengths and weaknesses of each, and examples from evaluations where these designs have been applied

The six alternative designs to be described and discussed are: regression discontinuity; propensity score matching; instrumental variables; fixed effects (within-unit variance); difference-in-differences; and comparative interrupted time series. Also, current findings concerning the accuracy of these designs relative to random assignment studies from "within study" assessments of bias will be presented, and the implications for practice discussed.

Prerequisites: This class assumes some familiarity with research design, threats to validity, impact evaluations, and multivariate regression.

Using Program Theory and Logic Models in Evaluation

Instructor: Patricia Rogers, PhD

Description: It is now commonplace to use program theory, or logic models, in evaluation as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, logic models can provide conceptual clarity, motivate staff, and focus evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful logic models, and using them effectively to guide evaluation and avoid some of the most common traps. It begins with the assumption that participants already know something about logic models and program theory but come with different understandings of terminology and options. Application exercises are used throughout the course for demonstration of concepts and techniques: (a) ways to use logic models to positive advantage (e.g., to identify criteria, develop questions, identify data sources and bases of comparison); (b) ways they are used with negative results (e.g., focusing only on intended outcomes, ignoring differential effects for client subgroups, seeking only evidence that confirms the theory); and (c) strategies to avoid traps (e.g., differentiated theory, market segmentation, competitive elaboration of alternative hypotheses). The instructor's co-authored text, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models (Jossey-Bass: Wiley), is recommended pre-reading for this course.

Prerequisites: Prior to attendance, those with no previous experience with program theory should work through the University of Wisconsin Extension's course 'Enhancing Program Performance with Logic Models', available at no cost at https://lmcourse.ces.uwex.edu/.

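As a purely illustrative aside (not part of the course materials), the sketch below shows a toy logic model for an invented tutoring program and one small way such a model can be used "to positive advantage," turning each component into a draft evaluation question; all program components and question stems are hypothetical.

# Illustrative sketch only (not TEI course material): a toy logic model and a loop
# that drafts one evaluation question per component.
logic_model = {
    "inputs":     ["trained tutors", "curriculum materials"],
    "activities": ["weekly small-group tutoring sessions"],
    "outputs":    ["number of sessions delivered", "students attending regularly"],
    "outcomes":   ["improved reading scores", "greater reading confidence"],
}

question_stems = {
    "inputs":     "Were the planned {item} available as intended?",
    "activities": "Were {item} delivered with adequate quality and reach?",
    "outputs":    "Did the program produce the expected {item}?",
    "outcomes":   "Is there credible evidence of {item}, and for whom?",
}

for component, items in logic_model.items():
    for item in items:
        print(f"[{component}] " + question_stems[component].format(item=item))

A real program theory would also spell out the assumptions and causal links between these components; the point here is only that making the model explicit gives the evaluation something concrete to question.
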
Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, lend conceptual clarity, motivate staff, and focus design and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful, evidence-based program theories and logic models and on using them effectively to guide evaluation and avoid some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to understand and communicate ways they are used with negative results; and (e) strategies to avoid traps.
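As a loose illustration of the kind of structure a logic model makes explicit, the sketch below captures a hypothetical program as a chain of inputs, activities, outputs, and outcomes, and pairs each link with an evaluation question; the program, its components, and the questions are invented for demonstration and are not taken from the course materials.

```python
# A hypothetical logic model expressed as plain data, with the kind of
# evaluation question each causal link might suggest.
logic_model = {
    "inputs":     ["funding", "trained mentors"],
    "activities": ["weekly tutoring sessions"],
    "outputs":    ["sessions delivered", "students reached"],
    "outcomes":   ["improved reading scores", "higher graduation rates"],
}

evaluation_questions = {
    "activities -> outputs": "Were sessions delivered as planned, and to whom?",
    "outputs -> outcomes":   "Did participants' reading scores improve more than "
                             "those of similar non-participants?",
    "unintended effects":    "Did any client subgroup fare worse than the theory predicts?",
}

for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
for link, question in evaluation_questions.items():
    print(f"{link}: {question}")
```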
Recommended Book: Program Theory-Driven Evaluation Science: Strategies and Applications (Psychology Press).

Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).

Prerequisites: None

Evaluation Approaches and Techniques

Comparative Effectiveness: Balancing Design with Quality Evidence
Instructor: Ann M. Doucette, PhD

Description: Evidence is the foundation on which we make judgments, decisions, and policy. Gathering evidence can be a challenging and time-intensive process. Although there are many approaches to gathering evidence, randomized clinical trials (RCTs) have remained the "gold standard" for establishing effectiveness, impact, and causality, even though strong proponents of RCTs themselves sometimes acknowledge that they are neither the only valid method nor necessarily the optimal approach to gathering evidence. RCTs can be costly in terms of time and resources; can raise ethical concerns about excluding individuals from treatments or interventions from which they might benefit; and can be inappropriate if the intervention is not sufficiently and stably implemented, or if the program or service is so complex that such a design would be challenging at best and unlikely to yield ecologically valid results.

Comparative effectiveness (CE) has emerged as an accepted approach to gathering evidence for health care decision making and policymaking. CE emerged in response to worldwide concern about rising health care costs, variability in health care quality, and a more immediate need for evidence of effective health care. RCTs, while yielding strong evidence, were time-intensive and posed significant delays in providing data on which to make timely policy and care decisions. CE offered a new way to gather objective evidence, emphasizing how rigorous evaluation of data from existing studies (qualitative and quantitative) could answer questions about what works, for whom, and under what conditions. Essentially, CE is a rigorous evaluation of the impact of various intervention options, based on the existing studies available for specific populations. The CE evaluation of existing studies focuses not only on the benefits and risks of various interventions but also incorporates the costs associated with them. CE takes advantage of both quantitative and qualitative methods, using a standardized protocol to judge the strength of the evidence provided by existing studies and to synthesize it.
The basic CE questions are: Is the available evidence good enough to support high-stakes decisions? If we rely solely on RCTs for evidence, do we risk that available non-RCT evidence will not be considered a sufficient basis for policy decisions? Will sufficient evidence be available to decision-makers at the time they need it? What alternatives can be used to ensure that rigorous findings are available to decision-makers when they need to act? CE has become an accepted alternative to RCTs in medicine and health. While the CE approach has focused on medical interventions, it also has potential for human and social interventions that are
implemented in other areas (education, justice, environment, etc.).

This course will provide an overview of CE from an international perspective (U.S., UK, Canada, France, Germany, Turkey), illustrating how different countries have defined and established CE frameworks; how data are gathered, analyzed, and used in health care decision-making; and how information is disseminated and whether it leads to change in health care delivery. Though CE has been aimed at enhancing the impact of health care interventions, this course will consistently focus on whether and how CE (its definition, methods, analytical models, dissemination strategies, etc.) can be applied to other human and social program areas (education, justice, poverty, environment, etc.).

No prerequisites are required for this one-day course.

Developmental Evaluation: Systems and Complexity
Instructor: Michael Quinn Patton, PhD

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, and pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements with no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development, if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. The instructor's text, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford), is recommended pre-course reading.

Evaluability Assessment
Instructor: Debra J. Rog, PhD

Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for determining the accountability of interventions, but also as a way to add to our evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment is a tool that can inform decisions on whether a program or initiative is suitable for an evaluation and what type of evaluation would be most feasible, credible, and useful.
This course will provide students with the background, knowledge, and skills needed to conduct an evaluability assessment. Using materials and data from actual EA studies and programs, students will participate in the various stages of the method, including assessing the logic of a program's design and the consistency of its implementation; examining the availability, quality, and appropriateness of existing measurement and data capacities; analyzing the plausibility that the program or initiative can achieve its goals; and assessing the appropriate options for evaluating the program, improving its design and implementation, or strengthening its measurement. The development and analysis of logic models will be stressed, and an emphasis will be placed on what can emerge from the process.

Students will be sent several articles prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable, as is familiarity with conducting program-level site visits.

Evaluating Resource Allocations in Complex Environments
Instructor: Doreen Cavanaugh, PhD

Description: Evaluators are increasingly asked to examine the efficiency as well as the effectiveness of programs and interventions. This course puts systems change under a microscope by examining three essential infrastructure elements of successful program efforts: collaboration, leadership, and resource allocation, as well as the methods used to evaluate them.

Collaboration: Local, national, and international programs often seek to achieve both efficiency and effectiveness by improving collaboration across all participating stakeholders. This course will deconstruct different types of collaboration and ways to evaluate the impact of partnerships and collaboration arrangements on project/program outcomes.

Leadership: Collaborative frameworks yield new styles of leadership, the effect of which needs to be taken into account in evaluating a system. This course will provide participants with an understanding of differing leadership styles, linking each style to project/program objectives, with an emphasis on methods for evaluating the effect of leadership on intermediate and long-term project/program outcomes.

Resource Allocation: This course examines the role of resource allocation in project/program outcomes and how to evaluate the resulting effects of resource allocation on systems change and project/program outcomes. Whether at the local, national, or international level, programs are often supported by multiple sources, each of which has its own goals, objectives, and expectations for its investment. The strings attached to those investments may either facilitate or hinder program success. Participants will learn how to use a tracking method called resource mapping to determine the available resources, the strengths and limitations of each resource, and whether the resources allocated are sufficient for achieving a program's stated goals and objectives. Participants will operationally define the concept of "cost" and the many different ways to measure it.

Methods: Resource maps help decision makers identify gaps, inefficiencies, overlaps, and opportunities for collaboration with all participating partners. Evaluators can use this information to identify which resources might be combined in pooled, braided, or blended arrangements that assure optimal outcomes for projects and/or programs.
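As a simple illustration of what a resource map might record, the sketch below lists hypothetical funding sources with their amounts and restrictions and checks them against a program's estimated need; the sources, figures, and categories are invented for demonstration and are not the course's own materials.

```python
# A hypothetical resource map: each funding source with its amount and
# any strings attached, checked against the program's estimated need.
resources = [
    {"source": "Federal grant",   "amount": 250_000, "restricted_to": "direct services"},
    {"source": "City contract",   "amount": 100_000, "restricted_to": "staff salaries"},
    {"source": "Foundation gift", "amount": 40_000,  "restricted_to": None},  # flexible
]

estimated_need = 420_000
total_available = sum(r["amount"] for r in resources)
flexible = sum(r["amount"] for r in resources if r["restricted_to"] is None)

print(f"Total available: {total_available:,}  (flexible: {flexible:,})")
print("Sufficient for stated goals" if total_available >= estimated_need
      else f"Gap of {estimated_need - total_available:,} to reach stated goals")
```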
On Day 1, participants will use examples from their own experience to apply the essential infrastructure elements of collaboration, leadership, and resource allocation to a real-life evaluation situation.

Day 2 will focus on ways to evaluate the contributions of collaboration, leadership, and resource allocation strategies to systems change goals, outcomes, and impact.

Evaluating Training Programs: Frameworks and Fundamentals
Instructor: Ann M. Doucette, PhD

Description: The evaluation of training programs typically emphasizes participants' initial acceptance of and reaction to content; learning, knowledge, and skill acquisition; participant performance and behavioral application of training; and benefits at the organizational and societal levels that result from participation. The evaluation of training programs, especially the behavioral application of content and the organizational benefits of training,
continues to be a challenge. Today's training approaches are wide-ranging, including classroom presentations, self-directed online courses, online tutorials and coaching, and supportive technical assistance. Evaluation approaches must be flexible enough to accommodate these training modalities and the individual and organizational outcomes expected from training efforts.

The Kirkpatrick (1959, 1976) training model has been a longstanding evaluation approach; however, it is not without criticism or suggested modification. The course provides an overview of two training frameworks: 1) the Kirkpatrick model and its modifications, which emphasize participant reaction, learning, behavioral application, and organizational benefits; and 2) the Concerns-Based Adoption Model (CBAM), a diagnostic approach that assesses stages of participant concern about how training will affect individual job performance, describes how training will be configured and practiced within the workplace, and gauges the actual level of training use.
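As a rough illustration of how the Kirkpatrick levels can translate into concrete measures, the sketch below pairs each of the four classic levels (reaction, learning, behavior, results) with an example indicator and data source; the indicators and sources are hypothetical, not drawn from the course itself.

```python
# The four classic Kirkpatrick levels paired with hypothetical example
# indicators and data sources an evaluator might track.
kirkpatrick_levels = [
    ("Reaction", "post-session satisfaction rating",      "end-of-training survey"),
    ("Learning", "pre/post knowledge test score gain",    "assessments"),
    ("Behavior", "observed use of new skills on the job", "supervisor observation at 3 months"),
    ("Results",  "change in organizational performance",  "administrative records"),
]

for level, indicator, source in kirkpatrick_levels:
    print(f"{level:<9} {indicator}  [{source}]")
```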
The course is designed to be interactive and to provide a practical approach for those planning, leading, or commissioning training evaluations, as well as those implementing, conducting, or managing them. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation design; data collection, including instrumentation, administration, and data quality; reporting progress, change, and results; and disseminating findings and recommendations, including the knowledge management that results from training initiatives. Case examples are used throughout the course to illustrate the content.

Internal Evaluation: Building Organizations from Within
Instructor: Arnold Love, PhD

Description: Internal evaluations are conducted by an organization's own staff members rather than by outside evaluators. Internal evaluators have the enormous advantage of an insider's knowledge, so they can rapidly focus evaluations on areas managers and staff know are important, develop systems that spot problems before they occur, constantly evaluate ways to improve service delivery processes, strengthen accountability for results, and build organizational learning that empowers staff and program participants alike.

This course begins with the fundamentals of designing and managing effective internal evaluation, including an examination of the advantages and disadvantages of internal evaluation, understanding internal evaluation within the organizational context, recognizing both positive and potentially negative roles for internal evaluators, defining the tasks of managers and evaluators, identifying the major steps in the internal evaluation process, strategies for selecting the right internal evaluation tools, and key methods for making information essential for decision-making available to management, staff, board members, and program participants.

The second day will focus on practical ways of designing and managing internal evaluations that make a difference, including methods for reducing the potential for bias and threats to validity, practical steps for organizing the internal evaluation function, the specific skills an internal evaluator needs, strategies to build internal evaluation capacity in your organization, and ways of building links between internal evaluation and organizational development. Teaching will be interactive, combining presentations with opportunities for participation and discussion. Time will be set aside on the second day for an in-depth discussion of key issues and concerns raised by participants. Course readings and other materials are provided; the instructor's book, Internal Evaluation: Building Organizations from Within (Sage), is recommended.

Linking Evaluation Questions to Analysis Techniques
Instructor: Melvin M. Mark, PhD

Description: Statistics are a mainstay in the toolkit of program and policy evaluators. Human memory being what it is, however, even evaluators with reasonable statistical training often forget the basics over the years. And the basics aren't always enough. If evaluators are going to make sensible use of consultants, communicate effectively with funders, and understand others' evaluation reports, they often need at least a conceptual understanding of relatively complex, recent statistical techniques. The purposes of this course are to link common evaluation questions with appropriate statistical procedures; to offer a strong conceptual grounding in several important statistical procedures; and to describe how to interpret the results from the statistics in ways that are principled and will be