EES is pleased to offer the following Professional Development Workshops (PDWs) as part of its 14th Biennial Conference. Please note that each workshop runs for a full day.
You can register for a PDW by following the same instructions as for Conference registration, here.
Blue Marble refers to the iconic image of the Earth from space without borders or boundaries, a whole Earth perspective. Blue Marble Evaluation consists of principles and criteria for evaluating transformational initiatives aimed at a more equitable and sustainable world.
We humans are using our planet’s resources, and polluting and warming it, in ways that are unsustainable. Many people, organizations, and networks are working to ensure the future is more sustainable and equitable. Blue Marble evaluators enter the fray by helping design, implement, and evaluate transformational initiatives based on a theory of transformation.
Blue Marble evaluation is utilization-focused, developmental, and principles-based in providing ongoing feedback for adaptation and enhanced systems transformation impact.
Incorporating the Blue Marble perspective means looking beyond nation-state boundaries and across sector and issue silos to connect the global and local, connect the human and ecological, and connect evaluative thinking and methods with those trying to bring about global systems transformation. Forecasts for the future of humanity run the gamut from doom-and-gloom to utopia. Evaluation as a transdisciplinary, global profession has much to offer in navigating the risks and opportunities that arise as global change initiatives and interventions are designed and undertaken to ensure a more sustainable and equitable future. This workshop will provide a framework and tools (a thoughtkit) for evaluating global systems transformation.
Johannes Schmitt and Martin Noltze
Why should we care about analysing causal mechanisms? The primary argument for investing in the analysis of causal mechanisms as part of evaluation work is their capacity to answer questions about why and how programs worked or failed to work. Commissioners and evaluators interested in learning more about the modus operandi of a program may open the “black box” and expose the causal mechanisms at work. Particularly in challenging times of shifting contextual conditions, analysis of causal mechanisms can be key to readjusting complex programs and making policies more effective.
But how should one go about causal mechanism analysis? What are the options, and how are they applied in practice? Building on the two trainers’ deep expertise, this one-day Professional Development Workshop will cover the basics and jump right into analysing causal mechanisms using Realist Evaluation and Process Tracing methodologies.
The workshop will bring together evaluators and commissioners of evaluation. The overall workshop objectives are:
The content of this workshop will be a succession of lectures, group discussions, and breakout exercises to provide participants with the understanding needed to recognize how complex behavior might play a role in the models they develop, the methodologies they devise, and the messages they extract from their data. Without this understanding, evaluators cannot correctly describe what programs are doing, and why. If they get it wrong, so too will evaluation users and stakeholders.
Understanding the application of complexity science to evaluation requires understanding the relationship between specific constructs and general themes. Each specific construct is useful in its own right, but its full value lies in appreciating the epistemology in which it is embedded. This workshop will treat both the specific constructs that can be applied to evaluation and the epistemological themes reflected by their application. The general themes are: 1) patterns in observed consequences of program and policy behavior, 2) predictability of those consequences, and 3) reasons why those changes came about. The specific constructs will include: stigmergy, attractors, emergence, phase transition, self-organization, and sensitive dependence.
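As a brief illustration of the last of these constructs (this sketch is ours, not part of the workshop materials), sensitive dependence means that trajectories starting almost identically soon diverge completely. The logistic map, a standard toy model from complexity science, shows this in a few lines of Python; the starting values and step counts below are arbitrary choices for the demonstration.

```python
# Sensitive dependence illustrated with the logistic map x_{n+1} = r*x_n*(1-x_n).
# At r = 4 the map is chaotic: two starting points one part in a billion apart
# soon produce trajectories that bear no resemblance to one another.

def logistic_map(x0, r=4.0, steps=40):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000000)  # baseline "context"
b = logistic_map(0.200000001)  # the same context, perturbed by one part in a billion

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.2e})")
```

For an evaluator, the moral is that in systems like this, small measurement error or contextual drift makes long-range point prediction meaningless, even though the overall pattern of behavior remains describable.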
Annette L Gardner
Several factors have fueled the need for skilled evaluators who can design appropriate advocacy and policy change (APC) evaluations to meet diverse stakeholder needs: increased foundation interest in supporting APC initiatives to achieve transformational, systems-level change; evaluation of democracy-building initiatives worldwide; and diffusion of advocacy capacity beyond the traditional advocacy community (such as to service providers). Evaluators have met these needs with great success, building a new field of evaluation practice, adapting and creating evaluation concepts and methods, and shaping advocate, funder, and evaluator thinking on advocacy and policy change in all its diverse manifestations. The field of APC evaluation has matured and now has a rich repository of guides, instruments, and a book to support evaluation practice.
The aim of this workshop is to expand evaluator capacity to design tailored advocacy and policy change evaluations under diverse scenarios. Concepts and definitions, case studies, designs, and tailored measures are combined into a practice-focused workshop. Using the definitive book on the topic, Advocacy and Policy Change Evaluation: Theory and Practice (Gardner and Brindis), as a guide, core principles important to understanding advocacy and policy change initiatives will be discussed. Participants will also discuss options for addressing the challenges associated with evaluation practice, such as the complex and shifting context in which advocacy activities occur and the difficulty of attributing outcomes and identifying causal factors. Unique advocacy and policy change evaluation methods and measures, and their application, will be described. Lastly, the designs and insights from the six evaluation cases described in the book, ranging from an Initiative to Promote Equitable and Sustainable Transportation to a girls’ empowerment initiative, Let Girls Lead’s Adolescent Girls’ Advocacy and Leadership Initiative (AGALI), will be used to illustrate the real-life scenarios evaluators are likely to encounter.
Goele Scheers and Richard Smith
Outcome Harvesting (OH) is a participatory monitoring and evaluation approach that is used to collect (“harvest”) evidence of what has changed (“outcomes”) and then, working backwards, to determine whether and how an intervention has contributed to these changes. It is being used worldwide to robustly identify, describe, analyse, and interpret outcome-level results, regardless of whether they were pre-defined. Outcome Harvesting has proven especially useful in complex situations where it is not possible to concretely define most of what an intervention aims to achieve, or even what specific actions will be taken over a multi-year period. Its use helps ensure that outcome-level evaluations are firmly and transparently based on credible evidence as defined by the needs of primary users.
With the publication of the first book on Outcome Harvesting, OH’s lead developer, Ricardo Wilson-Grau, shared much of his experience and his reasoning on why and how it is always necessary to adapt OH to each application. This training will be delivered by two experienced OH users who contributed to the book. It is intended for those wishing to start working with Outcome Harvesting who need an understanding of the six OH steps and want to develop their skills in identifying and formulating outcomes through practical exercises. The two facilitators have seen increasing demand from organisations and consultants within Europe to be trained in Outcome Harvesting. In Denmark, for instance, where the EES conference will take place, Outcome Harvesting has been widely adopted among CSOs, creating a continuing demand for training of new personnel.
Andy Rowe, Patricia Rogers
While there is now greater awareness of and willingness to address environmental sustainability in evaluations, there is considerable uncertainty and concern about how to do this. This workshop is designed to support evaluation practitioners and commissioners and will provide the approaches, tools, and insights needed to incorporate consideration of environmental impacts and sustainability into evaluations that currently focus only on human systems. The workshop seeks to advance the contributions of evaluation to a sustainable planet by joining the considerable knowledge and experience of participants with the tools and approaches developed by Footprint Evaluation. The benefits will flow both ways: deeper knowledge and expertise for participants and Footprint members alike, and improvements to Footprint’s current and future tools, methods, and processes.
The workshop will cover evaluation undertakings from start to end, from commissioning and designing through to reporting and communications. It will provide a checklist to determine whether you should include natural systems in an evaluation; approaches to incorporating biophysical science knowledge and to identifying and engaging key stakeholders from both systems; key evaluation questions that will enable you to use evaluation criteria such as the OECD-DAC to explicitly address both human and natural systems; guidance on mapping the reach of the evaluand to natural as well as human systems and using this mapping to incorporate sustainability into the Theory of Change; a typology and rubrics to evaluate the impact of your evaluand on natural systems; information sources and methods not usually included in evaluation, such as modelling, GIS, scenario simulations, and storytelling; and guidance on addressing the temporal and spatial scales relevant for both human and natural systems. These tools and approaches will enable evaluation practitioners to address sustainability in their evaluations.
Scott Chaplowe, Silva Ferretti
The concept of a “new norm” is an oxymoron, implying a new equilibrium in which life will resume a degree of regularity. Our world is rapidly changing, and the “new norm” will be increasing disruption. With the increased frequency and magnitude of change, humanitarian and development evaluation is being pushed beyond the boundaries of conventional methodologies to explore alternative approaches that embrace the volatility, uncertainty, complexity, and ambiguity (VUCA) that characterize the complex contexts in which interventions are delivered and evaluated. Periodic, event-based evaluations packaged according to the common baseline-midterm-final recipe are not fit for purpose when implementation must respond and adapt to rapidly changing needs.
This workshop will explore the different ways evaluation can be pursued to support emergent, real-time learning to understand, analyze, and act in rapidly changing and unfamiliar contexts. It will stress monitoring as evaluation (MasE) to support course correction and adaptive management. This includes a variety of approaches for timely evaluation to assess and respond quickly to contextual changes and to determine whether planned results are occurring as intended. MasE also interrogates unplanned consequences for the larger human and natural ecosystem so that responses can be timely, whether to capitalize on opportunities (positive unintended outcomes) or mitigate damage (negative unintended outcomes). The workshop will highlight how real-time communication of evaluation evidence and findings can increase responsiveness and uptake, and the critical importance of participatory engagement so that key stakeholders can interpret and respond in real time.
Alena Lappo, Mariana Branco, Taruna Gupta
High-quality, relevant evaluations used for evidence-based policy making are vital for achieving sustainable and equitable development, especially in the context of the COVID-19 pandemic. To produce quality evaluations, competent and skilled evaluators must be available. However, an ongoing challenge remains for evaluation across the globe: the pool of talented evaluators is at times shallow, and demand exceeds supply. Encouraging youth to enter the evaluation profession has been recognized as one effective strategy to address this, reflected in the inauguration of the EvalYouth movement in 2015.
Developing the evaluation capacities of young and emerging evaluators (YEEs) must be based on a systemic approach that takes into account three interdependent levels (individual, institutional, and the external enabling environment) and two components (demand and supply). The EvalYouth Global Network, the UNFPA Evaluation Office, the P2p+ initiative, and the Global Evaluation Initiative formed a partnership to develop a training programme on career development in M&E for YEEs. This workshop builds on the partnership’s training products and on the EES competency frameworks to develop capabilities for evaluation.
A module of this training will be delivered at the EES conference as a one-day workshop and will provide YEEs with an overview of the evaluation landscape and possible career paths in evaluation. It will also offer practical tips on participating successfully in the conference.
Gunjan Veda, Matthew Cruse, Molly Wright
As the decolonization and locally-led development agendas gain traction globally, there is an increased focus on the need for Community-led Monitoring and Evaluation (ColMEL). Two recent studies by the Movement for Community-led Development (MCLD), a global consortium of 1,500+ local civil society organizations and international NGOs, clearly demonstrate that, in order to be truly community-led, organizations need to rethink the way they conduct monitoring and evaluation.
If communities are to lead their own development, they need to know how they are doing, which solutions are working, which are not, and how to improve them. This means that communities and community-based organizations have to be part of all stages of the MEL cycle, from deciding what programs should be evaluated for, how, and by whom, through to data analysis, validation, and decisions on how to use the results. This requires a radical shift in the way we think about evaluations: they should not be an instrument to measure “human worth, motivation or achievement,” but rather one to support learning and continuous improvement. This full-day workshop by the Movement for Community-led Development will be divided into three parts: a) CLD and ColMEL: basic principles; b) tools for evaluating CLD programming; and c) a simulation applying the tools to strengthen CLD practice and evaluations. Participants will leave the workshop with a strengthened understanding of CLD and ColMEL and with tools that will enable them to put these into practice in their everyday work. These tools were developed by a multi-country, multi-organizational collaborative research team and are currently being used by organizations across the globe. This is a certified course offered by MCLD, and all participants who complete the workshop will receive a training certificate from MCLD.
Tamara Walser, Mike Trevisan
The purpose of this workshop is to explore and engage the transformative use and potential of evaluability assessment (EA). EA was developed in the 1970s as a pre-evaluation activity for determining whether a program was ready for outcome evaluation, with a focus on management as the primary intended users. Much like evaluation in general, EA theory and practice have evolved to address the complex needs of programs and their communities. No longer tied exclusively to management decisions about outcome evaluation, EA can be used as a collaborative evaluation approach at any point in a program’s lifecycle. Transforming our understanding and application of EA unlocks its potential to engage program and organization communities in evaluation, address program complexity, support culturally responsive and equity-focused evaluation, and build evaluation capacity. Using our four-component EA model as a framework, and through examples, case scenarios, small group activities, and discussion, workshop participants will consider and apply a transformative EA approach.
In our book, Evaluability Assessment: Improving Evaluation Quality and Use Across Disciplines (Trevisan & Walser, 2015), we introduce a four-component EA model that bridges historical conceptions of EA with current EA theory and use. This workshop will respond to Theme 4, Transforming Evaluation Methods. We will provide a brief overview of current EA theory and practice, including its resurgence across disciplines and globally. The focus of the workshop will be implementing transformative EA using our four-component model as a guiding framework. We will use examples, case scenarios, small group activities, and discussion to allow participants to engage with the content and gain insight into how they can consider and apply a transformative EA approach in their work.
Burt Perrin, Martha McGuire
Evaluators are often “requested” to make changes to their reports, for a variety of possible reasons. Under what circumstances is this appropriate, or not? What can evaluators and evaluation managers do to prevent inappropriate requests for changes, and how can a satisfactory solution be reached if such requests do occur? This session will discuss steps that both internal and external evaluators can take to harness and effectively manage conflict, in such a way that both the integrity and the utility of evaluation are respected. More specifically, this session will discuss the following:
The session will focus on the practical, so that participants leave with ideas they can implement. It will highlight the importance of the art of evaluation, such as the ability to make sometimes difficult choices, which requires evaluators to have competencies in interpersonal skills, facilitation, and negotiation.
Leah Christina Neubauer, Thomas Archibald
This interactive skill-building workshop will introduce Communities of Practice (CoPs) and demonstrate their application as a methodology for interrogating one’s evaluation practice, the evaluation profession, and evaluators’ roles in society. Increasingly, evaluators are called to evaluate and participate in CoPs in in-person or virtual global settings. Grounded in critical adult education and transformative learning, this session will focus on CoPs that engage learners in a process of knowledge construction and unlearning/relearning around common interests, ideas, passions, and goals. Participants will develop a CoP framework that includes the three core CoP elements (domain, community, practice) and processes for generating a shared, accessible repertoire of knowledge and resources. The three core elements and the framework will provide a foundation for discussing monitoring, evaluation, and learning (MEL) and evaluative thinking. Co-facilitators will highlight examples of CoP implementation in MEL from across the globe in development, education, and community health, through lenses of transformation. Participants will engage in a series of hands-on, inquiry-oriented techniques, analyzing how CoPs can be operationalized in their evaluation practice.
Daniela Schroeter, Daniela Zahn
The Success Case Method (SCM) is typically classified as a method-centered approach to evaluation, yet it allows theory-driven, utilization-focused, participatory, and transformative ways of thinking to be incorporated into impact evaluation. It uses mixed-methods questionnaires and interviews to identify high- and low-impact examples, with the ultimate goal of informing process improvement and transforming the organizational impacts of a program. In contrast to traditional impact evaluation methods, such as randomized controlled trials and strong quasi-experimental designs, the SCM provides a fast and cost-effective way to identify barriers to impact and opportunities for improvement during the development and implementation stages of interventions. Its use can increase the quality of interventions and reduce waste in developmental implementations that are awaiting experimental results. Developed for use in the field of training and development (Brinkerhoff, 2003, 2005), the Success Case Method has potential in a broad range of program evaluation contexts (e.g., Coryn, Schröter, & Hansen, 2009; Piggot-Irvine, Aitken, & Marshall, 2009; Olson, Shershneva, & Brownstein, 2011; Medina, Acosta-Perez, Velez, Martinez, Rivera, Sardinas, & Pattatucci, 2015).
The purpose of this hands-on workshop is to introduce the SCM and provide opportunities to practice elements of the method during the workshop. By the end of the workshop, participants will understand the steps involved in applying the SCM, create a theory-driven impact model based on a case, draft question sets suitable for identifying high and low success cases via web-based questionnaires, and develop possible interview questions for documenting stories of success and opportunities for improvement. The workshop concludes with a discussion of the strengths, limitations, and opportunities associated with using the SCM within its traditional context of training and development as well as in alternative program evaluation contexts. Participants will receive materials in advance and are asked to bring their laptops for use in the workshop.
Historically, organizations have conducted and used evaluation to meet internal and external accountability demands, with approaches focused on impact assessment and value for money. In practice, a rigid focus on accountability-oriented objectives can lead to evaluation outcomes that are at best symbolic. Yet we know from research and practice that evaluations which contribute significantly to learning about program functioning and context tend to achieve higher degrees of evaluation use and provide more credible, actionable outcomes. They can be used, for example, to improve the effectiveness and enhance the sustainability of interventions. The focus of this workshop is the value of the learning function and how this value can be enhanced through practical strategies. Participants can expect to develop their understanding of monitoring, evaluation and learning (MEL) strategies and approaches designed to leverage learning and program decision making in the context of mainstream and international development initiatives. Divergent and convergent strategies for conceptualizing and planning evaluations in the interest of program improvement will be covered.
Equitable Data Storytelling is a key component in making paradigmatic shifts toward advancing greater equity and collective well-being. This workshop will teach attendees how to use narrative stories, quantitative and qualitative data, and historical context to clearly identify and visualize key findings and concrete actions. This workshop is specifically aligned with two of the EES 2022 Conference themes: 1) Identity Shift: Transforming Evaluators, and 2) Methodological Shift: Transforming Methodologies. The alignment is clarified below:
1) Identity Shift: Transforming Evaluators. Data is not neutral. Data and stories have been collected by certain people for certain people, all of whom carry their own priorities and biases. We have an opportunity to use the data we collect to empower the communities we serve. Teaching evaluators how to use data to tell equitable stories is pivotal in the shift from neutral observers to advocates and truth speakers. This workshop will teach evaluators how to use data as part of the solution. 2) Methodological Shift: Transforming Methodologies. Evaluators often have access to a great deal of data, and it is hard to know how to transform endless spreadsheets into meaningful and engaging deliverables. It is also a difficult endeavor to make system-level data applicable to local solutions. This workshop will walk through Pivot’s approach of framing data with essential historical context, so that data can be integrated into a story rather than remaining just numbers on a page. The strategies taught in this session will help evaluators learn to develop data stories that are accessible to agents of change and decision-makers.